# On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval
Abstract
Visually-rich document entity retrieval (VDER), which extracts key information (e.g., date, address) from document images such as invoices and receipts, has become an important topic in industrial NLP applications. The emergence of new document types at a constant pace, each with its unique entity types, presents a unique challenge: many documents contain unseen entity types that occur only a couple of times. Addressing this challenge requires models to learn entities in a few-shot manner. However, prior works on few-shot VDER mainly address the problem at the document level with a predefined global entity space, which does not account for the entity-level few-shot scenario: target entity types are locally personalized by each task, and entity occurrences vary significantly among documents. To address this unexplored scenario, this paper studies a novel entity-level few-shot VDER task. The challenges lie in the uniqueness of the label space for each task and the increased complexity of out-of-distribution (OOD) contents. To tackle this novel task, we present a task-aware meta-learning based framework, with a central focus on achieving effective task personalization that distinguishes between in-task and out-of-task distributions. Specifically, we adopt a hierarchical classifier (HC) and employ contrastive learning (ContrastProtoNet) to achieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost future research in the field of entity-level few-shot VDER. Experimental results demonstrate that our approaches significantly improve the robustness of popular meta-learning baselines.

${}^{\star}$ Work done when Jiayi Chen interned at Google. ${}^{\diamond}$ Corresponding author.
1 Introduction
Table 1: Comparison on task formulations and application scenarios.
Visually-rich Document Understanding (VrDU) aims to analyze scanned documents composed of structured and organized information. As a sub-problem of VrDU, the goal of Visually-rich Document Entity Retrieval (VDER) is to extract key information (e.g., date, address, signatures) from document images such as invoices and receipts using complementary multimodal information Xu et al. (2021); Garncarek et al. (2021); Lee et al. (2022). In real-world VDER systems, new document types emerge at a constant pace, each with its unique entity space (i.e., the set of entity categories to extract from the document). This poses a substantial challenge: a large number of documents lack sufficient annotations for their unique entity types, which are referred to as few-shot entities. To tackle this, Few-shot Visually-rich Document Entity Retrieval (FVDER) has become a crucial research topic.
Despite the importance of FVDER, there has been a limited amount of prior work in this area. Recent efforts have leveraged pre-trained language models Wang and Shang (2022) or prompt mechanisms Wang et al. (2023b) to obtain transferable knowledge from a source domain and apply it to a target domain, where a small number of document images are labeled for fine-tuning. These prior works address the few-shot problem at the document-level granularity, assuming a globally predefined entity space and balanced entity occurrences across documents. However, in certain real-world scenarios, the few-shot challenge can also manifest at the entity level, i.e., the number of entity occurrences in labeled documents is limited, in situations where entity classes are locally specialized by each user (task) and their occurrences are significantly imbalanced across documents. In such scenarios, given only few-shot entity annotations, prior methods struggle to (1) efficiently achieve model personalization on each task-specific label space and (2) effectively handle the increased complexity of out-of-distribution contents.
To provide a complementary research perspective alongside the existing document-level work, in this paper we initiate the investigation of the unexplored entity-level few-shot VDER. To begin with, we formulate an $N$ -way soft- $K$ -shot VDER task setting together with a distribution of individual tasks, which simulates the application scenario of such tasks: the user or annotator of each few-shot task is only interested in $N$ personalized entity types, and the number of labelled entity occurrences in a task lies within a flexible range determined by $K$ shots. Table 1 summarizes the differences in application scenarios between prior works and ours.
Then, to tackle the limitations of prior methods on this new task, we adopt a meta-learning based framework built upon pretrained language models, along with several proposed techniques for achieving task personalization and handling out-of-task distribution contents. With the help of the meta-learning paradigm, (1) the learning experiences on example tasks can be effectively utilized, and (2) the domain gap between the pre-trained model and novel FVDER tasks is largely reduced, promoting quicker and more effective fine-tuning on future novel entity types. Yet we found that popular meta-learning algorithms (Finn et al., 2017; Snell et al., 2017; Chen et al., 2021) are still not robust to the entity-level $N$ -way soft- $K$ -shot VDER tasks. The difficulty is that the background context that does not belong to the task-personalized entity types consumes most of the predictive effort, and such noisy contextual information varies substantially across tasks and documents. To address this, we propose task-aware meta-learning techniques (ContrastProtoNet, ANIL+HC, etc.) that make the meta-learners aware of the multi-mode contextual out-of-task distribution and achieve fast adaptation to the task-personalized entity types.
Furthermore, we present a new dataset, named FewVEX, which comprises thousands of entity-level $N$ -way soft- $K$ -shot VDER tasks. We also introduce an automatic dataset generation algorithm, XDR, designed to facilitate future extensions of FewVEX, such as expanding the number of document types and entity types. Specifically, we set an upper bound on entity occurrences and sample across the training documents in a way that guarantees soft-balanced few-shot annotations: the training documents of a task are selected such that they cooperatively contain a bounded number of entity occurrences per entity type.
Our contributions are summarized as follows. (1) To the best of our knowledge, this paper is the first attempt to study the few-shot VDER problem at the entity level, providing a complementary research perspective in addition to the existing document-level works. (2) We propose a meta-learning based framework for solving the newly introduced task. As vanilla meta-learning approaches have limitations on this task, we propose several task-aware meta-learners that enhance task personalization by dealing with the out-of-task distribution. (3) Experimental results on FewVEX demonstrate that our proposed approaches significantly improve the performance of baseline methods.
Figure 1: Proposed task formulation and problem setting. Different colors represent different entity types. The pie chart split on the left indicates that the target classes in testing tasks are not seen in training tasks. On the right area, we show an example 3-way soft-2-shot task. In this example, $\rho=2$ .
2 Entity-level Few-shot VDER Setting
General VDER.
A document image is processed through Optical Character Recognition (OCR) Chaudhuri et al. (2017) to form a sequence of tokens $X=[\mathbf{x}_{1},\mathbf{x}_{2},…,\mathbf{x}_{L}]$ , where $L$ is the sequence length and each token $\mathbf{x}_{l}$ is composed of multiple modalities $\mathbf{x}_{l}=\{\mathbf{x}_{l}^{(v)},\mathbf{x}_{l}^{(p)},\mathbf{x}_{l}^{(b)},...\}$ , such as the token id ( $v$ ), the 1d position ( $p$ ) of the token in the sequence, and the bounding box ( $b$ ) representing the token's relative 2d position and scale in the image. The goal is to predict $Y=[y_{1},y_{2},…,y_{L}]$ , which assigns each token $\mathbf{x}_{l}$ a label $y_{l}$ indicating that the token either belongs to one of a set of predefined entity types or belongs to no entity (denoted as the O class).
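To make the input/output format concrete, here is a minimal illustrative sketch; the field names (`token_id`, `pos_1d`, `bbox`) and values are our own placeholders, not tied to any specific OCR toolkit.

```python
# Minimal sketch of the general VDER input/output format.
# Field names (token_id, pos_1d, bbox) are illustrative placeholders.
tokens = [
    {"token_id": 1012, "pos_1d": 0, "bbox": (0.10, 0.05, 0.18, 0.07)},  # "Invoice"
    {"token_id": 2289, "pos_1d": 1, "bbox": (0.32, 0.05, 0.45, 0.07)},  # "2023-05-01"
    {"token_id": 7043, "pos_1d": 2, "bbox": (0.10, 0.10, 0.25, 0.12)},  # "Acme"
]
# One label per token: a predefined entity type or the background O class.
labels = ["O", "date", "O"]
assert len(labels) == len(tokens)
```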
$N$ -way Soft- $K$ -shot VDER in Entity Level.
In real-world Few-shot VDER systems with label scarcity, individual users often show their personal interests in a small number of new entity types. Such user-dependent task personalization gives rise to a novel entity-level task formulation of Few-shot VDER. Formally, an entity-level $N$ -way soft- $K$ -shot VDER task $\mathcal{T}=\{S,Q,\mathcal{E}\}$ consists of a train (support) set $S$ containing $M_{s}$ documents, a test (query) set $Q$ containing $M_{q}$ documents, and a target class set $\mathcal{E}$ containing $N$ target entity types
$$
\begin{split}S&=\{(X_{1},Y_{1}),…,(X_{M_{s}},Y_{M_{s}})\}\\
Q&=\{X^{*}_{1},X^{*}_{2},…,X^{*}_{M_{q}}\}\\
\mathcal{E}&=\{e_{1},e_{2},…,e_{N}\},\end{split} \tag{1}
$$
where $X_{j}=[\mathbf{x}_{j1},\mathbf{x}_{j2},...,\mathbf{x}_{jL}]$ is the sequence of multimodal token features of document $j$ , $Y_{j}=[y_{j1},y_{j2},...,y_{jL}]$ is the sequence of token labels corresponding to $X_{j}$ , and $e_{c}$ denotes the $c$ -th entity type in $\mathcal{T}$ . “ $N$ -way ” refers to the $N$ unique entity types the user is interested in, reflecting task personalization. It is important to highlight that within the documents of $S$ and $Q$ , there may exist entities that fall outside the $N$ target classes ( $e^{\prime}∉\mathcal{E}$ ). These entity types come from the out-of-distribution relative to what the task $\mathcal{T}$ aims to train on; they do not attract user interest, remain unlabeled, and are thus treated as the background O class. “ Soft- $K$ -shot ” means that, among the $M_{s}$ labelled documents in $S$ , the total number of occurrences of each entity type $e∈\mathcal{E}$ is within a range $K\sim\rho K$ , where $\rho>1$ is the softening hyperparameter. An entity occurrence is defined as a contiguous subsequence in the document whose labels share the same entity type. We do not impose a strict constraint on the exact count $K$ since the entity-level personalization scenario implies that the frequency of entity occurrences may vary dramatically from one document to another, which makes a strict limit difficult to set. For instance, an entity type may occur more frequently in some documents and less so in others. The right area of Figure 1 shows an example $N$ -way soft- $K$ -shot VDER task. The goal of task $\mathcal{T}$ is to obtain a model that assigns each token either one of $\mathcal{E}$ (task-personalized entity types) or O (background or out-of-task entity types), based on the few labeled entity occurrences for those in $\mathcal{E}$ in the support set $S$ , such that the model achieves high performance on the query set $Q$ .
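As a concrete check of the soft- $K$ -shot condition, the following Python sketch (our own illustrative code; the helper names are hypothetical) counts entity occurrences as contiguous label runs and verifies that each target type occurs between $K$ and $\rho K$ times in a support set:

```python
def count_occurrences(label_seqs, entity):
    """Count contiguous runs labelled `entity` across all documents."""
    total = 0
    for labels in label_seqs:
        prev = None
        for y in labels:
            if y == entity and prev != entity:  # a new run starts here
                total += 1
            prev = y
    return total

def is_soft_k_shot(support_labels, targets, K, rho):
    """Soft-K-shot condition: K <= #occurrences <= rho*K per target type."""
    return all(K <= count_occurrences(support_labels, e) <= rho * K
               for e in targets)

support = [["O", "date", "date", "O", "total"],
           ["date", "O", "total", "total", "O"]]
# 2 occurrences of "date" and 2 of "total": valid 2-way soft-2-shot (rho=2).
assert is_soft_k_shot(support, {"date", "total"}, K=2, rho=2)
```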
Figure 2: The proposed task-aware meta-learning framework. The framework is applicable to both the metric-based method (aiming to learn $\phi$ ) and gradient-based method (aiming to learn $\{\phi,\psi\}$ ) .
Distribution over FVDER Tasks.
Based on the above formulation for a single FVDER task, we further formulate a task distribution $P(\mathcal{T})$ over FVDER task s. Assume there is a large pool of entity types $\mathcal{C}$ corresponding to the domain of $P(\mathcal{T})$ . For any task $\mathcal{T}_{i}=\{S_{i},Q_{i},\mathcal{E}_{i}\}\sim P(\mathcal{T})$ , its target entity types come from the class pool $\mathcal{E}_{i}⊂\mathcal{C}$ .
Global Objective.
The global objective is to train a meta-learner for $P(\mathcal{T})$ such that any task $\mathcal{T}_{i}\sim P(\mathcal{T})$ can take advantage of it and then quickly obtain a good task-personalized model. Following Finn et al. (2017); Chen et al. (2021), to train the meta-learner, we simulate a meta-level dataset consisting of example FVDER task instances from $P(\mathcal{T})$ . Figure 1 shows an overview of the dataset simulation of $P(\mathcal{T})$ . Specifically, a meta-learner is trained from the experiences of solving a set of meta-training tasks $\mathcal{D}_{meta}^{trn}=\{\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{\tau_{trn}}\}$ over a set of base classes $\mathcal{C}_{base}⊂\mathcal{C}$ , where each training task draws its target classes from the base classes, $\mathcal{E}_{i}⊂\mathcal{C}_{base}$ . The experiences are given in the form of the ground-truth labels of query sets; that is, the query sets of training tasks are treated as validation sets, $Q_{i}=\{(X^{*}_{j},Y^{*}_{j})\}_{j=1}^{M_{qi}}$ for $∀\mathcal{T}_{i}∈\mathcal{D}_{meta}^{trn}$ . To evaluate the performance of the meta-learner on solving FVDER tasks with novel entity types $\mathcal{C}_{novel}=\mathcal{C}\setminus\mathcal{C}_{base}$ , we individually train a set of meta-testing tasks $\mathcal{D}_{meta}^{test}=\{\mathcal{T}^{*}_{1},\mathcal{T}^{*}_{2},...,\mathcal{T}^{*}_{\tau_{tst}}\}$ , where each testing task satisfies $\mathcal{E}^{*}_{i}⊂\mathcal{C}_{novel}$ . The query sets of meta-testing tasks are unlabelled testing data, that is, $Q^{*}_{i}=\{X^{*}_{j}\}_{j=1}^{M_{qi}^{*}}$ , $∀\mathcal{T}^{*}_{i}∈\mathcal{D}_{meta}^{test}$ .
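The base/novel class split and per-task target sampling described above can be sketched as follows (an illustrative Python sketch; the function names are ours, and the numbers merely echo the FewVEX setting of 26 entity types):

```python
import random

def split_classes(all_classes, num_base, seed=0):
    """Disjoint base/novel split: meta-test tasks use only novel classes."""
    rng = random.Random(seed)
    pool = sorted(all_classes)
    rng.shuffle(pool)
    return set(pool[:num_base]), set(pool[num_base:])

def sample_task(class_pool, n_way, seed=None):
    """Draw the target class set E_i of an N-way task from the given pool."""
    rng = random.Random(seed)
    return set(rng.sample(sorted(class_pool), n_way))

classes = {f"type_{c}" for c in range(26)}        # |C| = 26, as in FewVEX
base, novel = split_classes(classes, num_base=20)
train_targets = sample_task(base, n_way=5, seed=1)
assert train_targets <= base and base.isdisjoint(novel)
```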
3 Methodology
We propose a meta-learning (i.e., learning-to-learn) framework to solve entity-level few-shot VDER tasks. In contrast to the recent advances based on pre-training or prompts Wang and Shang (2022); Wang et al. (2023b), meta-learning significantly promotes quick adaptation and improves model personalization on task-specific entity types.
The proposed framework consists of three components: (1) a multimodal encoder (Section 3.1) that encodes the document images within a task into a task-dependent embedding space (Section 3.2); (2) a token labelling function (Section 3.3); and (3) a meta-learner built upon the encoder-decoder model, where we propose two task-personalized meta-learning methods (Section 3.4). Figure 2 shows an overview of the framework.
3.1 Multimodal Encoder
We consider an encoder network represented by a parameterized function $f^{enc}_{\phi}$ with parameters $\phi$ . The encoder aims to capture the cross-modal semantic relationships between tokens in a document image. To achieve this, we employ a BERT-base Transformer Kenton and Toutanova (2019) with an additional positional embedding layer for the 2d position of each input token, through which the complex spatial structure of the input document can be incorporated and then interacted with the textual contents via attention mechanisms. The embedding of token $l$ in the document image $j$ of task $\mathcal{T}_{i}$ is computed as $\mathbf{h}_{ijl}=f^{enc}_{\phi}(\mathbf{x}_{ijl}|X_{ij}).$ In practice, before meta-training, the multimodal Transformer is pretrained on the IIT-CDIP dataset Harley et al. (2015). Details can be found in Appendix C.1.
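As a simplified illustration of how the modalities enter the encoder, the sketch below sums token-id, 1d-position, and quantized 2d-coordinate embeddings before the Transformer layers. This is a toy NumPy sketch under our own simplifying assumptions (tiny hidden size, one shared coordinate table for the four box coordinates); the actual model is the pretrained multimodal Transformer described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # toy hidden size (BERT-base uses 768)
vocab_emb = rng.normal(size=(100, d))    # token-id embedding table
pos1d_emb = rng.normal(size=(512, d))    # 1d-position embedding table
coord_emb = rng.normal(size=(1000, d))   # shared table for quantized coords

def embed_token(token_id, pos_1d, bbox):
    """Sum of modality embeddings fed into the Transformer. Simplification:
    the 2d position uses one shared table over quantized (x0, y0, x1, y1)."""
    e = vocab_emb[token_id] + pos1d_emb[pos_1d]
    for coord in bbox:                   # coords in [0, 1] -> bins [0, 999]
        e = e + coord_emb[int(coord * 999)]
    return e

h = embed_token(token_id=42, pos_1d=3, bbox=(0.1, 0.05, 0.2, 0.08))
assert h.shape == (d,)
```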
3.2 Task-dependent Embedding Space
Through the multimodal encoder, each task $\mathcal{T}_{i}$ is encoded into a task-dependent embedding space. As illustrated in Figure 2, this space contains all the token embeddings in the task: $H_{i}=\{\mathbf{h}_{ijl}|l∈[L],(X_{j},Y_{j})∈ S_{i}\cup Q_{i}\}$ .
The task’s embedding space has several properties: (1) First, in addition to the in-task distribution (ITD) entities from the target classes, there exists a large portion (nearly 90% as observed in our dataset FewVEX) of out-of-task distribution (OTD) entities or background, which serve as the context for ITD entities but dominate the task’s embedding space. (2) Second, the OTD entities follow a multi-mode distribution $P_{i}^{\texttt{OTD}}$ that consists of several unimodal distributions, each representing an outlier entity type outside the ITD. (3) Finally, it is not guaranteed that each unimodal component of $P_{i}^{\texttt{OTD}}$ is observable in the train set $S_{i}$ : in many cases, an OTD entity type occurs in the query documents but is absent from the support documents. To sum up, the OTD distribution in an $N$ -way $K$ -shot FVDER task is complex, dominates the entire task, and may vary between documents.
3.3 Token Labelling
On the basis of the task-dependent embedding space, the token labelling or decoding process can either leverage a parameterized decoder $f^{dec}_{\psi}$ that acts as the classification head, or rely on non-parametric methods, like nearest neighbors.
3.4 Task-aware Meta Learners
We consider two main categories of the meta-learning approaches: the gradient-based and the metric-based meta-learning, on each of which we propose our own methods. We specifically pay attention to two properties when solving the entity-level $N$ -way $K$ -shot FVDER tasks: 1) Few-shot out-of-task distribution detection, which aims to distinguish the ITD (i.e., the target $N$ entity types) against the OTD (i.e., background or any outlier entity type). 2) Few-shot token labelling for in-task distribution tokens, which assigns each ITD token to one of the $N$ in-task entity types.
3.4.1 Task-aware ContrastProtoNet
We first focus on metric-based meta-learning Snell et al. (2017); Oreshkin et al. (2018). The goal is to learn meta-parameters $\phi$ for the encoder network, generally shared by all tasks $\mathcal{T}_{i}\sim P(\mathcal{T})$ , such that, on each task’s specific embedding space, the distances between token points in $S_{i}$ and $Q_{i}$ are measured by some metrics, e.g., Euclidean distances.
ProtoNet with or without Estimated OTD.
One of the most popular and effective metric-based meta-learning methods is the Prototypical Network (ProtoNet) Snell et al. (2017). For each FVDER task $\mathcal{T}_{i}=\{S_{i},Q_{i},\mathcal{E}_{i}\}$ , the prototype for each entity type $e∈\mathcal{E}_{i}$ can be computed as the mean embedding of the tokens from $S_{i}$ belonging to that entity type, that is, $\boldsymbol{\mu}_{i,e}=\frac{1}{|I^{\texttt{trn}}_{e}|}\sum_{(j,l)∈ I^{\texttt{trn}}_{e}}\mathbf{h}_{ijl}$ , where $I^{\texttt{trn}}_{e}$ is the collection of token indices for the type- $e$ tokens in the support set. For the out-of-task distribution (OTD), one may consider estimating its mean embedding as an extra O -type prototype: $\overline{\boldsymbol{\mu}}_{i}=\frac{1}{|I^{\texttt{trn}}_{\texttt{OTD}}|}\sum_{(j,l)∈ I^{\texttt{trn}}_{\texttt{OTD}}}\mathbf{h}_{ijl}$ .
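The prototype computation amounts to a per-class mean over support-token embeddings; a minimal NumPy sketch on toy 2-d embeddings of our own:

```python
import numpy as np

def prototypes(H, labels, targets):
    """mu_e = mean embedding of support tokens labelled e (ProtoNet)."""
    return {e: H[labels == e].mean(axis=0) for e in targets}

H = np.array([[0., 0.], [2., 2.], [4., 0.], [6., 2.], [9., 9.]])
labels = np.array(["date", "date", "total", "total", "O"])
mu = prototypes(H, labels, targets={"date", "total"})
# An extra O prototype can be estimated from the background tokens.
mu["O"] = H[labels == "O"].mean(axis=0)
assert np.allclose(mu["date"], [1., 1.])
```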
Challenges.
A problem of the vanilla methods is that there is no specific mechanism distinguishing the ITD entities from the OTD entities, which are weakly supervised and only partially observed from a multi-mode distribution $P_{i}^{\texttt{OTD}}$ . The prototype $\overline{\boldsymbol{\mu}}_{i}$ is a biased estimate of the mean of $P_{i}^{\texttt{OTD}}$ , and the covariance of $P_{i}^{\texttt{OTD}}$ can be larger than that of any ITD class. In consequence, the task-specific ITD classes may not be clearly separated from the OTD classes in the task-dependent embedding space, and most tokens will be misclassified.
Regarding the above challenges, we propose a task-aware method that adopts two techniques to boost the performance.
Meta Contrastive Loss.
During meta-training, we encourage the $N$ ITD entity types to be distinguished from each other as well as kept far away from any unimodal component of the OTD. To achieve this, we adopt the idea of supervised contrastive learning Khosla et al. (2020) to compute a meta contrastive loss (MCON) from each task, which is further used to compute meta-gradients for updating the meta-parameters $\phi$ . Intuitively, our meta-objective is that the query tokens of ITD type- $e$ should be pushed away from any OTD tokens and from other types of ITD tokens within the same task, and should be pulled towards the prototype $\boldsymbol{\mu}_{i,e}$ of support tokens and the other query tokens belonging to the same entity type. Formally, let $I^{\texttt{val}}_{\texttt{ITD}}=\{(j,l)\,|\,l∈[L],(X^{*}_{j},Y^{*}_{j})∈ Q_{i},y^{*}_{ijl}∈\mathcal{E}_{i}\}$ denote the collection of ITD validation tokens. The meta contrastive loss computed from $\mathcal{T}_{i}$ is
$$
\begin{split}&\mathcal{L}_{i}^{\texttt{MCON}}=\sum_{(j,l)\in I^{\texttt{val}}_{\texttt{ITD}}}\frac{-1}{|A^{+}(j,l)|}\sum_{\mathbf{v}\in A^{+}(j,l)}a_{ijl}(\mathbf{v})\\
&a_{ijl}(\mathbf{v})=\log\frac{\exp(\mathbf{h}_{ijl}^{\top}\mathbf{v})}{\sum_{\mathbf{u}\in A(j,l)}\exp(\mathbf{h}_{ijl}^{\top}\mathbf{u})}.\end{split} \tag{2}
$$
For each anchor, i.e., the ITD validation token $l$ in document $j$ , we let $A^{+}(j,l)=\{\mathbf{h}_{irm}\,|\,(r,m)∈ I^{\texttt{val}}_{\texttt{ITD}}\setminus\{(j,l)\},y^{*}_{ijl}=y^{*}_{irm}\}\cup\{\boldsymbol{\mu}_{i,e}\,|\,e∈\mathcal{E}_{i},y^{*}_{ijl}=e\}$ denote the collection of positive embeddings/prototypes for the anchor, and let $A(j,l)=\{\mathbf{h}_{irm}\,|\,(r,m)∈ I_{\texttt{ALL}}\setminus\{(j,l)\}\}\cup\{\boldsymbol{\mu}_{i,e}\}_{e∈\mathcal{E}_{i}}$ contain all the ITD/OTD embeddings and prototypes ( $I_{\texttt{ALL}}=\{(j,l)\,|\,l∈[L],(X_{j},Y_{j})∈ S_{i}\cup Q_{i}\}$ ) in $\mathcal{T}_{i}$ .
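A minimal NumPy sketch of Eq. (2) on toy data. This is our own simplification (single task, no temperature scaling, anchors indexed directly into the full embedding matrix), not the training implementation:

```python
import numpy as np

def meta_contrastive_loss(H, y, itd_idx, protos):
    """Sketch of Eq. (2). H: all token embeddings in the task (support+query);
    y: their labels ("O" for OTD); itd_idx: indices of ITD query tokens used
    as anchors; protos: {entity_type: prototype vector}."""
    P = np.stack([protos[e] for e in protos])
    loss = 0.0
    for i in itd_idx:
        h = H[i]
        # A(j,l): every other token embedding plus all prototypes.
        cand = np.concatenate([np.delete(H, i, axis=0), P])
        log_Z = np.log(np.exp(cand @ h).sum())
        # A+(j,l): other anchors of the same class, plus that class prototype.
        pos = [H[k] for k in itd_idx if k != i and y[k] == y[i]]
        pos.append(protos[y[i]])
        loss += -sum(v @ h - log_Z for v in pos) / len(pos)
    return loss / len(itd_idx)

H = np.array([[1., 0.], [1., 0.], [0., 1.], [-1., 0.]])
y = ["date", "date", "O", "O"]
protos = {"date": np.array([1., 0.])}
loss = meta_contrastive_loss(H, y, itd_idx=[0, 1], protos=protos)
assert loss > 0
```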
Unsupervised OTD Detector.
During the testing time for novel entity types, we adopt the nonparametric token-level nearest neighbor classifier, which assigns $\mathbf{x}_{ijl}$ the same label as the support token that is nearest in the task’s embedding space:
$$
\hat{y}^{\texttt{nn}}_{ijl}=\operatorname{argmax}_{{y}_{irm}\text{ where }(r,m)\in I^{\texttt{trn}}_{\texttt{ALL}}}\mathbf{h}_{ijl}^{\top}\mathbf{h}_{irm}, \tag{3}
$$
where $I^{\texttt{trn}}_{\texttt{ALL}}=\{(r,m)\,|\,m∈[L],(X_{r},Y_{r})∈ S_{i}\}$ . The ITD or OTD entity tokens in $Q_{i}$ should be closer to the corresponding ITD or OTD tokens in $S_{i}$ that belong to the same entity type. However, since the embedding space dependent on the support set is not sufficiently rich, the network may be blind to properties of the out-of-task distribution $P_{i}^{\texttt{OTD}}$ that turn out to be necessary for accurate entity retrieval. To tackle this, we exploit an unsupervised out-of-distribution detector Ren et al. (2021) operating on the task-dependent embedding space to assist the classifier. Specifically, we define an OTD detector: $\hat{y}_{ijl}=\texttt{O}$ if $r(\mathbf{h}_{ijl})≥ R_{i}$ ; otherwise, $\hat{y}_{ijl}=\hat{y}^{\texttt{nn}}_{ijl}$ , where $R_{i}$ is a task-dependent uncertainty threshold and $r(\mathbf{h}_{ijl})$ is the OTD score of each token, computed as its minimum Mahalanobis distance among the $N$ ITD classes: $r(\mathbf{h}_{ijl})=\min_{e∈\mathcal{E}_{i}}(\mathbf{h}_{ijl}-\boldsymbol{\mu}_{i,e})^{\top}\Omega_{i,e}^{-1}(\mathbf{h}_{ijl}-\boldsymbol{\mu}_{i,e})$ . Here, $\Omega_{i,e}=\sum_{(j,l)∈ I^{\texttt{trn}}_{e}}(\mathbf{h}_{ijl}-\boldsymbol{\mu}_{i,e})(\mathbf{h}_{ijl}-\boldsymbol{\mu}_{i,e})^{\top}$ is the covariance matrix for entity type $e$ computed from the type- $e$ tokens in the support set ( $I^{\texttt{trn}}_{e}$ ). A higher OTD score indicates that the token is more likely to belong to the background.
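The nearest-neighbor decoding of Eq. (3) combined with the Mahalanobis OTD gate can be sketched as follows (a toy NumPy sketch; the identity inverse covariance and the threshold value are illustrative assumptions):

```python
import numpy as np

def otd_aware_predict(h, support_H, support_y, mu, cov_inv, R):
    """Eq. (3) plus the unsupervised OTD gate: predict O when the minimum
    Mahalanobis distance to the ITD prototypes exceeds the threshold R;
    otherwise fall back to the nearest-neighbor label over support tokens."""
    r = min((h - mu[e]) @ cov_inv[e] @ (h - mu[e]) for e in mu)  # OTD score
    if r >= R:
        return "O"
    return support_y[int(np.argmax(support_H @ h))]              # Eq. (3)

support_H = np.array([[1., 0.], [0.9, 0.1], [0., 1.]])
support_y = ["date", "date", "O"]
mu = {"date": support_H[:2].mean(axis=0)}
cov_inv = {"date": np.eye(2)}            # toy (identity) inverse covariance
assert otd_aware_predict(np.array([1., 0.]), support_H, support_y,
                         mu, cov_inv, R=1.0) == "date"
assert otd_aware_predict(np.array([5., 5.]), support_H, support_y,
                         mu, cov_inv, R=1.0) == "O"
```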
3.4.2 Computation-efficient Gradient-based Meta-learning with OTD Detection
For gradient-based meta-learning, the goal is to learn the meta-parameters $\theta=\{\phi,\psi\}$ globally shared over the task distribution $P(\mathcal{T})$ , which can be quickly fine-tuned for any given individual task $\mathcal{T}_{i}$ .
Computation-efficient Meta Optimization.
Although MAML Finn et al. (2017) is the most widely adopted approach, the fact that it must differentiate through the fine-tuning optimization process makes it a poor candidate for Transformer-based encoder-decoder models, where we would need to store a large number of high-order gradients for the encoder. Instead, we consider two alternatives that require fewer computing resources and are more efficient. ANIL Raghu et al. (2019) employs the same bilevel optimization framework as MAML, but the encoder is not fine-tuned during the inner loop; the features from the encoder are reused across tasks, enabling rapid fine-tuning of the decoder. Reptile Nichol et al. (2018) is a first-order gradient-based approach that avoids the high-order meta-gradients. To further boost training efficiency, we exploit Federated Learning Tian et al. (2022); Chen and Zhang (2022a) for the meta-optimization of the Transformer.
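For reference, the first-order Reptile outer update reduces to moving the meta-parameters toward the average of the task-adapted parameters, with no second-order gradients; a minimal NumPy sketch (toy parameter vectors, step size of our choosing):

```python
import numpy as np

def reptile_outer_step(theta, finetuned_thetas, eps=0.1):
    """First-order Reptile meta-update: theta <- theta + eps * mean(theta_i' - theta),
    where theta_i' are the parameters after fine-tuning on task i."""
    delta = np.mean([t - theta for t in finetuned_thetas], axis=0)
    return theta + eps * delta

theta = np.zeros(3)
adapted = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]
theta = reptile_outer_step(theta, adapted, eps=0.5)
assert np.allclose(theta, [0.25, 0.25, 0.0])
```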
Task-aware Hierarchical Classifier (HC).
A vanilla classifier can achieve high performance in label-sufficient VDER. However, it turns out to be not robust in few-shot VDER tasks because of the complicated out-of-task entities: models usually either become overconfident on the $N$ ITD entity types or fail to distinguish target entities from the OTD background. For this reason, we incorporate OTD detection into the decoder and propose a hierarchical classifier with two classifiers $\psi=\{\psi_{1},\psi_{2}\}$ : 1) a binary classifier $f^{bin}_{\psi_{1}}$ , which classifies all ITD tokens against OTD ones, and 2) an entity classifier $f^{ent}_{\psi_{2}}$ , which classifies ITD tokens into one of the $N$ entity types of the task. Specifically, suppose $P_{i}^{\texttt{OTD}}$ and $P_{i}^{\texttt{ITD}}$ denote the OTD and ITD of task $\mathcal{T}_{i}$ , respectively. The probability that the token $\mathbf{h}_{ijl}$ is from OTD is $P(y_{ijl}=\texttt{O})=f^{bin}_{\psi^{\prime}_{i1}}(\mathbf{h}_{ijl})$ , which serves as the OTD score to weight the entity prediction. The probability that the token is of entity type $e$ is computed as $P(y_{ijl}=e\mid\mathbf{x}_{ijl}\in P_{i}^{\texttt{ITD}})=(1-P(y_{ijl}=\texttt{O}))f^{ent}_{\psi^{\prime}_{i2}}(\mathbf{h}_{ijl})_{e}.$
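The two-level decoding can be sketched as follows (a hedged numpy illustration; the linear heads `W_bin` and `W_ent` are hypothetical stand-ins for $f^{bin}_{\psi_{1}}$ and $f^{ent}_{\psi_{2}}$):

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum()

def hierarchical_probs(h, W_bin, W_ent):
    """Two-level decoding: a binary head scores the token as OTD (class O),
    and an entity head distributes the remaining mass over the N ITD types."""
    p_otd = 1.0 / (1.0 + np.exp(-float(h @ W_bin)))   # P(y = O)
    p_ent = (1.0 - p_otd) * softmax(h @ W_ent)        # P(y = e | ITD) * P(ITD)
    return p_otd, p_ent
```

By construction, the OTD probability plus the entity probabilities sum to one, so the binary head directly down-weights entity predictions for likely-background tokens.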
| Dataset | Source ($\mathcal{C}_{base}$) | $|\mathcal{C}_{base}|$ | Train tasks | Source ($\mathcal{C}_{novel}$) | $|\mathcal{C}_{novel}|$ | Test tasks | $N$ range |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FewVEX(S) | CORD | 18 | 3000 | CORD | 5 | 128 | [1, 5] |
| FewVEX(M) | CORD+FUNSD | 20 | 3000 | CORD+FUNSD | 6 | 256 | [1, 6] |
Table 2: Statistics of two variants of FewVEX. From each dataset, we can test different $N$ -way $K$ -shot settings.
4 FewVEX Dataset
There is no existing benchmark specifically designed for task-personalized Entity-level $N$ -way Soft- $K$ -shot VDER. To facilitate future research on this problem, we create a new dataset, FewVEX.
Source Collection (due to the page limit, details are in Appendix B.1):
FewVEX is built from two source datasets: FUNSD Jaume et al. (2019) contains images of forms annotated by the bounding boxes of 3 types of entities; CORD Park et al. (2019) contains scanned receipts annotated by 6 superclasses which are divided into 30 fine-grained subclasses. From them, we collect 1199 document images ( $\mathcal{D}$ ) annotated by 26 entity types ( $\mathcal{C}$ ).
Meta-learning Tasks:
We use $\mathcal{D}$ and $\mathcal{C}$ to construct FewVEX, represented by $\mathcal{D}_{meta}=\{\mathcal{D}_{meta}^{trn},\mathcal{D}_{meta}^{tst}\}$ such that the testing tasks $\mathcal{D}_{meta}^{tst}$ focus on novel classes that are unseen in $\mathcal{D}_{meta}^{trn}$ during meta-training. To create this, we split $\mathcal{C}$ into two separate sets $\mathcal{C}=\mathcal{C}_{base}\cup\mathcal{C}_{novel},\mathcal{C}_{base}\cap%
\mathcal{C}_{novel}=\emptyset$ , where $\mathcal{C}_{base}$ is used for meta-training and $\mathcal{C}_{novel}$ for meta-testing.
Single Task Generation:
Following the definition in Eq.(1), each individual entity-level $N$ -way soft- $K$ -shot VDER task $\mathcal{T}=\{S,Q,\mathcal{E}\}$ in either $\mathcal{D}_{meta}^{trn}$ or $\mathcal{D}_{meta}^{tst}$ can be generated through the following steps. (1) Task-personalized class sampling. The task's target classes $\mathcal{E}$ are generated by randomly sampling $N$ entity types from either $\mathcal{C}_{base}$ (for a training task) or $\mathcal{C}_{novel}$ (for a testing task). (2) Document sampling. Given the $N$ target classes, we then collect document images that satisfy the $N$ -way, soft $K$ -shot entity occurrences (as in Appendix 1). (3) Annotation conversion. A task only focuses on its specific $N$ rarely-present entity types; entities in the original annotated documents whose classes do not belong to $\mathcal{E}$ are replaced with the background O class (due to the page limit, details can be found in Appendix B.2.2).
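The three steps can be sketched as follows (a simplified illustration: documents are reduced to flat label sequences and the soft- $K$ check is applied per document, which abstracts away the details in the appendix):

```python
import random

def generate_task(class_pool, documents, N, k_min, k_max, seed=0):
    """Sketch of task generation: (1) sample N target entity types,
    (2) keep documents whose per-class entity counts fall in the soft-K
    range, (3) relabel all non-target entity annotations to background 'O'."""
    rng = random.Random(seed)
    targets = set(rng.sample(sorted(class_pool), N))          # (1) class sampling
    task_docs = []
    for doc in documents:                                     # (2) document sampling
        counts = {e: sum(1 for lab in doc if lab == e) for e in targets}
        if all(k_min <= c <= k_max for c in counts.values()):
            # (3) annotation conversion: non-target entities become 'O'
            task_docs.append([lab if lab in targets else "O" for lab in doc])
    return targets, task_docs
```

Here a "document" is just its token-label sequence; in FewVEX each label would additionally carry the token, bounding box, and image crop.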
Proposed Datasets:
We construct two variants of FewVEX. FewVEX(S) focuses on single-domain receipt understanding, where $\mathcal{C}_{base}$ and $\mathcal{C}_{novel}$ are split from the 23 entity types in CORD. FewVEX(M) focuses on a combination of the receipt and form domains, where $\mathcal{C}_{base}$ contains 18 classes from CORD and 2 from FUNSD; $\mathcal{C}_{novel}$ contains the other 5 classes in CORD and 1 in FUNSD. The statistics of FewVEX are summarized in Table 2.
Future Extension:
While CORD and FUNSD currently serve as the source datasets for FewVEX, we anticipate that future enhancements such as expanding the number of entity types ( $|\mathcal{C}|$ ) and diversifying documents ( $|\mathcal{D}|$ ) will lead to a better version of FewVEX. We introduce Cross-document Rejection (XDR) sampling to facilitate this improvement. XDR samples the train/test documents of each task in a way that cooperatively ensures a specific range of entity occurrences per class, mimicking real-world user annotation behaviors motivated by class-balance requirements. In the future, with access to open-source VDER datasets containing a wide array of classes and documents, XDR will enable the automated generation of numerous distinct task simulations. The pseudocode of XDR is shown in Algorithm 1 in the appendix.
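As a rough illustration of the idea (a simplified sketch, not Algorithm 1 itself), rejection sampling over candidate documents might look like:

```python
import random

def xdr_sample(documents, targets, k_min, k_max, max_docs, seed=0):
    """Simplified cross-document rejection sampling: add candidate documents
    one at a time, rejecting any that would push some target class's total
    entity count above k_max; stop once every class has at least k_min."""
    rng = random.Random(seed)
    pool = list(documents)
    rng.shuffle(pool)
    totals = {e: 0 for e in targets}
    chosen = []
    for doc in pool:
        counts = {e: sum(1 for lab in doc if lab == e) for e in targets}
        if any(totals[e] + counts[e] > k_max for e in targets):
            continue  # reject: this document would overshoot a class budget
        chosen.append(doc)
        for e in targets:
            totals[e] += counts[e]
        if all(totals[e] >= k_min for e in targets) or len(chosen) == max_docs:
            break
    ok = all(k_min <= totals[e] <= k_max for e in targets)
    return chosen, ok
```

The rejection condition is what makes the sampling "cooperative": each accepted document constrains which later documents remain admissible for the task.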
| Methods | 4-way 1-shot | | | | 4-way 4-shot | | | | 5-way 2-shot | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | P | R | F1 | TS (AUROC) | P | R | F1 | TS (AUROC) | P | R | F1 | TS (AUROC) |
| ProtoNet | 0.02 | 0.10 | 0.03 | N/A | 0.02 | 0.09 | 0.03 | N/A | 0.02 | 0.09 | 0.03 | N/A |
| ProtoNet+EOD | 0.13 | 0.47 | 0.21 | N/A | 0.11 | 0.58 | 0.23 | N/A | 0.11 | 0.35 | 0.17 | N/A |
| ContrastProtoNet | 0.54 | 0.43 | 0.47 | 0.59 | 0.61 | 0.59 | 0.60 | 0.89 | 0.49 | 0.41 | 0.44 | 0.62 |
| Reptile | 0.48 | 0.10 | 0.15 | 0.58 | 0.62 | 0.44 | 0.51 | 0.67 | 0.39 | 0.09 | 0.14 | 0.59 |
| ANIL | 0.39 | 0.19 | 0.25 | 0.56 | 0.54 | 0.44 | 0.50 | 0.87 | 0.35 | 0.13 | 0.19 | 0.61 |
| Reptile+HC | 0.35 | 0.13 | 0.20 | 0.63 | 0.63 | 0.65 | 0.64 | 0.98 | 0.34 | 0.12 | 0.18 | 0.65 |
| ANIL+HC | 0.40 | 0.58 | 0.50 | 0.95 | 0.47 | 0.59 | 0.51 | 0.98 | 0.38 | 0.56 | 0.46 | 0.92 |
Table 3: Performance on 4-way 1-shot, 4-way 4-shot, and 5-way 2-shot settings of FewVEX(S).
5 Experiments
Setups:
We compare the proposed framework with the aforementioned meta-learning baselines on FewVEX. Data generation and methods are implemented using JAX and TensorFlow. All experiments ran on 32 TPU devices. We use the Adam optimizer to update the meta-parameters. For gradient-based methods, we use vanilla SGD for the inner-loop optimization and fix 15 SGD updates with a constant learning rate of $0.015$ . Setup details and hyperparameters are available in Appendix C.4.
Evaluation Metrics:
We consider two types of quantitative metrics. (1) Overall Performance: following Xu et al. (2020), we use the precision (P), recall (R), and micro F1-score over meta-testing tasks to measure the accuracy of entity retrieval. (2) Task Specificity (TS): to evaluate how well the trained meta-learners can distinguish the in-task distribution (ITD) from the out-of-task distribution (OTD) for any given novel task, we plot ROC curves and calculate AUROC Xiao et al. (2020) using the ITD scores over meta-testing tasks. A random-guessing detector yields an AUROC of 0.5; a higher AUROC indicates better TS performance.
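For reference, AUROC can be computed from per-token scores without external dependencies via the rank-sum (Mann-Whitney U) formulation (a minimal sketch assuming binary labels and no tied scores):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC as the probability that a randomly chosen positive token
    outranks a randomly chosen negative one (rank-sum formulation)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ranks start at 1
    n_pos, n_neg = int(labels.sum()), int((1 - labels).sum())
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

With continuous detector scores, ties are rare enough for this untied version; a production metric (e.g. sklearn's `roc_auc_score`) additionally averages tied ranks.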
5.1 Main Results
Table 3 reports the results on FewVEX(S). Under the same $N$ and $K$ setups, traditional meta-learning methods fail to balance precision and recall: ANIL and Reptile with vanilla decoders achieved high precision but low recall, while the vanilla Prototypical Networks showed the opposite pattern: low precision but high recall. In contrast, ANIL+HC, Reptile+HC, and ContrastProtoNet achieved a better precision-recall balance and thus higher F1 scores and TS, showing that detecting and alleviating the influence of the out-of-task distribution can improve task personalization and accuracy. This phenomenon is also illustrated in Figure 3 and Figure 5 in Appendix D.2, where we plot ROC curves and tSNE visualizations of token embeddings after task adaptation. Comparing our methods against the baselines, we observe an elevation in the curves and more distinct boundaries between OTD and ITD and between ITD classes.
The reasons are as follows. First, ANIL and Reptile treat the dominant OTD instances as an extra class, so the problem becomes imbalanced classification in meta-learning, one of the challenges in few-shot VDER tasks. By using an OTD detector, ANIL+HC and Reptile+HC can adapt faster to the task-specific boundary between OTD and ITD, which potentially increases the recall, the task specificity score, and the overall F1 score. Second, for the vanilla metric-based methods, where OTD instances are treated as one extra class, the ITD testing instances tend to be close to ITD class centers, yielding high recall. However, OTD instances dominate the task, and some OTD testing instances may be closer to ITD centers than to the OTD class center (the average center of multiple OTD modes), so that most of them are misclassified as one of the ITD classes, i.e., low precision. In contrast, ContrastProtoNet makes no assumption about the OTD distribution; instead, we enforce OTD tokens to be far away from ITD classes and classify via token-level similarities while accounting for probabilistic uncertainty.
Figure 3: tSNE visualization of the learned embedding space for a randomly-selected meta-testing task, comparing (a) vanilla ProtoNet and (b) ContrastProtoNet methods, under the 4-way 4-shot setting of FewVEX(S).
5.2 Class Structure Disentanglement
We examine the explainability and disentanglement of the learned representations (generated by the meta-parameters of the encoder). Figure 3 shows tSNE visualizations of the learned embedding space of a selected task. Overall, comparing Figure 3 to Table 3, higher performance appears to be consistent with more disentangled clusters. Moreover, from the first column containing ITD (red) tokens and OTD (blue) tokens, we observe that the blue points dominate the embedding space and comprise multiple clusters, which demonstrates that the out-of-task distribution is multimodal, making it hard to identify in-task entities. Further, we try to understand the disentangled structure of classes from the clusters. In the right column of Figure 3, we zoom into the four ITD classes, where purple, red, blue, and green points denote the task-specific four entity types, respectively. We observe that “menu (sub_unitprice)” (violet) is far away from the other three classes, while those three are slightly entangled. Such class structure reflects the relationships between these entity types and is explainable: the red and blue classes belong to the same superclass sub_total; the green and red are both etc information.
5.3 Multi-domain Few-shot VDER
Table 4 reports the 4-way 2-shot results on the mixed-domain FewVEX(M), which combines receipts with forms for few-shot learning. The results slightly underperform those under the single-domain setting. A reason could be that the structure of forms differs from that of receipts, making it challenging to find meta-parameters that suit both domains. Moreover, the number of classes in the form domain is much smaller than that in the receipt domain; such an imbalanced class combination pushes the meta-parameters to adapt to the more prominent domain.
| Methods | P | R | F1 | AUROC |
| --- | --- | --- | --- | --- |
| ProtoNet | 0.02 | 0.10 | 0.03 | N/A |
| ProtoNet+EOD | 0.18 | 0.46 | 0.26 | N/A |
| ContrastProtoNet | 0.54 | 0.46 | 0.50 | 0.85 |
| Reptile | 0.45 | 0.17 | 0.25 | 0.57 |
| ANIL | 0.39 | 0.19 | 0.26 | 0.56 |
| Reptile+HC | 0.42 | 0.23 | 0.30 | 0.88 |
| ANIL+HC | 0.44 | 0.56 | 0.49 | 0.97 |
Table 4: Performance on 4-way 2-shot FewVEX(M).
6 Related Works
Research related to Visually-rich Documents (VD) has emerged as a significant topic in NLP. Here, we briefly review prior research on (1) models for general VD understanding; (2) the particular Entity Retrieval (ER) task for VD and existing Few-shot VDER methods; and (3) methodology-level related works in general few-shot learning (an extended version of Related Works is in Appendix A).
General VD Understanding.
Pretrained LLMs for VD understanding have shown strong performance on visually-rich multimodal documents and can therefore serve as pretrained priors for Few-shot VDER. There are many LLM candidates our framework can use as the pretrained encoder, such as LayoutLM Xu et al. (2020), which extends the standard BERT Kenton and Toutanova (2019), and the recent LayoutLMv3 Huang et al. (2022) and DocGraphLM Wang et al. (2023a), which show improvements by using advanced cross-modal alignment or local-global position embeddings. In this paper, we use the basic BERT model for experiments, since our focus is improving fine-tuning on few-shot downstream tasks without restricting the specific LLM type. Extending this research to other pretrained document-understanding LLMs is left as future work.
Few-shot VD Entity Retrieval.
The particular Entity Retrieval (ER) tasks for VD have been studied for many years using Deep Neural Networks, Graph Neural Networks, or traditional models Zhang et al. (2020); Shi et al. (2023), or empowered by the contextual prior knowledge provided by VD-understanding LLMs Xu et al. (2021); Lee et al. (2022); Hong et al. (2022). VDER in few-shot scenarios poses unique challenges, such as achieving task personalization with limited annotation, yet has garnered comparatively little attention in prior research. Recent advancements in Few-shot VDER predominantly rely on pretrained LLMs and prompt design, followed by fine-tuning on a small number of VD documents Wang et al. (2021b); Wang and Shang (2022). Despite their success, this paper explores a complementary research perspective. While previous works address the situation where the entity label space is fixed across tasks and entity occurrences do not shift significantly, we tackle a different application situation: every few-shot task is user-specific, focusing on a small subspace of entity types of interest (entity-level task personalization), and entity occurrences vary significantly between tasks and documents.
General Few-shot Learning.
Few-shot Learning (FSL) has been studied in various AI/ML domains Song et al. (2023). In the CV and NLP domains, two FSL tasks are closely related to Few-shot VDER: (1) Few-shot object detection or segmentation Köhler et al. (2023); Antonelli et al. (2022) aims at localizing objects in visual data, where each object can be treated as an entity in VDER; and (2) Few-shot Named Entity Recognition (NER) aims at labelling tokens within a contextual text sequence Li et al. (2022); Huang et al. (2021). While few-shot NER and object detection algorithms can provide inspiration for few-shot VDER, the challenges we face and the methodological details are quite different. Beyond them, Multimodal Few-shot Learning (MFSL) utilizes complementary information from multiple modalities to improve unimodal FSL Chen and Zhang (2021); Lin et al. (2023). The scope of this paper falls within the field of MFSL. While existing FSL/MFSL approaches can be categorized into meta-learning approaches Snell et al. (2017); Finn et al. (2017) and non-meta LLM pretraining-and-fine-tuning approaches Brown et al. (2020), we combine the benefits of both LLM prior knowledge and meta-learning for task-personalized fine-tuning. Furthermore, to enhance the task specificity of few-shot VDER, we employ Few-shot Out-of-distribution (OOD) Detection, which is itself a recently emerged task Le et al. (2021).
7 Conclusions
In this paper, we studied the multimodal few-shot learning problem for VDER. We started by formulating the FVDER problem as entity-level, $N$ -way soft- $K$ -shot learning under the framework of meta-learning, and introduced a new dataset, FewVEX, designed to reflect practical problems. To solve the new task, we exploited both metric-based and gradient-based meta-learning paradigms, along with a new technique we proposed to enhance task personalization via out-of-task-distribution awareness. The experiments showed that the proposed methods achieve major improvements over the baselines for FVDER.
For future work, our approaches might be improved in the following directions: (1) a more robust algorithm that distinguishes between OTD and ITD; (2) an advanced decoding process considering graphical structures or implicit correlations between entity instances within each task; (3) exploring the causal role of pretrained models.
Acknowledgements
We would like to express sincere appreciation to all those who contributed to this research. Special thanks to all the reviewers for their constructive feedback and comments, which greatly improved the quality of this paper.
Limitations
There exist a few limitations to this work. Firstly, the derived dataset is based on current open-source datasets for document understanding, which are small in size and have a very limited number of classes. A dedicated dataset built specifically for studying few-shot learning for document entity retrieval is needed. Secondly, the scope of our current studies is limited to non-overlapping entities. The performance of the models on nested entities and entities with overlapping ground truth is yet to be examined.
Ethics Statements
The dataset created in this paper was derived from public datasets (i.e., FUNSD, CORD) which are publicly available for academic research. No data collection was made during the process of this work. The FUNSD and CORD datasets are collections of receipts and forms collected and released by third-party papers, which have been widely used in visually-rich document entity retrieval research and, to the best of our knowledge, are not expected to contain any ethics issues.
References
- Antonelli et al. (2022) Simone Antonelli, Danilo Avola, Luigi Cinque, Donato Crisostomi, Gian Luca Foresti, Fabio Galasso, Marco Raoul Marini, Alessio Mecca, and Daniele Pannone. 2022. Few-shot object detection: A survey. ACM Computing Surveys (CSUR), 54(11s):1–37.
- Appalaraju et al. (2021) Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In Proceedings of the IEEE/CVF international conference on computer vision, pages 993–1003.
- Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
- Carbonell et al. (2021) Manuel Carbonell, Pau Riba, Mauricio Villegas, Alicia Fornés, and Josep Lladós. 2021. Named entity recognition and relation extraction with graph neural networks in semi structured documents. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 9622–9627. IEEE.
- Chaudhuri et al. (2017) Arindam Chaudhuri, Krupa Mandaviya, Pratixa Badelia, and Soumya K Ghosh. 2017. Optical character recognition systems. In Optical Character Recognition Systems for Different Languages with Soft Computing, pages 9–41. Springer.
- Chen and Zhang (2021) Jiayi Chen and Aidong Zhang. 2021. Hetmaml: Task-heterogeneous model-agnostic meta-learning for few-shot learning across modalities. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM ’21, page 191–200, New York, NY, USA. Association for Computing Machinery.
- Chen and Zhang (2022a) Jiayi Chen and Aidong Zhang. 2022a. Fedmsplit: Correlation-adaptive federated multi-task learning across multimodal split networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 87–96.
- Chen and Zhang (2022b) Jiayi Chen and Aidong Zhang. 2022b. Topological transduction for hybrid few-shot learning. In Proceedings of the ACM Web Conference 2022, WWW ’22, page 3134–3142, New York, NY, USA. Association for Computing Machinery.
- Chen et al. (2021) Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, and Xiaolong Wang. 2021. Meta-baseline: Exploring simple meta-learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9062–9071.
- Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pages 1126–1135. PMLR.
- Gal et al. (2020) Rinon Gal, Shai Ardazi, and Roy Shilkrot. 2020. Cardinal graph convolution framework for document information extraction. In Proceedings of the ACM Symposium on Document Engineering 2020, DocEng ’20, New York, NY, USA. Association for Computing Machinery.
- Garncarek et al. (2021) Łukasz Garncarek, Rafał Powalski, Tomasz Stanisławek, Bartosz Topolski, Piotr Halama, Michał Turski, and Filip Graliński. 2021. Lambert: layout-aware language modeling for information extraction. In International Conference on Document Analysis and Recognition, pages 532–547. Springer.
- Gu et al. (2021) Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. 2021. Unidoc: Unified pretraining framework for document understanding. In Advances in Neural Information Processing Systems, volume 34, pages 39–50. Curran Associates, Inc.
- Harley et al. (2015) Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pages 991–995.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778.
- Hong et al. (2022) Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10767–10775.
- Huang et al. (2021) Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2021. Few-shot named entity recognition: An empirical baseline study. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10408–10423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Huang et al. (2022) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4083–4091.
- Huang et al. (2019) Zheng Huang, Kai Chen, Jianhua He, Xiang Bai, Dimosthenis Karatzas, Shijian Lu, and C. V. Jawahar. 2019. Icdar2019 competition on scanned receipt ocr and information extraction. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1516–1520.
- Jaume et al. (2019) Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), volume 2, pages 1–6.
- Jeong and Kim (2020) Taewon Jeong and Heeyoung Kim. 2020. Ood-maml: Meta-learning for few-shot out-of-distribution detection and classification. Advances in Neural Information Processing Systems, 33:3907–3916.
- Kenton and Toutanova (2019) Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186.
- Khosla et al. (2020) Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural Information Processing Systems, volume 33, pages 18661–18673. Curran Associates, Inc.
- Koch et al. (2015) Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2. Lille.
- Köhler et al. (2023) Mona Köhler, Markus Eisenbach, and Horst-Michael Gross. 2023. Few-shot object detection: A comprehensive survey. IEEE Transactions on Neural Networks and Learning Systems, pages 1–21.
- Le et al. (2021) Duong Le, Khoi Duc Nguyen, Khoi Nguyen, Quoc-Huy Tran, Rang Nguyen, and Binh-Son Hua. 2021. Poodle: Improving few-shot learning via penalizing out-of-distribution samples. In Advances in Neural Information Processing Systems, volume 34, pages 23942–23955. Curran Associates, Inc.
- Lee et al. (2022) Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, and Tomas Pfister. 2022. FormNet: Structural encoding beyond sequential modeling in form document information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3735–3754, Dublin, Ireland. Association for Computational Linguistics.
- Li et al. (2022) Jing Li, Billy Chiu, Shanshan Feng, and Hao Wang. 2022. Few-shot named entity recognition via meta-learning. IEEE Transactions on Knowledge and Data Engineering, 34(9):4245–4256.
- Li et al. (2021) Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021. Selfdoc: Self-supervised document representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652–5660.
- Lin et al. (2023) Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, and Deva Ramanan. 2023. Multimodality helps unimodality: Cross-modal few-shot learning with multimodal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19325–19337.
- Liu et al. (2019) Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph convolution for multimodal information extraction from visually rich documents. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 32–39.
- Ma et al. (2022) Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022. Decomposed meta-learning for few-shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1584–1596, Dublin, Ireland. Association for Computational Linguistics.
- Ming et al. (2022) Yifei Ming, Hang Yin, and Yixuan Li. 2022. On the impact of spurious correlation for out-of-distribution detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10051–10059.
- Nakayama (2018) Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.
- Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.
- Oreshkin et al. (2018) Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. 2018. Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
- Pahde et al. (2021) Frederik Pahde, Mihai Puscas, Tassilo Klein, and Moin Nabi. 2021. Multimodal prototypical networks for few-shot learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2644–2653.
- Park and Kim (2020) Inho Park and Sungho Kim. 2020. Performance indicator survey for object detection. In 2020 20th International Conference on Control, Automation and Systems (ICCAS), pages 284–288.
- Park et al. (2019) Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: a consolidated receipt dataset for post-ocr parsing. In Workshop on Document Intelligence at NeurIPS 2019.
- Pillutla et al. (2022) Krishna Pillutla, Sham M. Kakade, and Zaid Harchaoui. 2022. Robust aggregation for federated learning. IEEE Transactions on Signal Processing, 70:1142–1154.
- Powalski et al. (2021) Rafał Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, and Gabriela Pałka. 2021. Going full-tilt boogie on document understanding with text-image-layout transformer. In Document Analysis and Recognition–ICDAR 2021: 16th International Conference, Lausanne, Switzerland, September 5–10, 2021, Proceedings, Part II 16, pages 732–747. Springer.
- Raghu et al. (2019) Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. 2019. Rapid learning or feature reuse? towards understanding the effectiveness of maml. In International Conference on Learning Representations.
- Ramesh et al. (2021) Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8821–8831. PMLR.
- Ren et al. (2021) Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. 2021. A simple fix to mahalanobis distance for improving near-ood detection. arXiv preprint arXiv:2106.09022.
- Rusu et al. (2019) Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. 2019. Meta-learning with latent embedding optimization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
- Shen et al. (2021) Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, and Weining Li. 2021. Layoutparser: A unified toolkit for deep learning based document image analysis. In Document Analysis and Recognition–ICDAR 2021: 16th International Conference, Lausanne, Switzerland, September 5–10, 2021, Proceedings, Part I 16, pages 131–146. Springer.
- Shi et al. (2023) Dengliang Shi, Siliang Liu, Jintao Du, and Huijia Zhu. 2023. Layoutgcn: A lightweight architecture for visually rich document understanding. In Document Analysis and Recognition - ICDAR 2023, pages 149–165, Cham. Springer Nature Switzerland.
- Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
- Song et al. (2023) Yisheng Song, Ting Wang, Puyu Cai, Subrota K. Mondal, and Jyoti Prakash Sahoo. 2023. A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities. ACM Comput. Surv., 55(13s).
- Sun et al. (2021) Bo Sun, Banghuai Li, Shengcai Cai, Ye Yuan, and Chi Zhang. 2021. Fsce: Few-shot object detection via contrastive proposal encoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7352–7362.
- Tian et al. (2022) Yuanyishu Tian, Yao Wan, Lingjuan Lyu, Dezhong Yao, Hai Jin, and Lichao Sun. 2022. Fedbert: When federated learning meets pre-training. ACM Transactions on Intelligent Systems and Technology (TIST), 13(4):1–26.
- Tu et al. (2023) Yi Tu, Ya Guo, Huan Chen, and Jinyang Tang. 2023. Layoutmask: Enhance text-layout interaction in multi-modal pre-training for document understanding. arXiv preprint arXiv:2305.18721.
- van der Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9:2579–2605.
- Varoquaux et al. (2015) Gaël Varoquaux, Lars Buitinck, Gilles Louppe, Olivier Grisel, Fabian Pedregosa, and Andreas Mueller. 2015. Scikit-learn: Machine learning without learning the machinery. GetMobile Mob. Comput. Commun., 19:29–33.
- Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In Neural Information Processing Systems.
- Wang et al. (2023a) Dongsheng Wang, Zhiqiang Ma, Armineh Nourbakhsh, Kang Gu, and Sameena Shah. 2023a. Docgraphlm: Documental graph language model for information extraction. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval.
- Wang et al. (2022a) Jiapeng Wang, Lianwen Jin, and Kai Ding. 2022a. Lilt: A simple yet effective language-independent layout transformer for structured document understanding. In Annual Meeting of the Association for Computational Linguistics, pages 7747–7757.
- Wang et al. (2020) Xin Wang, Thomas Huang, Joseph Gonzalez, Trevor Darrell, and Fisher Yu. 2020. Frustratingly simple few-shot object detection. In International Conference on Machine Learning, pages 9919–9928. PMLR.
- Wang et al. (2021a) Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, and Ahmed Hassan Awadallah. 2021a. Meta self-training for few-shot neural sequence labeling. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1737–1747.
- Wang et al. (2022b) Ze Wang, Yipin Zhou, Rui Wang, Tsung-Yu Lin, Ashish Shah, and Ser Nam Lim. 2022b. Few-shot fast-adaptive anomaly detection. Advances in Neural Information Processing Systems, 35:4957–4970.
- Wang et al. (2023b) Zifeng Wang, Zizhao Zhang, Jacob Devlin, Chen-Yu Lee, Guolong Su, Hao Zhang, Jennifer Dy, Vincent Perot, and Tomas Pfister. 2023b. QueryForm: A simple zero-shot form entity query framework. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4146–4159, Toronto, Canada. Association for Computational Linguistics.
- Wang and Shang (2022) Zilong Wang and Jingbo Shang. 2022. Towards few-shot entity recognition in document images: A label-aware sequence-to-sequence framework. In Findings of the Association for Computational Linguistics: ACL 2022, pages 4174–4186, Dublin, Ireland. Association for Computational Linguistics.
- Wang et al. (2021b) Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, and Furu Wei. 2021b. Layoutreader: Pre-training of text and layout for reading order detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4735–4744.
- Xiao et al. (2020) Zhisheng Xiao, Qing Yan, and Yali Amit. 2020. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. Advances in neural information processing systems, 33:20685–20696.
- Xu et al. (2021) Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2579–2591, Online. Association for Computational Linguistics.
- Xu et al. (2020) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1192–1200.
- Yoon et al. (2018) Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. 2018. Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pages 7332–7342.
- Yu et al. (2023) Lijun Yu, Jin Miao, Xiaoyu Sun, Jiayi Chen, Alexander G. Hauptmann, Hanjun Dai, and Wei Wei. 2023. Documentnet: Bridging the data gap in document pre-training. arXiv preprint arXiv:2306.08937.
- Zhang et al. (2020) Peng Zhang, Yunlu Xu, Zhanzhan Cheng, Shiliang Pu, Jing Lu, Liang Qiao, Yi Niu, and Fei Wu. 2020. Trie: end-to-end text reading and information extraction for document understanding. In Proceedings of the 28th ACM International Conference on Multimedia, pages 1413–1422.
- Zinkevich et al. (2010) Martin Zinkevich, Markus Weimer, Lihong Li, and Alex Smola. 2010. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc.
Appendix
Appendix A Related Works
We review related works corresponding to the Few-shot VDER tasks in Section A.1 and then review methodology-level related works in Section A.2.
A.1 Visually-rich Document Related Works
Visually-rich Documents (VD) are a vital category of multimodal data in the field of Document AI, typically consisting of texts, images, and the layout structure of contents. Research and industrial applications pertaining to VD have emerged as significant topics in NLP over the past decade. Here we review the prior research works for VD-related tasks, including (1) LLMs for general VD understanding; (2) the particular Entity Retrieval (ER) task in VD and its prior work; and (3) existing works considering VDER in few-shot scenarios.
Visually-rich Document Understanding LLMs.
Large Language Models (LLMs) have shown strong performance in the general understanding of visually-rich multimodal documents and can therefore serve as pretrained priors for Few-shot VDER. For this reason, we review several candidate LLMs. LLMs for text-image-layout document understanding have emerged since LayoutLM Xu et al. (2020), which extends the standard BERT Kenton and Toutanova (2019) with additional layout information obtained from OCR Chaudhuri et al. (2017) preprocessing. Subsequently, SelfDoc Li et al. (2021), UDoc Gu et al. (2021), LayoutLMv2 Xu et al. (2021), TILT Powalski et al. (2021), DocFormer Appalaraju et al. (2021), LiLT Wang et al. (2022a), and LayoutLMv3 Huang et al. (2022) show improvements by using cross-modal alignment or modern feature encoders for the image modality (e.g., ResNets He et al. (2016) and dVAE Ramesh et al. (2021)). Very recent works, UniFormer Yu et al. (2023), LayoutMask Tu et al. (2023), and DocGraphLM Wang et al. (2023a), employ token-level strategies such as local text-image alignment, local position embeddings, and graph representations to further improve the modeling. In this paper, we use the basic BERT model for experiments, since our focus is on improving the subsequent fine-tuning on few-shot downstream tasks without restricting the choice of LLM. Extending this research to other pretrained document understanding LLMs is left as future work.
Entity Retrieval in Visually-rich Documents.
Visually-rich Document Entity Retrieval (VDER) aims at detecting bounding boxes for specific types of key information within scanned or digitally-born documents and has garnered significant attention from researchers. While there are technical differences between data-sufficient VDER and Few-shot VDER, the former offers foundational solutions and serves as a valuable baseline framework, so reviewing existing data-sufficient VDER techniques is worthwhile. In the early years, deep neural networks (e.g., RNNs, CNNs) were widely employed for VDER tasks Huang et al. (2019); Zhang et al. (2020). Later, Graph Neural Networks (GNNs) Liu et al. (2019); Gal et al. (2020); Carbonell et al. (2021); Shi et al. (2023) gained substantial attention for their effectiveness in modeling structural layout information. Recent works empower LLMs for general VD understanding to incorporate additional contextual prior knowledge and fine-tune them on VDER tasks Xu et al. (2021); Garncarek et al. (2021); Lee et al. (2022); Hong et al. (2022). Beyond this line of research, this paper focuses on Few-shot VDER with limited annotation, which poses unique challenges in achieving task personalization with scarce data, addressing shot imbalance, and handling task complexity due to out-of-task-distribution entities.
Few-shot VDER.
There has been little discussion of VDER in few-shot scenarios. Recent works primarily focus on pretraining LLMs and designing prompts so that the models can be fine-tuned on a small number of documents Wang et al. (2021b); Wang and Shang (2022); Xu et al. (2021); Huang et al. (2022). Despite their success, our paper explores a complementary research perspective: (1) while previous works emphasize first-stage LLM pretraining, our work focuses on the second-stage few-shot adaptation algorithm; (2) we tackle a different application situation. Previous works address a specific few-shot situation where the entity label space is fixed and entity occurrences do not vary much from one document to another. In contrast, our research tackles a situation where the entity label spaces and entity occurrences vary significantly between tasks and documents, enabling entity-level task personalization (i.e., each personalized few-shot task is only interested in a small subset of entity types). Both situations occur in the real world; this paper addresses the second, unexplored one.
A.2 General Few-shot Learning
Next, we review methodology-level related works from other domains that are closely related to, but beyond, VDER. First, we briefly review general Multimodal Few-shot Learning algorithms. Then, we review literature in CV and NLP that addresses non-VDER but closely related tasks, including: (1) vision-only few-shot object detection and segmentation in the CV domain; (2) text-only few-shot named entity recognition in the NLP domain; and (3) general few-shot out-of-distribution detection.
Multimodal Few-shot Learning.
Few-shot Learning (FSL) has been studied in various AI/ML domains, such as CV, NLP, and healthcare Song et al. (2023). Multimodal Few-shot Learning (MFSL) jointly utilizes complementary information from multiple modalities to improve a uni-modal task Pahde et al. (2021); Chen and Zhang (2021); Lin et al. (2023). The scope of this paper falls within MFSL, with a specific emphasis on multimodal documents. Existing MFSL work falls into two categories: non-meta-learning methods and meta-learning approaches. The former typically involves a two-stage training process: LLM pretraining followed by fine-tuning or prompt learning Wang et al. (2023b). Meta-learning approaches, on the other hand, formulate a task-level distribution and then either learn task-adaptive metric functions Snell et al. (2017); Oreshkin et al. (2018); Koch et al. (2015); Vinyals et al. (2016) or employ bilevel optimization to learn meta-parameters for fast task-adaptive fine-tuning Finn et al. (2017); Yoon et al. (2018); Rusu et al. (2019); Chen and Zhang (2022b). The proposed framework benefits from both LLMs and the meta-learning paradigm by being built upon LLMs and then using meta-learning for task-adaptive fine-tuning.
Few-shot Object Detection and Segmentation.
Few-shot object detection and few-shot segmentation are CV tasks that aim at recognizing and localizing novel objects or semantics in an image with only a few training examples Wang et al. (2020); Köhler et al. (2023); Antonelli et al. (2022). The output of entity retrieval from document images consists of bounding boxes for entities, allowing it to be formulated as an object detection or segmentation problem in the CV domain, where each object is treated as an entity Shen et al. (2021). However, while few-shot object detection and segmentation algorithms Sun et al. (2021) can provide inspiration for Few-shot VDER, gaps remain between the two fields: Few-shot VDER datasets in the form of object detection or segmentation tasks are lacking, and the scale of entity objects is often much smaller than that of the out-of-distribution background objects.
Few-shot Sequence Labeling.
This paper adopts the few-shot sequence labeling paradigm introduced by Wang et al. (2021a). Many other NLP tasks have also embraced this paradigm, including Few-shot Named Entity Recognition (NER) Li et al. (2022). Few-shot NER with limited entity occurrences was initially introduced in Few-NERD Huang et al. (2021), and Ma et al. (2022) later proposed a meta-learning approach to address it. While Few-shot NER tasks are text-only, devoid of visual and layout modalities, and typically involve short texts, Few-shot VDER at the entity level presents a greater challenge: the difficulty lies in effectively integrating layout structure and visual information and in achieving task personalization against the out-of-distribution background.
Few-shot Out-of-Distribution Detection.
Machine learning models deployed in open-world scenarios have been shown to erroneously produce high posterior probabilities for out-of-distribution (OOD) data. This gives rise to OOD detection, which identifies unknown OOD inputs so that the algorithm can take safety precautions Ming et al. (2022). Recently, motivated by real-world applications, OOD detection in few-shot settings has increasingly attracted attention Le et al. (2021); Jeong and Kim (2020); Wang et al. (2022b); it faces new challenges such as the lack of training data required for distinguishing OOD inputs from the task-specific class distribution. In this paper, the proposed framework for Few-shot VDER employs few-shot OOD detection to improve performance: to prevent background context from being predicted as one of the task-personalized entities, we encourage task-aware fine-tuning to exclude statistically informative yet spurious features in the support set.
Appendix B FewVEX Dataset
Since there is no dataset specifically designed for the Few-shot VDER task defined in Section 2, we construct a new dataset, FewVEX, to benchmark and evaluate Few-shot VDER tasks.
B.1 Collection of Entity Types and Documents
First, we collect the entity types $\mathcal{C}$ associated with the task distribution $P(\mathcal{T})$ and a set of document images $\mathcal{D}$ annotated by these entity types.
We consider two source datasets that are widely used in standard large-scale document understanding tasks such as entity recognition, parsing, and information extraction. The first is the Form Understanding in Noisy Scanned Documents (FUNSD) dataset Jaume et al. (2019), which comprises 199 real, fully annotated, scanned forms with a total of three entity types (i.e., questions, answers, headers). The second is the Consolidated Receipt Dataset for post-OCR parsing (CORD) Park et al. (2019). CORD consists of 1000 receipt images and contains 6 superclasses (menu, void menu, subtotal, void total, total, etc.), which are divided into 30 fine-grained subclasses. For different entity types, the total numbers of entity occurrences over the CORD images are highly imbalanced, ranging from 1 occurrence of ‘‘void menu (nm)’’ to 997 occurrences of ‘‘menu (price)’’.
From the two datasets, we obtain a combined source dataset, denoted $\mathcal{D}$ , which contains $1199$ unique document images with original annotations on 33 classes. However, we observe that some fine-grained classes in CORD occur in fewer than $\max_{i}(M_{s_i}+M_{q_i})$ images, the maximum number of documents within an individual task. This would lead to heavy repetitive usage of the same documents within one task and across different tasks. Therefore, we sort the 33 classes by the number of unique document images in which they occur and discard the three entity types that occur with the lowest frequency.
To sum up, we finally have a total of $|\mathcal{C}|=30$ entity types and $|\mathcal{D}|=1199$ unique document images annotated with these entity types. The pie chart (on the left) in Figure 1 illustrates the number of occurrences of the final entity types.
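The frequency-based filtering described above can be sketched in a few lines of Python. This is a hypothetical helper, not the authors' released code; the representation of a document as a pair whose second element is its set of annotated entity types is our assumption.

```python
from collections import Counter

def filter_entity_types(documents, num_discard=3):
    """Rank entity types by the number of unique documents they occur in,
    then discard the `num_discard` rarest types (hypothetical helper).

    documents: list of (image, labels) pairs, where `labels` is the set of
    entity types annotated in that document.
    """
    doc_freq = Counter()
    for _, labels in documents:
        for entity_type in set(labels):
            doc_freq[entity_type] += 1
    # Keep all but the `num_discard` least-frequent types.
    ranked = sorted(doc_freq, key=doc_freq.get, reverse=True)
    return set(ranked[:len(ranked) - num_discard])
```

For FewVEX, applying this with `num_discard=3` to the 33 original classes yields the final 30 entity types.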
B.2 Collection of Training and Testing Tasks
We simulate a distribution of tasks $P(\mathcal{T})$ in FewVEX. We create a meta-learning dataset $\mathcal{D}_{meta}=\{\mathcal{D}_{meta}^{trn},\mathcal{D}_{meta}^{tst}\}$ , consisting of a meta-training set $\mathcal{D}_{meta}^{trn}=\{\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{\tau_{trn}}\}$ containing $\tau_{trn}$ training tasks and a meta-testing set $\mathcal{D}_{meta}^{tst}=\{\mathcal{T}^{*}_{1},\mathcal{T}^{*}_{2},\ldots,\mathcal{T}^{*}_{\tau_{tst}}\}$ containing $\tau_{tst}$ testing tasks. Each task instance follows the $N$ -way $K$ -shot FVDER task setting, attending to $N$ personalized entity types.
B.2.1 Entity Type Split
To ensure that testing tasks in $\mathcal{D}_{meta}^{tst}$ focus on novel classes unseen during meta-training, we split the full set of entity types into two disjoint sets, $\mathcal{C}=\mathcal{C}_{base}\cup\mathcal{C}_{novel}$ with $\mathcal{C}_{base}\cap\mathcal{C}_{novel}=\emptyset$ , such that $\mathcal{C}_{base}$ is used for meta-training and $\mathcal{C}_{novel}$ for meta-testing.
Specifically, we use a split ratio $\gamma$ to control the number of novel classes: we randomly choose $\gamma|\mathcal{C}|$ entity types from $\mathcal{C}$ as $\mathcal{C}_{novel}$ , and set $\mathcal{C}_{base}=\mathcal{C}\setminus\mathcal{C}_{novel}$ . Note that since some entity types occur in fewer documents than others, we set a threshold $U$ : any entity type that occurs in fewer than $U$ documents is forced to be one of the novel classes.
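The base/novel split above can be sketched as follows. This is a minimal illustration under our own naming; `doc_freq` (documents per entity type) and the function signature are assumptions, not the paper's code.

```python
import random

def split_entity_types(all_types, doc_freq, gamma=0.6, U=20, seed=0):
    """Split entity types into base (meta-training) and novel (meta-testing)
    sets. Types occurring in fewer than U documents are forced into the novel
    set; the remaining gamma*|C| novel slots are filled at random (sketch).
    """
    rng = random.Random(seed)
    all_types = list(all_types)
    num_novel = round(gamma * len(all_types))
    rare = [c for c in all_types if doc_freq[c] < U]      # forced novel
    frequent = [c for c in all_types if doc_freq[c] >= U]
    extra = max(0, num_novel - len(rare))
    novel = set(rare) | set(rng.sample(frequent, extra))
    base = set(all_types) - novel
    return base, novel
```

With FewVEX's settings ($\gamma=0.6$, $U=20$), this reproduces the constraint that low-frequency types never leak into meta-training.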
B.2.2 Single N-way K-shot Task Simulation
Each individual task $\mathcal{T}=\{S,Q,\mathcal{E}\}$ in either $\mathcal{D}_{meta}^{trn}$ or $\mathcal{D}_{meta}^{tst}$ is generated by the following steps (summarized in Algorithm 1).
Personalized Class Sampling.
The set of target classes $\mathcal{E}$ of a task is generated by randomly sampling $N$ entity types from either $\mathcal{C}_{base}$ (for a training task) or $\mathcal{C}_{novel}$ (for a testing task).
Document Sampling.
Given the $N$ target classes, we then collect document images that satisfy the few-shot setting defined in Section 2. However, sampling documents directly from the original corpus is inefficient: for each task, only a small number of documents contain the corresponding classes and can serve as candidate documents. For example, if each document contains only a small number of entity types, the majority of sampled documents would be rejected. To improve sampling efficiency, we count the entities in each document in advance and, for each entity type, temporarily store all candidate documents containing that type in a new dataset, so that we only need to examine the task-specific candidate datasets $\mathcal{D}^{\mathcal{E}}=\{\mathcal{D}^{e}\mid ∀ e∈\mathcal{E}\}$ , where $\mathcal{D}^{e}=\{(X,Y)\mid ∀(X,Y)∈\mathcal{D}\text{ if }e∈ Y\}$ . We propose Cross-document Rejection sampling (Algorithm 1), which randomly samples $M_{s}$ documents for $S$ such that the required number of entity instances, namely $K\sim\rho K$ shots per entity type, is satisfied. Likewise, we sample $M_{q}$ documents for $Q$ such that there are $K_{q}\sim\rho K_{q}$ shots per entity type. We maintain a table to track the current count of occurrences of each entity type in the task.
Label Conversion.
In the few-shot setting, the majority of a document's regions do not follow the in-task distribution (ITD) of $\mathcal{E}$ . The tokens of these regions are treated as either background or entities of other types from the out-of-task distribution (OTD), and their original labels are converted into the O label. In addition, we map the original labels of ITD tokens to relative labels. For example, under the IO schema, the relative labels range from label id 0 to label id $(N-1)$ .
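Under the IO schema, label conversion reduces to a dictionary lookup. The sketch below is ours; in particular, assigning the O label the integer id $N$ and ordering the in-task types alphabetically are illustrative assumptions, not a specification from the paper.

```python
def convert_labels(token_labels, target_types):
    """Map original token labels to task-relative IO labels: in-task entity
    types get ids 0..N-1 (alphabetical order assumed); all other tokens,
    background or out-of-task entities, collapse to the O label (id N here).
    """
    type_to_id = {t: i for i, t in enumerate(sorted(target_types))}
    o_label = len(target_types)  # assumed id for 'O'
    return [type_to_id.get(lab, o_label) for lab in token_labels]
```

For instance, with target types `{"date", "addr"}`, any label outside the pair maps to the O id regardless of whether it was background or an out-of-task entity.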
Algorithm 1 Cross-document Rejection (XDR) Sampling for Few-shot VDER Task Simulation
1: Require: $N,K,K_{q},\rho$ , $\mathcal{C}_{base}$ , $\mathcal{C}_{novel}$ , $\mathcal{D}$ .
2: Randomly sample $N$ entity types from either $\mathcal{C}_{base}$ or $\mathcal{C}_{novel}$ and obtain $\mathcal{E}$ .
3: Initialize: $S=\emptyset$ , $Q=\emptyset$
4: Initialize: $\mathcal{D}^{\mathcal{E}}=\{\mathcal{D}^{e}|∀ e∈\mathcal{E}\}$ from $\mathcal{D}$ .
5: Initialize: $N$ integers $train\_count[e]=0$ for $∀ e∈\mathcal{E}$ .
6: Initialize: $N$ integers $test\_count[e]=0$ for $∀ e∈\mathcal{E}$ .
7: // Document sampling for $S$
8: while $\min_{e∈\mathcal{E}}train\_count[e]<K$ do
9: Find the least frequent entity type in the current task, i.e., $\hat{e}=\text{argmin }_{e∈\mathcal{E}}train\_count[e]$ .
10: Sample a document $(X_{j},Y_{j})$ from $\mathcal{D}^{\hat{e}}$
11: Add $(X_{j},Y_{j})$ to $S$
12: for $e∈\mathcal{E}$ do
13: Remove the selected document from each candidate dataset: $\mathcal{D}^{e}\xleftarrow[]{}\mathcal{D}^{e}\setminus\{(X_{j},Y_{j})\}$
14: Update $train\_count[e]$ if $Y_{j}$ contains entity type $e$ .
15: if $train\_count[e]>\rho K$ then
16: Mask $(train\_count[e]-\rho K)$ instances of type- $e$ by setting token labels to -1
17: end if
18: end for
19: end while
20: // Document sampling for $Q$
21: while $\min_{e∈\mathcal{E}}test\_count[e]<K_{q}$ do
22: Find the least frequent entity type in the current task, i.e., $\hat{e}=\text{argmin }_{e∈\mathcal{E}}test\_count[e]$ .
23: Sample a document $(X_{j},Y_{j})$ from $\mathcal{D}^{\hat{e}}$
24: Add $(X_{j},Y_{j})$ to $Q$
25: for $e∈\mathcal{E}$ do
26: Remove the selected document from each candidate dataset: $\mathcal{D}^{e}\xleftarrow[]{}\mathcal{D}^{e}\setminus\{(X_{j},Y_{j})\}$
27: Update $test\_count[e]$ if $Y_{j}$ contains entity type $e$ .
28: if $test\_count[e]>\rho K_{q}$ then
29: Mask $(test\_count[e]-\rho K_{q})$ instances of type- $e$ by setting token labels to -1
30: end if
31: end for
32: end while
33: Label conversion for $∀(X_{j},Y_{j})∈ S\cup Q$ .
34: return: $\mathcal{T}=\{S,Q,\mathcal{E}\}$
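The core loop of Algorithm 1 for one split (support or query) can be sketched in Python. This is a simplified illustration: the masking of instances beyond $\rho K$ (lines 15-17 and 28-30) is elided, and the data-structure names are our assumptions.

```python
import random

def xdr_sample(candidates, targets, K, rng=None):
    """One split of Cross-document Rejection (XDR) sampling, Algorithm 1.
    candidates: maps each target entity type e to the list of documents
    containing e; a document is a (tokens, labels) pair. Loops until every
    target type has at least K occurrences among the selected documents.
    """
    rng = rng or random.Random(0)
    count = {e: 0 for e in sorted(targets)}
    # Work on copies so the global candidate pools are not consumed.
    pools = {e: list(docs) for e, docs in candidates.items()}
    selected = []
    while min(count.values()) < K:
        # Target the currently least-covered entity type (line 9/22).
        e_hat = min(count, key=count.get)
        doc = rng.choice(pools[e_hat])
        selected.append(doc)
        for e in targets:
            if doc in pools[e]:
                pools[e].remove(doc)  # lines 13/26
            count[e] += sum(lab == e for lab in doc[1])
            # Algorithm 1 additionally masks instances beyond rho*K; omitted.
    return selected
```

Greedily chasing the least-covered type keeps the number of sampled documents small, which matters because every extra document adds out-of-task tokens to the episode.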
B.3 Dataset Variants
We fix the testing shot at $K_{q}=4$ . We propose two variants of the meta-dataset, each of which pays attention to different challenges in few-shot learning; their statistics are summarized in Table 2. FewVEX(S) focuses on single-domain receipt understanding under the N-way K-shot setting: the training and testing classes both come from CORD, and the goal is to learn domain-invariant meta-parameters. FewVEX(M) focuses on learning domain-agnostic meta-parameters from a combination of receipt and form understanding, so receipt and form documents may appear in the same task.
Appendix C Experimental Setups
C.1 LLM-based Multimodal Encoder
We pre-train the multimodal Transformer on the IIT-CDIP dataset Harley et al. (2015). It should be noted that this paper does not focus on the pre-training technique. In fact, our framework does not require a well pre-trained encoder, since meta-learning further meta-tunes the pre-trained encoder to capture the domain knowledge of $P(\mathcal{T})$ . Thus, we stop pre-training once the model reaches $81.5\%$ token classification accuracy.
C.2 Training Parallelism
We employ the episodic training pipeline to learn the meta-parameters from training tasks (i.e., episodes). At each meta-training step, a total of $\tau$ episodes are trained and then validated to obtain the meta-gradients used for updating meta-parameters.
Both meta-training and meta-testing were run in a multi-process manner. Each of our experiments ran on a total of 4 machines, each with 8 local TPU devices. Since the parameter size of the Transformer-based encoder is large, we use the 8 devices on each machine to train one single episode in parallel. That is, at each meta-training step, a total of $4$ tasks are used to compute the meta-gradients.
Both the support (train) and query (test) documents of a task are divided and assigned to the 8 devices. The prototypes, the nearest neighbors of data points, or the parameters adapted on the local support set are computed on each local device. For validation on the query set, however, we must consider the scope of the entire task across the local devices. Therefore, we employ Federated Learning techniques Zinkevich et al. (2010); Pillutla et al. (2022); Chen and Zhang (2022a); Tian et al. (2022) operating on multiple devices for distributed within-task adaptation, where we collect the locally adapted parameters (at each inner-loop step) or the prototypes from the 8 devices of a single episode and average them. Specifically, the training parallelism for each episode/task consists of 4 steps: (1) on each device, we first adapt a model based on the partial support documents located on that device; (2) we then collect the adapted knowledge from each of the 8 local devices and aggregate it; (3) on each device, we apply the aggregated knowledge to the partial query documents; and (4) the validation losses on the query subsets are collected from all devices and averaged.
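Step (2), aggregating the per-device adapted knowledge, amounts to a FedAvg-style parameter average. The sketch below is ours: representing each device's adapted parameters (or prototypes) as a dict of NumPy arrays is an assumption for illustration, not the production pipeline.

```python
import numpy as np

def federated_average(local_params):
    """Aggregate within-task knowledge across devices by averaging
    (step 2 of the four-step parallelism above). local_params: list of
    per-device dicts mapping parameter names to arrays of equal shape.
    """
    n = len(local_params)
    keys = local_params[0].keys()
    # Elementwise mean over the n devices, parameter by parameter.
    return {k: sum(p[k] for p in local_params) / n for k in keys}
```

The same averaging applies whether the "knowledge" is adapted weights (gradient-based methods) or class prototypes (metric-based methods).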
C.3 Baselines
There are mainly two families of approaches to Few-shot VDER. (1) Meta-learning based approaches. Our proposed strategies can improve both metric-based and gradient-based meta-learning methods. To validate our arguments, we compare ContrastProtoNet with its metric-based meta-learning baseline ProtoNet Snell et al. (2017), and we compare ANIL+HC with its gradient-based meta-learning baseline ANIL Raghu et al. (2019), among others. Extending our work to SOTA meta-learning methods is left as future work. (2) Non-meta-learning based approaches. We did not present a comparison with existing non-meta-learning based Few-shot VDER techniques Wang and Shang (2022); Wang et al. (2023b) for the following reasons:
- Existing non-meta-learning based Few-shot VDER techniques primarily address document-level scenarios (entity occurrences do not vary from one document to another). In contrast, our paper focuses on the entity level (entity occurrences vary from one document to another). It is thus unfair to compare methods that were designed under dissimilar problem settings.
- We have conducted comparative experiments by applying Wang and Shang (2022), a non-meta-learning based few-shot multimodal NER technique, to our specific problem setting with our BERT-based LLM. With the same number of fine-tuning steps (T=15) and the same learning rate, the F1 score of Wang and Shang (2022) in the 4-way 4-shot setting is only 0.115, whereas the F1 scores of all gradient-based meta-learning methods (including our proposed method) are over 0.5 and those of all metric-based methods (including our proposed method) are over 0.23. However, it may not be fair to compare with Wang and Shang (2022) since our paper studies a different setup. Despite this nuance, we intend to incorporate these results into the revised version of our paper.
C.4 Hyperparameters
We summarize the hyperparameters in Table 5.
| Hyperparameter | Value |
| --- | --- |
| $\rho$ | 3 |
| $\gamma$ | 0.6 |
| $U$ | 20 |
| $K_{q}$ | 4 |
Table 5: Hyperparameters.
Figure 4: Learned class distribution of a training task and a testing task of 4-way 4-shot setting. The meta-parameters are trained using ContrastProtoNet on FewVEX(S). Solid points represent train (support) tokens, cross points represent val/test (query) tokens, and the triangle points represent prototypes.
Figure 5: Visualization under 4-way 4-shot and 4-way 1-shot settings of FewVEX(S), for ANIL and ANIL+HC.
Figure 6: Visualization and ROC curves of different methods on the 4-way 4-shot setting of FewVEX(S). For each method, the left subfigure is the tSNE visualization of the learned embeddings of in-task distribution (ITD) entities of a randomly chosen meta-testing task, where different colors indicate different entity types; the middle subfigure shows the tSNE visualization of the learned embeddings of all tokens in the same meta-testing task, where the ITD entities are represented as red points and the out-of-task distribution (OTD) entities or background are represented as blue points; the right subfigure shows the ROC curves of all meta-testing tasks, where each colored line corresponds to one task, representing how ITD is distinguished from OTD based on the model's output logits.
Appendix D Evaluation Methods
D.1 Quantitative Metrics
We consider two types of quantitative metrics.
Overall Performance.
Following Park and Kim (2020); Xu et al. (2020), we use precision (P), recall (R), and micro F1-score over meta-testing tasks. We use the I/O tagging schema and the "seqeval" Nakayama (2018) tool to compute P/R/F1.
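As a minimal sketch of what this metric computes (the paper uses the seqeval tool; `io_spans` and `micro_prf1` below are hypothetical pure-Python stand-ins for illustration): under the I/O schema, consecutive tokens sharing the same `I-<type>` tag form one entity span, and a predicted span counts as a true positive only if both its type and its boundaries exactly match a gold span.

```python
def io_spans(tags):
    """Extract (entity_type, start, end) spans from an I/O tag sequence.

    Consecutive tokens with the same "I-<type>" tag form one entity span.
    """
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing span
        if start is not None and (tag == "O" or tag != tags[start]):
            spans.append((tags[start][2:], start, i - 1))
            start = None
        if tag != "O" and start is None:
            start = i
    return spans

def micro_prf1(true_seqs, pred_seqs):
    """Micro-averaged precision/recall/F1 over exact-match entity spans."""
    tp = n_true = n_pred = 0
    for t, p in zip(true_seqs, pred_seqs):
        ts, ps = set(io_spans(t)), set(io_spans(p))
        tp += len(ts & ps)       # span correct only if type AND boundaries match
        n_true += len(ts)
        n_pred += len(ps)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_true if n_true else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Hypothetical example: the truncated "I-DATE" span is a boundary miss.
y_true = [["I-DATE", "I-DATE", "O", "I-TOTAL"], ["O", "I-ADDR"]]
y_pred = [["I-DATE", "O",      "O", "I-TOTAL"], ["O", "I-ADDR"]]
p, r, f1 = micro_prf1(y_true, y_pred)   # p = r = f1 = 2/3
```

Note that micro-averaging pools true positives across all meta-testing documents before computing the ratios, so entity types with many occurrences weigh more than rare ones.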
Task Specificity (TS).
In the proposed framework, we solve out-of-distribution (OOD) detection as a subtask to improve task personalization and avoid spurious features. We calculate an ITD score for each data point representing how likely it is to belong to the task-specific distribution. To evaluate how well the learned meta-learners distinguish the in-task distribution (ITD) from the out-of-task distribution (OTD), we calculate the AUROC Xiao et al. (2020) using the ITD scores over all test episodes. A higher AUROC value indicates better TS performance; a random-guessing detector corresponds to an AUROC of 50%. We use the "sklearn.metrics" Varoquaux et al. (2015) tool to compute the AUROC and plot ROC curves.
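As a hedged sketch of the quantity being measured (the paper uses sklearn.metrics; the `auroc` helper below is a hypothetical pure-Python equivalent): AUROC reduces to the Mann-Whitney statistic, i.e. the probability that a randomly chosen ITD token receives a higher ITD score than a randomly chosen OTD token, counting ties as 1/2.

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic.

    scores: per-token ITD scores (higher = more likely in-task distribution).
    labels: 1 for ITD tokens, 0 for OTD tokens.
    Returns P(score_ITD > score_OTD) + 0.5 * P(tie); 0.5 = random guessing.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one ITD and one OTD token")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])   # 1.0: perfect separation
```

This pairwise form makes the 50% baseline explicit: a detector whose scores are independent of the ITD/OTD label wins half of the pairs in expectation.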
D.2 Visualization
To visualize the TS, we plot the ROC curves of all the meta-testing tasks, where each curve represents one task. Another visualization of TS shows how ITD and OTD are distinguished from each other: we randomly select a testing task and use tSNE van der Maaten and Hinton (2008) to visualize the learned embeddings of all the tokens in the task, where ITD tokens are denoted as red points and OTD tokens as blue points.
Furthermore, we use tSNE to visualize the learned embeddings of only the ITD token instances in the task, where different colors represent different entity types.
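The per-task ROC curves above are obtained by sweeping a decision threshold over the per-token ITD scores. A minimal pure-Python sketch (the paper uses sklearn.metrics; `roc_points` is a hypothetical stand-in, and it handles tied scores naively by emitting one point per token rather than grouping ties):

```python
def roc_points(scores, labels):
    """ROC curve points (FPR, TPR) from per-token ITD scores.

    labels: 1 = ITD token, 0 = OTD token. Sweeps a descending threshold,
    classifying tokens with score above the threshold as ITD.
    """
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    points = [(0.0, 0.0)]                      # threshold above every score
    for _, y in sorted(zip(scores, labels), reverse=True):
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points                              # ends at (1.0, 1.0)

roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
# [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
```

Plotting one such point list per meta-testing task yields the family of colored curves; a curve hugging the top-left corner means the task's ITD tokens are cleanly separated from OTD.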
Appendix E Additional Results
We present additional visualization results in Figures 4, 5, and 6.