## Diagram: LLM Prompting Strategies for ConvFinQA
### Overview
This image presents three flowcharts, each illustrating a different prompting strategy for Large Language Models (LLMs) in the context of ConvFinQA (Conversational Financial Question Answering). Each flowchart details a multi-step process in which an LLM extracts reasoning, generates a program, or provides an answer, with the output of one LLM step chained as input to the next. The diagrams are titled "Figure 7: ZS-FinDSL prompt for ConvFinQA", "Figure 8: ZS-STD prompt for ConvFinQA", and a third, unnamed diagram that follows a similar two-stage structure.
### Components/Axes
The image is composed of three vertically stacked diagrams. Each diagram features:
* **Process Headers (Light Brown/Gold Rectangles)**: These labels describe the overall task of a specific flow, such as "Reasoning extraction", "Program extraction", "LLM answering", and "Answer extraction". They are positioned at the top of each processing column.
* **Input Prompt Boxes (Light Blue Rectangles)**: These boxes contain the textual prompts given to the LLM. They are positioned below the process headers.
* **LLM Component (Multi-colored Brain-like Icon)**: Represented by a stylized brain icon with multiple colored spheres (purple, blue, green, orange, red, yellow), labeled "LLM". This signifies the Large Language Model processing the input. It is positioned below the input prompt boxes.
* **LLM Output Boxes (Pink Rectangles)**: These boxes contain the textual output generated by the LLM. They are positioned below the LLM component.
* **Flow Arrows (Dark Gray)**: Straight arrows indicate a direct downward flow from input to LLM, and from LLM to output. Curved arrows indicate the output of one LLM process serving as input to another LLM process, flowing from the bottom-left output box to the top-right input box.
* **Legends (bottom-center of each figure)**: Each figure includes a legend defining different types of prompt elements by color.
* **Figure 7 Legend (from left to right)**:
* Black square: "Signifier"
* Orange square: "Memetic proxy"
* Magenta square: "Constraining behavior"
* Dark Green square: "Meta prompt"
* Dark Blue square: "Input"
* **Figure 8 Legend (from left to right)**:
* Black square: "Signifier"
* Orange square: "Memetic proxy"
* Dark Blue square: "Input"
* **Bottom Diagram Legend (from left to right)**:
* Black square: "Signifier"
* Orange square: "Memetic proxy"
* Dark Green square: "Meta prompt"
* Dark Blue square: "Input"
### Detailed Analysis
#### Figure 7: ZS-FinDSL prompt for ConvFinQA
This diagram illustrates a two-stage process: "Reasoning extraction" followed by "Program extraction".
1. **Reasoning extraction (Left Flow)**:
* **Input Prompt (Light Blue Box, top-left)**:
* "Read the following passage and then answer the questions:" (Black text, Signifier)
* "**Passage**: text + table" (Black text, Signifier)
* "**Questions**: ask question?" (Black text, Signifier)
* "**Answer**: Answer the questions by finding the relevant values and performing" (Black text, Signifier)
* "step by step calculations." (Black text, Signifier)
* "Answer:" (Black text, Signifier)
* **LLM Component**: Processes the prompt.
* **Output (Pink Box, bottom-left)**: "Answer with reasoning from LLM."
2. **Program extraction (Right Flow)**:
* **Input Prompt (Light Blue Box, top-right)**: This prompt receives input from the "Answer with reasoning from LLM." output.
* "**Questions**: ask question?" (Black text, Signifier)
* "**Answer**: Answer with reasoning from LLM." (Black text, Signifier)
* "**Task**: From the above question-answer, extract the calculations that" (Black text, Signifier)
* "were performed to arrive at the answer to the last question. The" (Black text, Signifier)
* "calculations should be provided in the following format:" (Black text, Signifier)
* "[\"PROGRAM\":{\"#0\":{\"OPERATION\":[arithmetic/logic]," (Black text, Signifier)
* "ARG1:\"[float/int]\", ARG2:\"[float/int]\"}," (Black text, Signifier)
* "\"#1\":{\"OPERATION\":[arithmetic/logic]," (Black text, Signifier)
* "ARG1:\"[float/int]\", ARG2:\"[float/int]\"}, ...}," (Black text, Signifier)
* "\"ANSWER\": \"[numerical/boolean]\"}" (Black text, Signifier)
* "Operation should strictly be restricted to {add, subtract, multiply," (Magenta text, Constraining behavior)
* "divide, exponent, greater-than, max, min} only." (Magenta text, Constraining behavior)
* "When evaluated the program should only generate numerical or" (Magenta text, Constraining behavior)
* "boolean values." (Magenta text, Constraining behavior)
* "Solution:" (Black text, Signifier)
* **LLM Component**: Processes this prompt.
* **Output (Pink Box, bottom-right)**: "Program generated by the LLM."
* **Flow**: The output of the "Reasoning extraction" LLM ("Answer with reasoning from LLM.") feeds into the "Program extraction" LLM's input prompt via a curved dark gray arrow.
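The program format in the "Program extraction" prompt can be made concrete with a small evaluator. This is an illustrative sketch, not code from the paper: the key names (`PROGRAM`, `OPERATION`, `ARG1`, `ARG2`) and the allowed operation set are taken from the prompt text in Figure 7, while the evaluation logic (steps `#0`, `#1`, ... computed in order, with `#k` arguments referring to earlier results) is an assumption about how such programs are executed.

```python
# Sketch of an evaluator for the Figure 7 program format (assumed semantics).
OPERATIONS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
    "exponent": lambda a, b: a ** b,
    "greater-than": lambda a, b: a > b,
    "max": max,
    "min": min,
}

def evaluate_program(program: dict):
    """Evaluate steps "#0", "#1", ... in order; an argument of the form
    "#k" refers to the result of an earlier step."""
    results = {}
    for step_id in sorted(program, key=lambda s: int(s.lstrip("#"))):
        step = program[step_id]

        def resolve(arg):
            # Step references look up prior results; anything else is a number.
            if isinstance(arg, str) and arg.startswith("#"):
                return results[arg]
            return float(arg)

        op = OPERATIONS[step["OPERATION"]]
        results[step_id] = op(resolve(step["ARG1"]), resolve(step["ARG2"]))
    return results[step_id]

# Example: percentage change (100 - 80) / 80
prog = {
    "#0": {"OPERATION": "subtract", "ARG1": "100", "ARG2": "80"},
    "#1": {"OPERATION": "divide", "ARG1": "#0", "ARG2": "80"},
}
print(evaluate_program(prog))  # 0.25
```

Restricting the operation set, as the magenta "Constraining behavior" lines do, keeps the generated programs within what such an evaluator can execute deterministically.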
#### Figure 8: ZS-STD prompt for ConvFinQA
This diagram illustrates a two-stage process: "LLM answering" followed by "Answer extraction".
1. **LLM answering (Left Flow)**:
* **Input Prompt (Light Blue Box, top-left)**:
* "Read the following passage and then answer the questions:" (Black text, Signifier)
* "**Passage**: text + table" (Black text, Signifier)
* "**Questions**: ask question?" (Black text, Signifier)
* "**Answer**:" (Black text, Signifier)
* **LLM Component**: Processes the prompt.
* **Output (Pink Box, bottom-left)**: "Answer from LLM."
2. **Answer extraction (Right Flow)**:
* **Input Prompt (Light Blue Box, top-right)**: This prompt receives input from the "Answer from LLM." output.
* "**Questions**: ask question?" (Black text, Signifier)
* "**Answer**: Answer from LLM." (Black text, Signifier)
* "The final answer (float/int/boolean) is:" (Black text, Signifier)
* **LLM Component**: Processes this prompt.
* **Output (Pink Box, bottom-right)**: "Final answer generated by the LLM."
* **Flow**: The output of the "LLM answering" LLM ("Answer from LLM.") feeds into the "Answer extraction" LLM's input prompt via a curved dark gray arrow.
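The two-stage flow above can be sketched as a simple prompt chain. The prompt wording is taken from the diagram; `call_llm` is a hypothetical placeholder for any LLM client, not an API from the paper.

```python
# Sketch of the ZS-STD two-stage chain from Figure 8.
def call_llm(prompt: str) -> str:
    # Hypothetical stub: substitute a real model call here.
    return "The revenue grew from 80 to 100, so the answer is 0.25."

def zs_std(passage: str, question: str) -> str:
    # Stage 1: "LLM answering"
    answer_prompt = (
        "Read the following passage and then answer the questions:\n"
        f"Passage: {passage}\n"
        f"Questions: {question}\n"
        "Answer:"
    )
    llm_answer = call_llm(answer_prompt)

    # Stage 2: "Answer extraction" -- the stage-1 output is spliced
    # into the second prompt, mirroring the curved arrow in the diagram.
    extraction_prompt = (
        f"Questions: {question}\n"
        f"Answer: {llm_answer}\n"
        "The final answer (float/int/boolean) is:"
    )
    return call_llm(extraction_prompt)
```

Separating answering from extraction lets the first call produce free-form text while the second call normalizes it into a single comparable value.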
#### Third Diagram (Bottom, Unnamed)
This diagram illustrates a two-stage process: "Reasoning extraction" followed by "Answer extraction". Its first-stage prompt ends with "Let us think step by step.", resembling a zero-shot chain-of-thought trigger.
1. **Reasoning extraction (Left Flow)**:
* **Input Prompt (Light Blue Box, top-left)**:
* "Read the following passage and then answer the questions:" (Black text, Signifier)
* "**Passage**: text + table" (Black text, Signifier)
* "**Questions**: ask question?" (Black text, Signifier)
* "**Answer**: Let us think step by step." (Black text, Signifier)
* "Answer:" (Black text, Signifier)
* **LLM Component**: Processes the prompt.
* **Output (Pink Box, bottom-left)**: "Answer with reasoning from LLM."
2. **Answer extraction (Right Flow)**:
* **Input Prompt (Light Blue Box, top-right)**: This prompt receives input from the "Answer with reasoning from LLM." output.
* "**Questions**: ask question?" (Black text, Signifier)
* "**Answer**: Answer with reasoning from LLM." (Black text, Signifier)
* "The final answer (float/int/boolean) is:" (Black text, Signifier)
* **LLM Component**: Processes this prompt.
* **Output (Pink Box, bottom-right)**: "Final answer generated by the LLM."
* **Flow**: The output of the "Reasoning extraction" LLM ("Answer with reasoning from LLM.") feeds into the "Answer extraction" LLM's input prompt via a curved dark gray arrow.
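The only difference between this diagram's first-stage prompt and Figure 8's is the chain-of-thought trigger appended to the Answer field. A minimal template, with wording taken from the diagram:

```python
# Sketch of the first-stage prompt for the bottom diagram.
def cot_prompt(passage: str, question: str) -> str:
    return (
        "Read the following passage and then answer the questions:\n"
        f"Passage: {passage}\n"
        f"Questions: {question}\n"
        "Answer: Let us think step by step.\n"
        "Answer:"
    )
```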