## Diagram: Comparison of Prompting Techniques for AI Models
### Overview
The image is a technical diagram comparing three different methods for prompting AI models to solve math word problems. It is structured as a three-column layout, with each column representing a distinct prompting technique: "Standard Prompting," "Chain-of-Thought (CoT)," and "Contrastive Chain-of-Thought." Each column contains two example problems, showing the "Model Input" (the prompt given to the AI) and the resulting "Model Output." Visual icons (a person with a question mark, a robot, and a laptop) and colored highlights are used to differentiate elements and indicate correctness.
### Components/Axes
The diagram is organized into three vertical panels, each with a clear header:
1. **Left Panel:** "Standard Prompting"
2. **Center Panel:** "Chain-of-Thought (CoT)"
3. **Right Panel:** "Contrastive Chain-of-Thought"
Within each panel, the flow is vertical:
* **Top Section:** "Model Input" box containing the problem statement and, for CoT methods, an explanation.
* **Bottom Section:** "Model Output" box containing the final answer or explanation.
* **Icons:** A person icon (👤) precedes the user's question. A robot icon (🤖) precedes the model's explanation. A laptop icon (💻) precedes the model's final answer.
* **Correctness Indicators:** A red "X" (❌) marks incorrect model outputs. A green checkmark (✅) marks a correct model output.
* **Text Highlights:** In the "Contrastive Chain-of-Thought" column, specific numerical steps in explanations are highlighted with colored backgrounds (yellow, purple, blue, green) to draw attention to the reasoning process.
### Detailed Analysis
**Column 1: Standard Prompting**
* **Example 1 Input:**
* Question: "James writes a 3-page letter to 2 different friends twice a week. How many pages does he write a year?"
* **Example 1 Output:**
* Answer: "624"
* Status: Marked with a red X (❌).
* **Example 2 Input:**
* Question: "James has 30 teeth. His dentist drills 4 of them and caps 7 more teeth than he drills. What percentage of James' teeth does the dentist fix?"
* **Example 2 Output:**
* Answer: "37.5%"
* Status: Marked with a red X (❌).
**Column 2: Chain-of-Thought (CoT)**
* **Example 1 Input:**
* Question: (Identical to Standard Prompting Example 1)
* Explanation: "He writes each friend 3*2=6 pages a week. So he writes 6*2=12 pages every week. That means he writes 12*52=624 pages a year."
* **Example 1 Output:**
* Explanation: (Identical to the input explanation)
* Status: Marked with a red X (❌).
* **Example 2 Input:**
* Question: (Identical to Standard Prompting Example 2)
* **Example 2 Output:**
* Explanation: "The dentist fixes a total of 4 + 7 = 11 teeth. To find the percentage, we divide the number of teeth fixed by the total number of teeth and multiply by 100: 11/30 x 100 = 36.67%"
* Status: Marked with a red X (❌).
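The CoT input format transcribed above can be sketched as a simple prompt-assembly helper. The `build_cot_prompt` name and the `Q:`/`A:` framing are illustrative assumptions, not shown in the figure; the demonstration and question text are taken from it verbatim.

```python
# Sketch of a Chain-of-Thought prompt: one worked demonstration
# (question + step-by-step reasoning) followed by the target question.

demo_question = (
    "James writes a 3-page letter to 2 different friends twice a week. "
    "How many pages does he write a year?"
)
demo_reasoning = (
    "He writes each friend 3*2=6 pages a week. "
    "So he writes 6*2=12 pages every week. "
    "That means he writes 12*52=624 pages a year."
)
target_question = (
    "James has 30 teeth. His dentist drills 4 of them and caps 7 more "
    "teeth than he drills. What percentage of James' teeth does the "
    "dentist fix?"
)

def build_cot_prompt(demo_q: str, demo_r: str, question: str) -> str:
    """Concatenate one reasoning demonstration with the new question."""
    return f"Q: {demo_q}\nA: {demo_r}\n\nQ: {question}\nA:"

prompt = build_cot_prompt(demo_question, demo_reasoning, target_question)
print(prompt)
```

The trailing `A:` leaves the model to continue with its own reasoning and answer, which is where the flawed 36.67% output in the figure would appear.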
**Column 3: Contrastive Chain-of-Thought**
* **Example 1 Input:**
* Question: (Identical to previous examples)
* **Correct Explanation:** "He writes each friend **3*2=6** pages a week. So he writes **6*2=12** pages every week. That means he writes **12*52=624** pages a year." (Highlights: yellow, purple, blue)
* **Wrong Explanation:** "He writes each friend **12*52=624** pages a week. So he writes **3*2=6** pages every week. That means he writes **6*2=12** pages a year." (Highlights: blue, yellow, purple)
* **Example 1 Output:**
* No separate output box is shown for this example in the diagram.
* **Example 2 Input:**
* Question: (Identical to previous examples)
* **Example 2 Output:**
* Explanation: "The dentist drills 4 teeth, so there are 30 - 4 = 26 teeth left. The dentist caps 7 more teeth than he drills, so he caps 4 + 7 = 11 teeth. Therefore, the dentist fixes a total of 4 + 11 = 15 teeth. To find the percentage of teeth the dentist fixes, we divide the number of teeth fixed by the total number of teeth and multiply by 100: 15/30 x 100 = 50%"
* Status: Marked with a green checkmark (✅).
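The contrastive input format, with its paired explanations, can be sketched the same way. The `build_contrastive_prompt` name and the "Correct explanation:" / "Wrong explanation:" labels are assumed framing for illustration; the two explanation strings are taken from the figure.

```python
# Sketch of a Contrastive Chain-of-Thought prompt: the demonstration
# pairs a correct explanation with a wrong one before the target question.

demo_question = (
    "James writes a 3-page letter to 2 different friends twice a week. "
    "How many pages does he write a year?"
)
correct_explanation = (
    "He writes each friend 3*2=6 pages a week. So he writes 6*2=12 pages "
    "every week. That means he writes 12*52=624 pages a year."
)
wrong_explanation = (
    "He writes each friend 12*52=624 pages a week. So he writes 3*2=6 "
    "pages every week. That means he writes 6*2=12 pages a year."
)
target_question = (
    "James has 30 teeth. His dentist drills 4 of them and caps 7 more "
    "teeth than he drills. What percentage of James' teeth does the "
    "dentist fix?"
)

def build_contrastive_prompt(q: str, good: str, bad: str, target: str) -> str:
    """Show both a valid and an invalid reasoning path, then ask anew."""
    return (
        f"Q: {q}\n"
        f"Correct explanation: {good}\n"
        f"Wrong explanation: {bad}\n\n"
        f"Q: {target}\nA:"
    )

prompt = build_contrastive_prompt(
    demo_question, correct_explanation, wrong_explanation, target_question
)
print(prompt)
```

Note that the wrong explanation reuses the same numerical steps in a scrambled order, mirroring the color-highlight pairing described above.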
### Key Observations
1. **Progression of Complexity:** The prompting techniques evolve from simple question-and-answer (Standard), to question with step-by-step reasoning (CoT), to question with paired correct and incorrect reasoning demonstrations (Contrastive CoT).
2. **Error Patterns:** Both "Standard Prompting" and basic "Chain-of-Thought" produce incorrect answers for the teeth percentage problem (37.5% and 36.67% vs. the correct 50%). The CoT explanation contains a logical flaw: it treats 4 + 7 = 11 as the total number of teeth fixed, when 11 is only the number capped ("7 more teeth than he drills"); the correct total is 4 drilled + 11 capped = 15 teeth, giving 15/30 = 50%.
3. **Visual Learning Cue:** The "Contrastive Chain-of-Thought" column uses colored highlights to visually link numerical steps between a correct and an incorrect explanation for the first problem, demonstrating the method's core idea of learning from contrast.
4. **Correctness Outcome:** Only the final output in the "Contrastive Chain-of-Thought" column (for the teeth problem) is marked as correct (✅), suggesting this method is presented as the most effective.
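The competing arithmetic in the teeth problem can be checked in a few lines:

```python
# Verify the teeth-percentage arithmetic from the figure.
total_teeth = 30
drilled = 4
capped = drilled + 7      # "caps 7 more teeth than he drills" -> 11
fixed = drilled + capped  # 4 drilled + 11 capped -> 15

correct_pct = fixed / total_teeth * 100  # the Contrastive CoT answer

# The flawed CoT path treats 4 + 7 = 11 as the total fixed count:
faulty_pct = (drilled + 7) / total_teeth * 100

print(correct_pct)            # 50.0
print(round(faulty_pct, 2))   # 36.67
```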
### Interpretation
This diagram serves as a pedagogical or research illustration comparing AI prompting strategies. It argues visually that:
* **Standard Prompting** is prone to errors as the model jumps to an answer without showing work.
* **Chain-of-Thought Prompting** improves transparency by revealing the model's reasoning process but does not guarantee correctness, as the model can still make logical errors in its step-by-step explanation.
* **Contrastive Chain-of-Thought Prompting** is presented as a superior method. By providing the model with both a correct and an incorrect reasoning path (a "contrastive pair"), it appears to help the model avoid common pitfalls and arrive at the correct solution. The green checkmark on the final output implies this method successfully guides the model to the right answer (50%) where the others failed.
The underlying message is that how you frame a problem for an AI—especially by including examples of both right and wrong reasoning—significantly impacts its performance on tasks requiring multi-step logic. The diagram is likely from a paper or presentation advocating for the adoption of contrastive techniques in prompt engineering.