Prompt engineering for text generation
==Chain-of-Thought Prompting==
{{see also|Chain of Thought Prompting}}
Chain-of-Thought (CoT) prompting is a technique introduced by Wei et al. (2022) to generate a sequence of short sentences describing step-by-step reasoning, known as reasoning chains or rationales, leading to the final answer. CoT prompting is particularly useful for complex reasoning tasks when applied to large language models (e.g., those with over 50 billion parameters), while simpler tasks may benefit only marginally.

===Types of CoT Prompts===
There are two main types of CoT prompting:

====Few-shot CoT====
Few-shot CoT prompting involves providing the model with a limited number of demonstrations, each containing either manually written or model-generated high-quality reasoning chains. Examples of such demonstrations appear in the original paper (Wei et al. 2022), showing how this type of prompting is used to solve various mathematical reasoning problems.

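For illustration, the following is a minimal sketch of a few-shot CoT prompt in Python, using a worked arithmetic demonstration in the style of Wei et al. (2022). The commented-out <code>generate</code> call is a hypothetical stand-in for any text-completion API, not a specific library function.

<syntaxhighlight lang="python">
# A few-shot CoT prompt: the demonstration pairs a question with a
# written-out reasoning chain (the rationale) that ends in the answer,
# in the style of Wei et al. (2022). The model is then expected to
# produce a similar rationale for the new question.
FEW_SHOT_COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and
bought 6 more, how many apples do they have?
A:"""

# `generate` is a hypothetical placeholder for any text-completion
# function (prompt string -> completion string):
# answer = generate(FEW_SHOT_COT_PROMPT)
# A completion in the demonstrated style would read: "They used 20 of
# 23 apples, leaving 3. 3 + 6 = 9. The answer is 9."
</syntaxhighlight>
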
====Zero-shot CoT====
Zero-shot CoT prompting uses natural language statements, such as "Let's think step by step" or "Let's work this out step by step to be sure we have the right answer," to explicitly encourage the model to generate reasoning chains. Following this, a statement like "Therefore, the answer is" is used to prompt the model to produce the final answer (Kojima et al. 2022; Zhou et al. 2022).

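As a sketch, the two-stage zero-shot CoT procedure described by Kojima et al. (2022) can be written as follows; <code>generate</code> is again a hypothetical text-completion function rather than a specific API.

<syntaxhighlight lang="python">
def zero_shot_cot(question: str, generate) -> str:
    """Two-stage zero-shot CoT prompting (Kojima et al. 2022).

    `generate` is a hypothetical text-completion function
    (prompt string -> completion string), not a specific API.
    """
    # Stage 1: the trigger phrase elicits a reasoning chain.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    rationale = generate(reasoning_prompt)

    # Stage 2: feed the rationale back and prompt for the final answer.
    answer_prompt = (
        f"{reasoning_prompt} {rationale}\n"
        "Therefore, the answer is"
    )
    return generate(answer_prompt)
</syntaxhighlight>
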
===Tips and Extensions===
Several techniques have been proposed to improve the accuracy and effectiveness of CoT prompting:
