==Chain of Thought Prompting==
[[Chain of Thought Prompting]] (CoT prompting) is a technique introduced by Wei et al. (2022) to generate a sequence of short sentences describing step-by-step reasoning, known as [[reasoning chains]] or [[rationales]], leading to the final answer. [[CoT prompting]] is particularly useful for complex reasoning tasks when applied to large language models (e.g., those with over 50 billion parameters), while simpler tasks may benefit only marginally.<ref name="119">Wei et al. (2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". https://arxiv.org/abs/2201.11903</ref>
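As a minimal illustration, the sketch below assembles a one-shot CoT prompt in Python. The exemplar (question, worked rationale, answer) mirrors the arithmetic-reasoning style of the examples in Wei et al. (2022); the <code>build_cot_prompt</code> helper and the use of a single exemplar are illustrative assumptions rather than part of any particular library, and the resulting prompt string can be passed to any text-completion model.

<syntaxhighlight lang="python">
# Minimal sketch: assembling a one-shot chain-of-thought prompt.
# The exemplar follows the style of the arithmetic examples in
# Wei et al. (2022); the helper name below is illustrative only.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)


def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model is nudged to produce its own
    reasoning chain before stating the final answer."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"


if __name__ == "__main__":
    print(build_cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch "
        "and bought 6 more, how many apples do they have?"
    ))
</syntaxhighlight>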
===Types of CoT Prompts===