*'''[[Trustworthy]]''' and '''[[Professional]]''' - business proposals, executive summaries, investor pitches


==Self-Consistency Sampling==
Self-consistency sampling is a method for generating multiple outputs with a [[temperature]] greater than 0 and selecting the best candidate among them. The criteria for choosing the best candidate vary by task; a common approach is a [[majority vote]]. For tasks that are easy to validate, such as programming questions, the candidate outputs can be run through an interpreter and checked against unit tests.
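A minimal sketch of this idea, assuming the <code>openai</code> Python client, a hypothetical model name, and an illustrative convention that each sample ends with a line of the form "Answer: ...":

<syntaxhighlight lang="python">
# Self-consistency sampling sketch: draw several completions at temperature > 0,
# extract an answer from each, and keep the most common one (majority vote).
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; any chat model works here
        messages=[{"role": "user", "content": question}],
        temperature=0.7,       # > 0 so the samples actually differ
        n=n_samples,           # draw several candidates in one call
    )
    # Illustrative assumption: each reply ends with a line "Answer: <value>".
    answers = [
        choice.message.content.rsplit("Answer:", 1)[-1].strip()
        for choice in response.choices
    ]
    # Majority vote over the extracted answers.
    return Counter(answers).most_common(1)[0][0]
</syntaxhighlight>

For tasks with unit tests, the majority vote would instead be replaced by running each candidate and keeping one that passes.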
 
==Chain of Thought (CoT) Prompting==
{{see also|Chain of Thought Prompting}}
Chain-of-thought (CoT) prompting, proposed by Wei et al. (2022), prompts the model to generate a sequence of short sentences that describe its reasoning step by step. These sequences, also known as reasoning chains or rationales, eventually lead to the final answer. CoT is particularly beneficial for complicated reasoning tasks and is more effective with large language models (e.g., models with over 50 billion parameters); for simpler tasks it yields only slight improvements.
===Types of CoT Prompts===
There are two primary types of CoT prompting:
*'''Few-shot CoT:''' provides the model with a few demonstrations, each containing manually written or model-generated high-quality reasoning chains.
*'''Zero-shot CoT:''' uses natural language statements, such as "Let's think step by step" or "Let's work this out step by step to be sure we have the right answer," to explicitly encourage the model to first generate reasoning chains and then produce answers (Kojima et al. 2022; Zhou et al. 2022). A minimal example is sketched after this list.
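A minimal zero-shot CoT sketch, again assuming the <code>openai</code> client and a hypothetical model name; the only prompt-engineering step is appending the "think step by step" cue and asking for the answer on a final line:

<syntaxhighlight lang="python">
# Zero-shot CoT sketch: no demonstrations, just a reasoning cue in the prompt.
from openai import OpenAI

client = OpenAI()

question = (
    "A juggler has 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)
prompt = (
    f"{question}\n"
    "Let's think step by step, then give the result on its own line as 'Answer: <number>'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model
    messages=[{"role": "user", "content": prompt}],
    temperature=0,         # deterministic; raise it when combining with self-consistency
)
print(response.choices[0].message.content)  # reasoning steps followed by "Answer: 4"
</syntaxhighlight>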
===Tips and Extensions===
*Self-consistency sampling can enhance reasoning accuracy by generating diverse answers and selecting the majority vote (Wang et al. 2022a).
*Altering example order or using model-generated rationales instead of human-written ones introduces randomness across sample trials; the final answer is then obtained by aggregating the model outputs with a majority vote (Wang et al. 2022b).
*The STaR (Self-Taught Reasoner) method, proposed by Zelikman et al. (2022), can be used when training examples have true answers but no rationales: the language model generates reasoning chains, only those that lead to correct answers are kept, and the model is fine-tuned on the generated rationales, repeating until convergence.
*Prompts whose demonstrations have higher reasoning complexity, measured by the number of reasoning steps in the chains, achieve better performance (Fu et al. 2023).
*Complexity-based consistency takes the majority vote among only the most complex chains out of all generations (Fu et al. 2023); see the sketch after this list.
*Shum et al. (2023) found that CoT prompts containing only complex examples improve accuracy on complex questions but perform poorly on simple ones.
*Changing "Q:" to "Question:" has been shown to be helpful (Fu et al. 2023).
*Ye & Durrett (2022) observed that the benefit of including explanations in the prompt is small to moderate for NLP tasks that involve reasoning over text (e.g., QA and NLI), and that nonfactual explanations are more likely to lead to incorrect predictions.
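A self-contained sketch of complexity-based consistency as described above; approximating complexity by the number of newline-separated steps is an illustrative assumption:

<syntaxhighlight lang="python">
# Complexity-based consistency sketch: vote only among the most complex chains.
from collections import Counter

def complexity_based_vote(chains: list[tuple[str, str]], top_k: int = 3) -> str:
    """chains is a list of (reasoning_text, extracted_answer) pairs."""
    # Rank chains by complexity, approximated by the number of reasoning steps.
    ranked = sorted(chains, key=lambda c: len(c[0].strip().splitlines()), reverse=True)
    top_answers = [answer for _, answer in ranked[:top_k]]
    # Majority vote restricted to the top_k most complex chains.
    return Counter(top_answers).most_common(1)[0][0]

# Example: five sampled chains with their extracted final answers.
samples = [
    ("Step 1 ...\nStep 2 ...\nStep 3 ...", "42"),
    ("Step 1 ...\nStep 2 ...", "41"),
    ("Step 1 ...\nStep 2 ...\nStep 3 ...\nStep 4 ...", "42"),
    ("Step 1 ...", "40"),
    ("Step 1 ...\nStep 2 ...\nStep 3 ...", "42"),
]
print(complexity_based_vote(samples))  # -> "42"
</syntaxhighlight>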
===Iterative Methods with External Queries===
Methods such as Self-Ask (Press et al. 2022), IRCoT (Interleaving Retrieval CoT; Trivedi et al. 2022), and ReAct (Reason + Act; Yao et al. 2023) prompt the model to ask follow-up questions, answer them (often with the help of external search or retrieval), and repeat, constructing the thought process iteratively.
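As a rough illustration of this pattern, the sketch below follows a Self-Ask-style loop: the model is invited to emit "Follow up:" questions, a stand-in lookup function supplies the intermediate answers, and the loop ends when a final answer appears. The prompt wording, model name, and <code>external_lookup</code> stub are assumptions for the example rather than the exact prompts used by Press et al. (2022):

<syntaxhighlight lang="python">
# Self-Ask-style iterative prompting sketch with an external query stub.
from openai import OpenAI

client = OpenAI()

def external_lookup(query: str) -> str:
    # Stand-in for a real search or retrieval call.
    return "(result of looking up: " + query + ")"

def self_ask(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question. If needed, first write 'Follow up:' questions, "
        "wait for 'Intermediate answer:' lines, and finish with "
        "'So the final answer is:'.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",                # assumed model
            messages=[{"role": "user", "content": transcript}],
            temperature=0,
            stop=["Intermediate answer:"],      # pause whenever a follow-up needs answering
        ).choices[0].message.content
        transcript += reply
        if "So the final answer is:" in reply:
            return reply.split("So the final answer is:", 1)[-1].strip()
        if "Follow up:" in reply:
            follow_up = reply.split("Follow up:", 1)[-1].strip().splitlines()[0]
            transcript += f"\nIntermediate answer: {external_lookup(follow_up)}\n"
    return transcript  # fall back to the raw transcript if no final answer emerged
</syntaxhighlight>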


==Prompt Engineering for Code Generation Models==