Prompt engineering for text generation

{{see also|Prompt engineering|Prompts|Prompt engineering for image generation}}
==Introduction==
[[Prompt engineering]] is not limited to [[text-to-image generation]] and [[AI-generated art]]; it applies equally to text generation. Various [[templates]] and "[[recipes]]" have been created to optimize the process of providing the most effective textual inputs to the model. OpenAI has published such "recipes" for its language models that can be adapted to different downstream tasks, including [[grammar correction]], [[text summarization]], [[answering questions]], [[generating product names]], and functioning as a [[chatbot]].<ref name="”2”">Oppenlaender, J. (2022). A Taxonomy of Prompt Modifiers for Text-To-Image Generation. arXiv:2204.13988v2</ref>


Prompt engineering with models such as ChatGPT should be avoided in certain scenarios: when 100% reliability is required, when the accuracy of the model's output cannot be evaluated, and when the content to be generated is not in the model's training data.<ref name="”11”"></ref>


==Building Prompts==


In a text-to-text model, the user can insert different parameters in the prompt to modulate its response. The following parameter and prompt examples are taken from Matt Nigh's GitHub:
The model can also be asked to act as a technical advisor, mentor, quality assurance engineer, code reviewer, debugging assistant, compliance checker, code optimization specialist, accessibility expert, search engine optimization specialist, or performance analyst. Examples of prompts for these use cases are available [https://github.com/mattnigh/ChatGPT3-Free-Prompt-List here].
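A minimal sketch of this kind of role prompting with the pre-1.0 <code>openai</code> Python package (the reviewer persona, model name, and code snippet below are illustrative choices):

<syntaxhighlight lang="python">
import openai  # assumes the pre-1.0 openai package and OPENAI_API_KEY in the environment

# The system message assigns the role; the user message carries the actual request.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Act as a strict code reviewer. Point out bugs, style problems, "
                    "and missing error handling."},
        {"role": "user",
         "content": "def divide(a, b):\n    return a / b"},
    ],
    temperature=0.2,  # a low temperature keeps the review focused
)

print(response["choices"][0]["message"]["content"])
</syntaxhighlight>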


==Prompt Engineering for Code Generation Models==
[[File:Coding_model_diagram1.png|alt=Figure 2. Prompt to completion.|thumb|400x400px|Figure 2. From prompt to completion.]]
Generate [[code]] using [[models]] like the [[OpenAI Codex]].
#Show examples - show the model examples of what you want.


===Task===
Give the coding model a high-level task description. To improve the quality of the generated code, it's recommended to start the prompt with a broad description of the task at hand. For example, if you want to generate Python code to plot data from a standard [[dataset]], you can provide a prompt like this:  
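One possible prompt of this kind, written as the leading comment that a code-completion model would continue (a sketch; the choice of the iris dataset and of matplotlib is an illustrative assumption):

<syntaxhighlight lang="python">
# A broad, high-level description of the task, phrased as the start of a Python file.
prompt = (
    "# Python 3\n"
    "# Load the iris dataset and draw a scatter plot of sepal length against sepal width,\n"
    "# colouring each point by species.\n"
)
</syntaxhighlight>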


However, sometimes the generated code may not be optimal, in which case you can provide more specific instructions, such as importing libraries before using them. By combining a high-level task description with detailed user instructions, you can create a more effective prompt for the coding model.
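A sketch of such a combined prompt and of sending it for completion (the model name, the pre-1.0 <code>openai</code> package, and the specific library instructions are assumptions for illustration):

<syntaxhighlight lang="python">
import openai  # assumes the pre-1.0 openai package and OPENAI_API_KEY in the environment

# High-level task description plus specific user instructions
# (which libraries to use, and to import them before using them).
prompt = (
    "# Python 3\n"
    "# Load the iris dataset and draw a scatter plot of sepal length against sepal width,\n"
    "# colouring each point by species.\n"
    "# Import matplotlib.pyplot and sklearn.datasets before using them.\n"
)

completion = openai.Completion.create(
    model="code-davinci-002",  # illustrative Codex-style model name
    prompt=prompt,
    max_tokens=256,
    temperature=0,  # deterministic output is usually preferable for code
)

print(completion["choices"][0]["text"])
</syntaxhighlight>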


===Examples===
Give the coding model examples. Imagine you prefer a style of writing Python code that differs from what the model produces; for instance, when adding two numbers you prefer to label the arguments differently. The key to working with models like Codex is to clearly communicate what you want them to do, and one effective way to do this is to provide examples for Codex to learn from and match its output to your preferred style. If you give the model a longer prompt that includes such an example, it will name the arguments in the same manner as in the example.
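A minimal sketch of such an example-driven prompt (the preferred argument names shown here are purely illustrative):

<syntaxhighlight lang="python">
# The prompt first shows one function written in the preferred style
# (descriptive argument names), then asks for a new function in the same style.
prompt = (
    "# Example in my preferred style:\n"
    "def add(number_one, number_two):\n"
    "    return number_one + number_two\n"
    "\n"
    "# Write a function that multiplies two numbers, in the same style.\n"
)
</syntaxhighlight>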


See also [[zero shot, one shot and few shot learning]]


===Context===
If you want to use a library that the coding model is not familiar with, you can guide it by describing the API library beforehand.


To correct this, you can show the model the API definition, including function signatures and examples, so that it can generate code that follows the API correctly. As demonstrated in the example, by providing high-level context in the form of the API definition and examples, the model can understand what you want it to do and generate more accurate code.
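As a sketch of this idea, the prompt can open with the relevant part of the API before stating the task (the <code>minimalist</code> plotting library and its functions below are hypothetical and serve only to illustrate the structure):

<syntaxhighlight lang="python">
# The prompt describes the unfamiliar API (signatures plus a short usage example)
# before the request, so the model can imitate the correct calls.
prompt = (
    "# The 'minimalist' plotting library exposes:\n"
    "#   minimalist.line(xs: list[float], ys: list[float], label: str) -> None\n"
    "#   minimalist.show() -> None\n"
    "# Example:\n"
    "#   minimalist.line([0, 1, 2], [0, 1, 4], label='squares')\n"
    "#   minimalist.show()\n"
    "\n"
    "# Using the minimalist library, plot y = 2 * x for x from 0 to 10.\n"
)
</syntaxhighlight>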


==How to Create Descriptive, Poetic Text==
===Tips===
*Choose a topic and narrow down the scope.
*Select a point-of-view like third, second or first person.
*Edit and revise. Don't be afraid of revising and editing the generated text.
*You can ask the chatbot for assistance. The chatbot will explain why it selected a specific detail or phrase in a reply. The chatbot can also help you create a better prompt. You can point out individual phrases and ask the chatbot for alternatives or suggestions.
===Template===
<blockquote>
Describe ''YOUR SCENE''. Use sensory language and detail to describe the ''OBJECTS IN THE SCENE'' vividly. Describe ''SPECIFIC DETAILS'' and any other sensory details that come to mind. Vary the sentence structure and use figurative language as appropriate. Avoid telling the reader how to feel or think about the scene.


==Overview of Tones==
===Suggested Tones===
*'''[[Authoritative]]''' - confident, knowledgeable,
*'''[[Casual]]''' - relaxed, friendly, playful
*'''[[Trustworthy]]''' and '''[[Professional]]''' - business proposals, executive summaries, investor pitches


==Parameters==
===Common Parameters===
* Temperature
* Perplexity
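A brief sketch of how a sampling parameter such as temperature is passed alongside the prompt (the model name is illustrative and the call assumes the pre-1.0 <code>openai</code> package):

<syntaxhighlight lang="python">
import openai  # assumes OPENAI_API_KEY is set in the environment

# The same prompt sampled at two temperatures: low values make the output
# more deterministic, higher values make it more varied.
for temperature in (0.0, 0.9):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": "Suggest a name for a note-taking app."}],
        temperature=temperature,
    )
    print(temperature, response["choices"][0]["message"]["content"])
</syntaxhighlight>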