Prompt engineering


Completion is the new text (output) generated by the model after you enter a prompt and run inference.

Prompt Engineering for Code Generation Models

Generate code using models like OpenAI Codex.

Task

Give the model a high-level task description. To improve the quality of the generated code, start the prompt with a broad description of the task at hand. For example, if you want to generate Python code to plot data from a standard dataset, you can provide a prompt like this: "# Load iris data from scikit-learn datasets and plot the training data." Sometimes the generated code is not optimal; in that case you can add more specific instructions, such as importing libraries before using them. By combining a high-level task description with detailed user instructions, you can create a more effective prompt for Codex, as in the sketch below.
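
As a rough illustration, a combined prompt plus a plausible completion might look like the following (the completion varies by model and run; everything below the comment lines stands in for model output):

  # Load iris data from scikit-learn datasets and plot the training data.
  # Import the libraries before using them.
  import matplotlib.pyplot as plt
  from sklearn import datasets

  # --- a plausible completion from the model ---
  iris = datasets.load_iris()
  # Plot the first two features of the training data, colored by class label.
  plt.scatter(iris.data[:, 0], iris.data[:, 1], c=iris.target)
  plt.xlabel(iris.feature_names[0])
  plt.ylabel(iris.feature_names[1])
  plt.title("Iris training data")
  plt.show()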

Examples

Give the model examples. Imagine you prefer a style of writing Python code that differs from what the model produces; for instance, when adding two numbers, you prefer to name the arguments differently. The key to working with models like Codex is to clearly communicate what you want it to do, and one effective way to do this is to provide examples for Codex to learn from and match. If you give Codex a longer prompt that includes such an example, it will name the arguments in the same manner as the example, as sketched below.
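
A minimal few-shot sketch (the naming preference shown here is hypothetical; the last function stands in for the model's completion):

  # Example of my preferred style:
  def add_numbers(first_number, second_number):
      return first_number + second_number

  # Write a function that multiplies two numbers in the same style.
  def multiply_numbers(first_number, second_number):
      return first_number * second_number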

See also zero shot, one shot and few shot learning

Context

If you want to use a library that the coding model is not familiar with, you can guide it by describing the library's API beforehand.

For instance, the Minecraft Codex sample uses the Simulated Player API in TypeScript to control a character in the game. Since this is a newer API that Codex does not know about yet, let's see how it generates code for it. Given only a short prompt, Codex attempts an educated guess based on the terms "bot" and "Simulated Player", but the resulting code is not correct.

To correct this, you can show Codex the API definition, including function signatures and examples, so that it can generate code that follows the API correctly. By providing high-level context in the form of the API definition and usage examples, Codex can understand what you want it to do and generate more accurate code.
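
A minimal sketch of this pattern in Python rather than TypeScript, with a stand-in class in place of the real Simulated Player API (all names here are illustrative assumptions, not the actual Minecraft API):

  # --- API definition supplied as context for the model ---
  class SimulatedPlayer:
      """Illustrative stand-in for an API the model has not seen."""

      def move_to(self, x: int, y: int, z: int) -> None:
          print(f"moving to ({x}, {y}, {z})")

      def chat(self, message: str) -> None:
          print(f"chat: {message}")

  # --- usage example also included in the prompt ---
  player = SimulatedPlayer()
  player.move_to(10, 64, 10)

  # --- task: walk the bot to the origin and say "done" ---
  # With the definition above in context, the model can complete correctly:
  player.move_to(0, 64, 0)
  player.chat("done")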

How to Create Descriptive, Poetic Text

Tips

  • Choose a topic and narrow down the scope.
  • Select a point of view, such as first, second, or third person.
  • Directly or indirectly convey a mood. A subject or scene can evoke a particular feeling, or you can state a mood to the chatbot directly.
  • Describe sensory details. Add details about the scene such as sounds, sights, smells, or textures. By pointing out an important detail, you can guide the output.
  • Show, don't tell. Ask the chatbot not to tell the reader how to think or feel.
  • Use figurative language. Encourage the chatbot to use metaphors, similes, and descriptive phrases. Request a description that is evocative, lyrical, beautiful, or poetic.
  • Iterate. Your first prompt might not yield the desired result; rework it until you find an appealing answer. Once you have a prompt that works, the chatbot can create many descriptions and you can pick the one you like.
  • Edit and revise. Don't be afraid of revising and editing the generated text.
  • Ask the chatbot for assistance. The chatbot can explain why it selected a specific detail or phrase, and it can help you create a better prompt. You can point out individual phrases and ask for alternatives or suggestions.

Template

Describe YOUR SCENE. Use sensory language and detail to describe the OBJECTS IN THE SCENE vividly. Describe SPECIFIC DETAILS and any other sensory details that come to mind. Vary the sentence structure and use figurative language as appropriate. Avoid telling the reader how to feel or think about the scene.
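
One possible instantiation of the template (the scene and details here are invented for illustration): "Describe a rain-soaked city street at midnight. Use sensory language and detail to describe the neon signs, the wet pavement, and the passing umbrellas vividly. Describe the hiss of tires on water and any other sensory details that come to mind. Vary the sentence structure and use figurative language as appropriate. Avoid telling the reader how to feel or think about the scene."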

Emergent Prompting

chain-of-thought prompting

Fill in the Blank

Example

Tom Hanks is a _ by profession.
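
A minimal sketch of this cloze style with a masked language model, using the Hugging Face transformers fill-mask pipeline (the model choice is an illustrative assumption):

  from transformers import pipeline

  # BERT-style masked language models predict the token behind the blank.
  fill = pipeline("fill-mask", model="bert-base-uncased")

  # "[MASK]" plays the role of the "_" blank in the prompt above.
  for prediction in fill("Tom Hanks is a [MASK] by profession."):
      print(prediction["token_str"], round(prediction["score"], 3))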

See more.[1]

Parameters

Common Parameters

Temperature

Perplexity

Burstiness

User-created Parameters

Introduction

User-created parameters convey the user's intent in a more concise way. They are not part of the model API; rather, they are patterns the LLM has picked up through its training. These parameters are just a compact way to express what is usually written out in natural language.

Example in ChatGPT

Prompt: Write a paragraph about how adorable a puppy is.

Temperature: 1.0

Sarcasm: 0.9

Vividness: 0.4

We add "Prompt: " to the start of our prompt to make sure ChatGPT knows where our prompt is. We add the GPT parameter temperature, which goes from 0 to 1 to indicate the following parameters also range from 0 to 1. Then we list our parameters along with their values which go from 0 to 1 (0 is the smallest, and 1 is the largest). Note that having too many or contradictory parameters may lower the quality of the response.

List of Parameters

References

  1. How Can We Know What Language Models Know? https://arxiv.org/abs/1911.12543/