Prompt engineering
Revision as of 17:13, 5 March 2023
Fill in the Blank
Example
Tom Hanks is a _ by profession.
see more...[1]
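The same fill-in-the-blank prompt can be posed to a masked language model programmatically. Below is a minimal sketch using the Hugging Face fill-mask pipeline; the model name bert-base-uncased and the [MASK] token format are assumptions for illustration and are not taken from the cited paper.

```python
# Minimal sketch of the fill-in-the-blank pattern with a masked language model.
# The model "bert-base-uncased" and the [MASK] token are assumptions for
# illustration; the article's example writes the blank as "_".
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT expects its own mask token where the article writes "_".
results = fill_mask("Tom Hanks is a [MASK] by profession.")

for r in results:
    # Each result carries a predicted token and the model's confidence.
    print(f"{r['token_str']}: {r['score']:.3f}")
```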
Parameters
Common Parameters
Temperature
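Temperature controls how sharply the model's next-token distribution is peaked before sampling: low values make outputs more deterministic, high values make them more varied. Below is a minimal sketch of temperature-scaled sampling over raw logits; the function and variable names are illustrative, and real APIs expose temperature as a request parameter rather than requiring this computation.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a token index from logits after temperature scaling.

    Lower temperatures sharpen the distribution (more deterministic);
    higher temperatures flatten it (more random).
    """
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Example: three candidate tokens with raw scores.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.7))
```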
Perplexity
Burstiness
User-created Parameters
Introduction
User-created parameters convey a user's intent in a more concise way. They are not part of the model API; rather, they exploit patterns the LLM has picked up through its training, serving as a compact alternative to instructions that would otherwise be spelled out in natural language.
Example in ChatGPT
Prompt: Write a paragraph about how adorable a puppy is.
Temperature: 1.0
Sarcasm: 0.9
Vividness: 0.4
We prefix the request with "Prompt: " so that ChatGPT can tell where the prompt itself begins. We then include the real GPT parameter temperature, which ranges from 0 to 1, to signal that the parameters that follow use the same 0-to-1 scale (0 is the lowest value, 1 the highest). Finally, we list our user-created parameters with their values. Note that too many parameters, or contradictory ones, may lower the quality of the response. A hedged API sketch of this pattern follows below.
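For users who reach the model through an API rather than the ChatGPT web interface, the same pattern can be reproduced by packing the pseudo-parameters into the message text. The sketch below uses the OpenAI Python client; the model name is an assumption for illustration, and only temperature is a genuine API argument.

```python
# Sketch of the pattern above sent through the OpenAI Python client.
# Only `temperature` is a real API argument; "Sarcasm" and "Vividness"
# are user-created parameters that live inside the prompt text itself.
# The model name "gpt-3.5-turbo" is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Prompt: Write a paragraph about how adorable a puppy is.\n"
    "Temperature: 1.0\n"
    "Sarcasm: 0.9\n"
    "Vividness: 0.4"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=1.0,  # the genuine sampling parameter
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```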
List of Parameters
- Professionalism -
- Randomness -
- Sentimentality -
- Sesquipedalianism -
- Sarcasm -
- Laconic -
- Asyndetic -
- Vividness -
- Ecclesiastical -
References
- ↑ Jiang et al., "How Can We Know What Language Models Know?" https://arxiv.org/abs/1911.12543