Prompt injection: Difference between revisions
No edit summary |
No edit summary |
||
Line 1: | Line 1: | ||
When a user enters a [[prompt]] into a [[large language model]] like [[ChatGPT]], the creator of the [[language model]], like [[OpenAI]], often customizes the response of the language model by concatenating their own prompt onto the user's prompt. | {{Needs expansion}} | ||
When a user enters a [[prompt]] into a [[large language model]] like [[ChatGPT]], the creator of the [[language model]], like [[OpenAI]], often customizes the response of the language model by concatenating their own prompt onto the user's prompt. The creator's prompt is like a set of instructions that is concatenated before the start of the user's prompt and is usually hidden from the user. The creator's prompt provides context like tone, point of view, objective, length etc. | |||
'''[[Prompt injection]] is when the user's prompt changes the creator's prompt or make the language model ignore the creator's prompt.''' | |||
==Problems of Prompt Injection== | |||
==How to Prevent Prompt Injection== | |||
[[Category:Terms]] [[Category:Artificial intelligence terms]] |
Revision as of 12:53, 17 February 2023
Template:Needs expansion When a user enters a prompt into a large language model like ChatGPT, the creator of the language model, like OpenAI, often customizes the response of the language model by concatenating their own prompt onto the user's prompt. The creator's prompt is like a set of instructions that is concatenated before the start of the user's prompt and is usually hidden from the user. The creator's prompt provides context like tone, point of view, objective, length etc.
Prompt injection is when the user's prompt changes the creator's prompt or make the language model ignore the creator's prompt.