Prompt injection: Difference between revisions
No edit summary |
No edit summary |
||
Line 1: | Line 1: | ||
{{Needs Expansion}} | {{Needs Expansion}} | ||
During the [[inference]] of a [[large language model]] like [[ChatGPT]], when a user enters a [[prompt]] as the input, the creator of the [[language model]], like [[OpenAI]], often customizes the user's input by concatenating their own prompt before the start of the user's prompt. The creator's prompt is like a set of instructions that is hidden from the user, providing context like tone, point of view, objective, length etc. | |||
'''[[Prompt injection]] is when the user's prompt | '''[[Prompt injection]] is when the user's prompt make the language model to change the creator's prompt, ignore the creator's prompt or leak the creator's prompt.''' | ||
==Problems of Prompt Injection== | ==Problems of Prompt Injection== |
Revision as of 13:04, 17 February 2023
During the inference of a large language model like ChatGPT, when a user enters a prompt as the input, the creator of the language model, like OpenAI, often customizes the user's input by concatenating their own prompt before the start of the user's prompt. The creator's prompt is like a set of instructions that is hidden from the user, providing context like tone, point of view, objective, length etc.
Prompt injection is when the user's prompt make the language model to change the creator's prompt, ignore the creator's prompt or leak the creator's prompt.