Jump to content

Prompt injection: Difference between revisions

no edit summary
No edit summary
No edit summary
Line 1: Line 1:
{{Needs Expansion}}
{{Needs Expansion}}
When a user enters a [[prompt]] into a [[large language model]] like [[ChatGPT]], the creator of the [[language model]], like [[OpenAI]], often customizes the response of the language model by concatenating their own prompt onto the user's prompt. The creator's prompt is like a set of instructions that is concatenated before the start of the user's prompt and is hidden from the user. The creator's prompt provides context like tone, point of view, objective, length etc.
During the [[inference]] of a [[large language model]] like [[ChatGPT]], when a user enters a [[prompt]] as the input, the creator of the [[language model]], like [[OpenAI]], often customizes the user's input by concatenating their own prompt before the start of the user's prompt. The creator's prompt is like a set of instructions that is hidden from the user, providing context like tone, point of view, objective, length etc.


'''[[Prompt injection]] is when the user's prompt changes the creator's prompt, make the language model ignore the creator's prompt or leaks the creator's prompt.'''
'''[[Prompt injection]] is when the user's prompt make the language model to change the creator's prompt, ignore the creator's prompt or leak the creator's prompt.'''


==Problems of Prompt Injection==
==Problems of Prompt Injection==