Prompt injection: Difference between revisions

From AI Wiki
No edit summary
No edit summary
Line 1: Line 1:
{{Needs expand}}
{{Needs Expansion}}
When a user enters a [[prompt]] into a [[large language model]] like [[ChatGPT]], the creator of the [[language model]], like [[OpenAI]], often customizes the response of the language model by concatenating their own prompt onto the user's prompt. The creator's prompt is like a set of instructions that is concatenated before the start of the user's prompt and is usually hidden from the user. The creator's prompt provides context like tone, point of view, objective, length etc.
When a user enters a [[prompt]] into a [[large language model]] like [[ChatGPT]], the creator of the [[language model]], like [[OpenAI]], often customizes the response of the language model by concatenating their own prompt onto the user's prompt. The creator's prompt is like a set of instructions that is concatenated before the start of the user's prompt and is usually hidden from the user. The creator's prompt provides context like tone, point of view, objective, length etc.



Revision as of 12:54, 17 February 2023

When a user enters a prompt into a large language model like ChatGPT, the creator of the language model, like OpenAI, often customizes the response of the language model by concatenating their own prompt onto the user's prompt. The creator's prompt is like a set of instructions that is concatenated before the start of the user's prompt and is usually hidden from the user. The creator's prompt provides context like tone, point of view, objective, length etc.

Prompt injection is when the user's prompt changes the creator's prompt or make the language model ignore the creator's prompt.

Problems of Prompt Injection

How to Prevent Prompt Injection