Hallucination

{{see also|Machine learning terms}}
[[Hallucinations]] in large language models ([[LLMs]]) like [[GPT]] and [[Bing Chat]] are a fascinating and critical aspect of [[artificial intelligence]] research. These instances, where an LLM generates information that is misleading, irrelevant, or downright false, present significant challenges and opportunities for the development of more reliable and accurate [[AI systems]].

== Definition and Overview ==
[[Hallucinations]] in LLMs refer to the phenomenon where the model generates text that deviates from factual accuracy or logical coherence. These can range from minor inaccuracies to complete fabrications or contradictory statements, impacting the reliability and trustworthiness of AI-generated content.

=== Types of Hallucinations ===
There are various forms of hallucinations that can manifest in LLM outputs:
*'''Sentence Contradiction:''' When a generated sentence contradicts a previous one within the same context (a simple automated check for this case is sketched after this list).
*'''Prompt Contradiction:''' Occurs when the response directly opposes the initial prompt's intent.
*'''Factual Errors:''' These are outright inaccuracies or misrepresentations of verifiable information.
*'''Nonsensical Outputs:''' Responses that, while possibly grammatically correct, are irrelevant or absurd in the given context.
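As a concrete illustration of the ''sentence contradiction'' case, the following is a minimal sketch of how contradictory sentence pairs could be flagged automatically. It assumes access to a [[natural language inference]] model; the model name (<code>roberta-large-mnli</code> via the Hugging Face <code>transformers</code> library) and the confidence threshold are illustrative assumptions, not a method prescribed by this article:

<syntaxhighlight lang="python">
# Illustrative sketch only: flag two sentences from the same model output
# that cannot both be true, using an off-the-shelf natural language
# inference (NLI) model. Model choice and threshold are assumptions.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def flags_contradiction(premise: str, hypothesis: str, threshold: float = 0.8) -> bool:
    """Return True if the NLI model labels the sentence pair a contradiction."""
    result = nli([{"text": premise, "text_pair": hypothesis}])[0]
    return result["label"] == "CONTRADICTION" and result["score"] >= threshold

print(flags_contradiction(
    "The Eiffel Tower is located in Paris.",
    "The Eiffel Tower is located in Rome.",
))  # expected: True
</syntaxhighlight>

In practice such a check would compare each newly generated sentence against the earlier sentences in the same output, rather than a single hand-picked pair.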


== Causes of Hallucinations ==

=== Data Quality Issues ===
LLMs are trained on vast corpora of text sourced from the internet, including sites like [[Wikipedia]] and [[Reddit]]. The quality of this data varies, with inaccuracies, biases, and inconsistencies being inadvertently learned by the model.

=== Generation Methods ===
The way text is sampled from the model also matters: decoding strategies that favor fluency or diversity, such as high-temperature sampling, can produce confident-sounding output that is not grounded in the training data.
=== Multi-Shot Prompting ===
Providing multiple examples of the desired output or context can help the model better understand and adhere to the user's expectations. This approach is particularly effective for tasks requiring specific formats or styles.
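A minimal, self-contained sketch of this idea follows. The helper function, the Q/A formatting, and the example pairs are hypothetical, chosen only to demonstrate the pattern of prepending worked examples to the user's query:

<syntaxhighlight lang="python">
# Hypothetical helper that builds a multi-shot prompt by prepending
# input/output examples before the actual query. The examples anchor
# the model to the desired format, which can reduce off-format or
# hallucinated answers.
def build_multi_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    parts = ["Answer in the exact format shown by the examples.\n"]
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}\n")
    parts.append(f"Q: {query}\nA:")
    return "\n".join(parts)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
print(build_multi_shot_prompt(examples, "What is the capital of Canada?"))
</syntaxhighlight>

The resulting string is then sent as the prompt to whichever LLM interface is in use; the examples constrain the answer format and reduce the model's room to improvise.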


== Hallucination in Machine Learning ==