ChatGPT and Other LLMs Produce Bull Excrement, Not Hallucinations

In the communications surrounding LLMs and popular interfaces like ChatGPT, the term 'hallucination' is often used to describe false statements in these models' output. The term implies a degree of coherency, as if the LLM were trying to remain cognizant of the truth while suffering occasional moments of (mild) insanity. The LLM is thus effectively treated like a young child or a person suffering from a disorder like Alzheimer's, granting it agency in the process. That this is utter nonsense and patently incorrect is the subject of a treatise by [Michael Townsen Hicks] and colleagues, published in Ethics and Information Technology.

This is a companion discussion topic for the original entry at https://hackaday.com/2024/07/01/chatgpt-and-other-llms-produce-bull-excrement-not-hallucinations/