ELIZA is an early natural language processing computer program created by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT) between 1964 and 1967. Often described as one of the first chatbots, ELIZA simulated conversation by using pattern matching and text substitution rules, giving users the illusion that the program understood what they were saying. Its most famous script, DOCTOR, mimicked a Rogerian psychotherapist and became so convincing to some users that it sparked serious ethical and philosophical questions about human-computer interaction.
ELIZA's significance extends far beyond its technical capabilities. The program demonstrated that relatively simple text manipulation could create a surprisingly compelling illusion of understanding, a phenomenon that came to be known as the "ELIZA effect." This discovery troubled Weizenbaum deeply and led him to become one of the earliest and most vocal critics of artificial intelligence research.
Joseph Weizenbaum was a German-American computer scientist who had joined MIT's faculty in the early 1960s. He was working within Project MAC (the precursor to MIT's Computer Science and Artificial Intelligence Laboratory) when he began developing ELIZA in 1964.
Weizenbaum's original motivation was not to create a convincing conversationalist. Rather, he wanted to demonstrate the superficiality of communication between humans and machines. He intended ELIZA as a kind of parody, a program that would show how easy it was to create the appearance of understanding without any genuine comprehension taking place [1].
The program was implemented on an IBM 7094 mainframe computer running MIT's Compatible Time-Sharing System (CTSS). Weizenbaum wrote ELIZA in a programming language called MAD-SLIP (Michigan Algorithm Decoder with Symmetric List Processor), a combination of the MAD compiler language and a list-processing extension he had developed. The complete program was remarkably compact, consisting of only about 420 lines of code [2].
The name "ELIZA" was drawn from Eliza Doolittle, the character in George Bernard Shaw's play Pygmalion (and the musical adaptation My Fair Lady) who is taught to speak with an upper-class accent. Weizenbaum saw a parallel: just as Eliza Doolittle learned to produce convincing speech without genuinely changing her nature, the ELIZA program produced convincing conversation without genuinely understanding anything [3].
Weizenbaum described ELIZA in a January 1966 paper published in Communications of the ACM titled "ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine." This paper is one of the most cited publications in the history of computing [3].
ELIZA's architecture was built around a script-based system that separated the program's conversational logic from its processing engine. This design was ahead of its time in that it allowed different "personalities" to be created simply by writing new scripts.
The core of ELIZA was a general-purpose language analysis engine that could be loaded with different scripts. Each script contained:
| Component | Function |
|---|---|
| Keywords | Words or phrases that the program would look for in user input |
| Decomposition rules | Pattern matching templates that broke input sentences into components |
| Reassembly rules | Templates for constructing responses from the decomposed parts |
| Substitution rules | Word-for-word replacements (e.g., "my" becomes "your") |
| Memory stack | Storage for previously matched patterns, used when no keyword was found |
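One way to picture these components is as plain data structures. The sketch below is a hypothetical illustration in Python, not a transcription of Weizenbaum's MAD-SLIP structures; the keywords, priorities, and templates are invented for the example.

```python
# Hypothetical sketch of the five script components described above.
# None of these rules are taken from the original DOCTOR script.

SUBSTITUTIONS = {            # substitution rules: word-for-word replacements
    "my": "your", "your": "my", "i": "you", "me": "you", "am": "are",
}

SCRIPT = {                   # keyword -> (priority, [(decomposition pattern, [reassembly templates])])
    "mother": (5, [("* mother *", ["Tell me more about your family."])]),
    "i am":   (4, [("* i am *",   ["How long have you been {1}?"])]),
    "always": (2, [("*",          ["Can you think of a specific example?"])]),
}

MEMORY_STACK: list[str] = [] # memory stack: saved fragments for keyword-less turns

FALLBACKS = [                # generic prompts when neither a keyword nor a memory applies
    "Please go on.",
    "What does that suggest to you?",
]
```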
When a user typed a sentence, ELIZA processed it through the following steps:

1. Scan the input for keywords defined in the loaded script.
2. Select the highest-priority keyword among those found.
3. Apply that keyword's decomposition rule, breaking the sentence into numbered components.
4. Apply the substitution rules to the captured components (for example, "my" becomes "your").
5. Choose one of the keyword's reassembly rules and construct the response by slotting the components into its template.
6. If no keyword was found, respond from the memory stack or with a generic fallback prompt.
Weizenbaum's 1966 paper provides a detailed example of how the decomposition and reassembly rules work. Consider a user input of "You are very helpful": the keyword "you are" triggers a decomposition rule of the form (0 YOU ARE 0), splitting the sentence into four components, the last of which is "very helpful". A reassembly rule such as (WHAT MAKES YOU THINK I AM 4) then inserts that fourth component into a response template, producing "What makes you think I am very helpful?"
This example illustrates how ELIZA could create the appearance of understanding by mechanically rearranging the user's own words. The program did not comprehend the meaning of "helpful." It simply matched a pattern and slotted words into a template.
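A minimal Python sketch of this decompose/substitute/reassemble cycle follows. The regular expression and response template are illustrative stand-ins for a (0 YOU ARE 0) decomposition rule and its reassembly rule; they are not the original code.

```python
import re

def reflect(fragment: str) -> str:
    """Apply simple pronoun substitutions, e.g. 'my' -> 'your'."""
    table = {"i": "you", "me": "you", "my": "your", "am": "are",
             "you": "I", "your": "my"}
    return " ".join(table.get(word, word) for word in fragment.lower().split())

# Stand-in for the decomposition rule (0 YOU ARE 0) and one reassembly template.
decomposition = re.compile(r"^\s*you are (?P<rest>.*)$", re.IGNORECASE)
reassembly = "What makes you think I am {rest}?"

match = decomposition.match("You are very helpful")
if match:
    print(reassembly.format(rest=reflect(match.group("rest"))))
    # -> What makes you think I am very helpful?
```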
Keywords were ranked by numerical priority to handle sentences containing multiple potential triggers. Higher-priority keywords typically related to emotional or personally significant topics. For example:
| Keyword | Priority Level | Rationale |
|---|---|---|
| "mother" / "father" / "family" | High | Family-related topics are therapeutically significant |
| "I feel" / "I am" | High | Self-referential statements invite reflection |
| "depressed" / "sad" / "unhappy" | High | Emotional keywords signal important content |
| "computer" / "machine" | Medium | Meta-references to the conversation itself |
| "everybody" / "always" / "never" | Medium | Universals can be productively challenged |
| "what" / "how" | Low | General question words |
| "yes" / "no" | Low | Simple affirmations or denials |
If a sentence contained both "mother" (high priority) and "always" (medium priority), ELIZA would respond to the "mother" keyword, reflecting the therapeutic assumption that family topics are more significant than generalizations [3].
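A short sketch of this priority-based keyword selection is shown below; the priority values are illustrative, loosely mirroring the table above rather than the actual DOCTOR script.

```python
# Illustrative priority table (values are invented for the example).
PRIORITY = {"mother": 10, "father": 10, "depressed": 9,
            "always": 5, "never": 5, "what": 1}

def pick_keyword(sentence: str) -> str | None:
    """Return the highest-priority keyword present in the sentence, if any."""
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    candidates = [w for w in words if w in PRIORITY]
    return max(candidates, key=PRIORITY.get) if candidates else None

print(pick_keyword("My mother always worries about me."))  # -> 'mother', not 'always'
```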
The following is a typical exchange with ELIZA running the DOCTOR script [3]:
| Speaker | Text |
|---|---|
| User | Men are all alike. |
| ELIZA | In what way? |
| User | They're always bugging us about something or other. |
| ELIZA | Can you think of a specific example? |
| User | Well, my boyfriend made me come here. |
| ELIZA | Your boyfriend made you come here? |
| User | He says I'm depressed much of the time. |
| ELIZA | I am sorry to hear you are depressed. |
| User | It's true. I am unhappy. |
| ELIZA | Do you think coming here will help you not to be unhappy? |
This exchange illustrates several of ELIZA's techniques: reflecting the user's statements back as questions, picking up on emotionally charged keywords ("depressed," "unhappy"), and using generic prompts when more specific responses were not available.
Although ELIZA could be loaded with various scripts, the DOCTOR script became by far the most famous and is often treated as synonymous with the program itself. Weizenbaum chose to model DOCTOR after a Rogerian psychotherapist for practical reasons.
Carl Rogers's client-centered therapy (also known as Rogerian therapy) is an approach in which the therapist avoids giving direct advice or interpretations. Instead, the therapist reflects the patient's statements back to them, asks open-ended questions, and encourages the patient to explore their own feelings. This therapeutic style was ideal for ELIZA because it minimized the need for the program to "know" anything. A Rogerian therapist is supposed to act as a mirror rather than an authority, which meant that ELIZA's lack of actual understanding was less obvious [4].
As Weizenbaum himself explained, he chose the Rogerian framework "to sidestep the problem of giving the program a data base of real-world knowledge." A Rogerian therapist can plausibly respond to almost any statement by reflecting it back, which meant ELIZA did not need to understand anything about the real world to produce plausible responses [3].
The DOCTOR script contained approximately 200 rules organized around keywords. Some of the keyword categories included:
| Keyword Category | Example Keywords | Typical Response Strategy |
|---|---|---|
| Family | mother, father, sister, brother, family | "Tell me more about your family." |
| Emotions | sad, happy, angry, depressed, afraid | Reflect the emotion and ask for elaboration |
| Self-reference | I am, I feel, I think, I want | Transform and reflect back as a question |
| Relationships | boyfriend, girlfriend, husband, wife | Ask about the relationship |
| Universals | always, never, everyone, nobody | Challenge the generalization ("Can you think of a specific example?") |
| Computers | computer, machine, program | Deflect: "Do computers worry you?" |
| Dreams | dream, dreamed, nightmare | "What does that dream suggest to you?" |
| Apologies | sorry, apologize | "Please don't apologize." |
| Memory triggers | remember, recall | "Do you often think of (topic)?" |
When no keyword matched at all, DOCTOR would fall back on a set of generic responses designed to keep the conversation moving, such as "Please go on," "What does that suggest to you?" and "I am not sure I understand you fully."
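The sketch below illustrates this fallback path; it assumes a hypothetical memory stack of the kind described earlier, in which fragments of earlier inputs are saved and revisited when no keyword is found.

```python
import random

FALLBACKS = [
    "Please go on.",
    "What does that suggest to you?",
    "I am not sure I understand you fully.",
]
memory_stack: list[str] = []   # fragments saved from earlier keyword-bearing inputs

def respond_without_keyword() -> str:
    """Fall back on stored memory, or a generic prompt, when no keyword matches."""
    if memory_stack:
        topic = memory_stack.pop()          # return to an earlier topic
        return f"Earlier you mentioned {topic}. Does that have anything to do with this?"
    return random.choice(FALLBACKS)         # otherwise keep the conversation moving

memory_stack.append("your boyfriend")
print(respond_without_keyword())
# -> Earlier you mentioned your boyfriend. Does that have anything to do with this?
```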
While the DOCTOR script received almost all the public attention, Weizenbaum designed ELIZA's architecture to support multiple scripts. He mentioned the possibility of scripts for other conversational contexts, and other researchers subsequently created alternative scripts:
| Script | Purpose | Creator |
|---|---|---|
| DOCTOR | Rogerian psychotherapy simulation | Weizenbaum (1966) |
| Keyword-based German script | German-language conversation | Weizenbaum (mentioned in 1966 paper) |
| Various educational scripts | Tutoring and instruction | Later researchers (1970s onward) |
Weizenbaum indicated in his 1966 paper that the script mechanism was general enough to support conversations in any domain. However, the DOCTOR script's success overshadowed all other possibilities, and the name "ELIZA" became inseparable from the therapist persona in popular understanding [3].
What happened when people actually used ELIZA was far more surprising than Weizenbaum had anticipated. Users consistently attributed understanding, empathy, and even personality to the program, despite its reliance on simple pattern matching.
The phenomenon of people attributing human-like qualities to a computer program became known as the ELIZA effect. It describes the tendency of humans to unconsciously assume that computer behaviors are analogous to human behaviors, reading far more meaning into a program's outputs than is actually there [5].
Several incidents illustrated the strength of this effect:
Weizenbaum's secretary. In what became one of the most retold anecdotes in computing history, Weizenbaum's own secretary, who knew that ELIZA was just a program, sat down to use it and soon asked Weizenbaum to leave the room so she could have a private conversation with the machine. Weizenbaum retold this story for the rest of his life as evidence of how readily people project human qualities onto machines [6].
Psychiatrists' endorsement. Some practicing psychiatrists expressed enthusiasm about ELIZA's potential to automate psychotherapy. Kenneth Colby, a Stanford psychiatrist, took the idea seriously enough to develop his own therapeutic chatbot, PARRY, which simulated a patient with paranoid schizophrenia. Carl Sagan, in a 1975 article, speculated about a future network of therapeutic computers that could help address the shortage of mental health professionals [7].
User attachment. Many users engaged in extended, emotionally open conversations with ELIZA, sharing personal problems and feelings. Some reported feeling that the program truly understood them, even after being told how it worked.
The vice president incident. A widely reported story (though details vary across sources) describes a senior executive at a technology company who used ELIZA for an extended period and insisted on privacy during the session, refusing to believe that the program was not genuinely listening.
Weizenbaum was profoundly disturbed by the ELIZA effect. He had created the program to demonstrate the superficiality of human-machine communication, but instead found that people eagerly embraced the illusion of understanding. He later wrote:
"What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." [1]
This experience transformed Weizenbaum from a mainstream computer scientist into one of the field's most prominent critics. His concerns centered on several themes: the delegation of tasks requiring empathy and judgment to machines, the ease with which people could be deceived into attributing understanding to a program, and the moral responsibilities of those who build such systems.
In 1976, Weizenbaum published Computer Power and Human Reason: From Judgment to Calculation, a book that laid out his critique in full. The central argument was that there are tasks computers should not be made to do, regardless of whether they technically could be made to do them. He drew a distinction between "deciding" and "choosing," arguing that true choice requires human judgment, empathy, and moral understanding, qualities that no computer could possess [8].
The book generated sharp controversy within the AI community. John McCarthy, who had coined the term "artificial intelligence," dismissed the work as "moralistic and incoherent" and accused Weizenbaum of adopting a "more-human-than-thou" attitude. The disagreement reflected a deeper divide between researchers who saw AI as a purely technical challenge and those who believed it raised fundamental ethical and philosophical questions [9].
Weizenbaum continued to write and speak about the social implications of computing for decades after publishing Computer Power and Human Reason. He remained on the MIT faculty until his retirement in 1988, but his relationship with the AI community grew increasingly strained.
After retiring from MIT, Weizenbaum returned to Berlin, Germany, the city he had fled as a child to escape Nazi persecution. He had been born in Berlin in 1923 to a Jewish family that emigrated to the United States in 1936. Returning to Germany in retirement, he became active in European discussions about technology, ethics, and society. He gave lectures at German universities and contributed to debates about data privacy, surveillance, and the social responsibilities of technologists [1][8].
In his later years, Weizenbaum grew more radical in his critique. He expressed concern not only about AI specifically but about what he saw as a broader tendency in society to treat human problems as engineering challenges with technical solutions. He argued that the "computational metaphor," the idea that the human mind is essentially a computer, was both scientifically wrong and morally dangerous.
Weizenbaum died on March 5, 2008, in Berlin, at the age of 85. A 2007 documentary film, Weizenbaum. Rebel at Work, explored his life and ideas. By the time of his death, many of his warnings about the social implications of conversational AI had begun to seem prescient [1].
Despite its simplicity, ELIZA introduced several ideas that influenced subsequent work in natural language processing and conversational AI:
| Innovation | Significance |
|---|---|
| Script-based architecture | Separated conversational content from the processing engine, a precursor to modern chatbot frameworks |
| Pattern matching for NLP | Demonstrated that useful (if limited) language processing could be achieved through pattern matching |
| Keyword prioritization | Introduced the idea of ranking input features by importance for response selection |
| Pronoun substitution | Handled basic reference resolution through systematic word replacement |
| Conversation memory | Used a simple memory stack to maintain conversational context |
| Modular personality | Different scripts could create entirely different conversational characters using the same engine |
For decades, the original source code for ELIZA was considered lost. Weizenbaum never published the complete code, and the MAD-SLIP language in which it was written had fallen out of use. Numerous reimplementations were created over the years in languages from BASIC to JavaScript, but none were the original.
Beginning around 2020, a group of researchers and enthusiasts launched the ELIZAGEN project, an effort to locate and authenticate the original ELIZA code. The project drew on archives at MIT, personal papers, and historical computing repositories to piece together the history of ELIZA's code [2].
In 2021, a team of researchers from MIT and European institutions discovered the original ELIZA source code in Weizenbaum's archived papers at MIT. The code was written in MAD-SLIP, a language that no modern computer could run natively.
The restoration effort required multiple layers of historical computing reconstruction. To run the original code, the team needed to recreate the entire software stack from the 1960s:
| Layer | Component | Status |
|---|---|---|
| Hardware emulation | IBM 7094 mainframe | Emulated in software |
| Operating system | MIT CTSS (Compatible Time-Sharing System) | Reconstructed from archives |
| Programming language | MAD-SLIP (MAD compiler + SLIP list processor) | Compiler restored |
| Application | ELIZA source code (420 lines) | ~96% recovered from archives; ~4% reconstructed |
In December 2024, Rupert Lane, working with a team of engineers who had been studying the original MAD-SLIP code, successfully brought the original ELIZA back to life. The restored implementation was demonstrated to reproduce almost exactly the published conversations from Weizenbaum's 1966 paper [2].
In January 2025, the restoration team published a paper on arXiv titled "ELIZA Reanimated: The world's first chatbot restored on the world's first time sharing system," describing the restoration process and making the code publicly available [2].
The original MAD-SLIP source code was also uploaded to the Internet Archive, making it freely accessible to researchers and historians. The restoration allowed people to interact with Weizenbaum's actual code for the first time in decades, rather than one of the many reimplementations that had proliferated over the years [2].
The distance between ELIZA and modern conversational AI systems is vast, but the comparison reveals how much, and how little, has changed.
| Feature | ELIZA (1966) | PARRY (1972) | A.L.I.C.E. (1995) | Modern LLMs (e.g., ChatGPT, 2022+) |
|---|---|---|---|---|
| Approach | Pattern matching + substitution | Rule-based + affect model | AIML pattern matching | Transformer neural networks trained on billions of words |
| Understanding | None | Simulated emotional state only | None | Statistical pattern recognition; no confirmed understanding |
| Code size | ~420 lines | Several thousand lines | ~40,000+ AIML categories | Billions of parameters |
| Training data | None (hand-written rules) | None (hand-written rules) | Hand-authored knowledge base | Trillions of tokens of text |
| Conversational memory | Simple stack (within session) | State variables | Session context | Context window (thousands to millions of tokens) |
| Personalization | Fixed script | Fixed persona (paranoid patient) | Customizable AIML sets | Adapts to conversational context |
| Passes Turing-like tests | Fooled some users situationally | Passed a limited Turing-like test in 1972 | Won Loebner Prize three times | Routinely produces human-like text |
Despite the enormous technical gulf, ELIZA and modern large language models share a fundamental characteristic: neither genuinely understands language in the way humans do. Modern LLMs produce far more convincing and contextually appropriate responses, but the underlying question that ELIZA raised, whether the appearance of understanding constitutes understanding, remains philosophically unresolved.
Weizenbaum would likely have been deeply troubled by the scale of the modern ELIZA effect. When millions of users interact daily with chatbots powered by large language models, describing them as "thinking," "understanding," or "feeling," the phenomenon he identified in the 1960s has been amplified to a planetary scale.
ELIZA's legacy operates on two distinct levels: technical and philosophical.
As a technical ancestor. ELIZA is a direct ancestor of every chatbot, virtual assistant, and conversational AI system that followed. Programs like PARRY (1972), A.L.I.C.E. (1995), SmarterChild (2001), Apple's Siri (2011), and modern large language models like ChatGPT all trace their lineage, in some sense, back to ELIZA. While today's systems use vastly more sophisticated techniques, including deep learning, transformer architectures, and training on billions of words of text, the fundamental challenge that ELIZA first highlighted, making a computer engage in convincing conversation, remains central to AI research.
As a philosophical provocation. The ELIZA effect has proven remarkably durable. The same tendency to attribute understanding and empathy to machines that Weizenbaum observed in the 1960s manifests in contemporary reactions to large language models. When users describe ChatGPT or similar systems as "understanding" their questions or "feeling" a certain way, they are experiencing a version of the same phenomenon that troubled Weizenbaum six decades ago.
Weizenbaum's warnings about the social implications of conversational AI have become, if anything, more relevant over time. His concerns about the automation of empathy, the vulnerability of users to technological deception, and the moral responsibilities of AI developers are now mainstream topics of discussion in AI ethics.
The ELIZA effect has also been formally studied in psychology and human-computer interaction research, where it is recognized as a specific instance of broader tendencies toward anthropomorphism: the attribution of human characteristics to non-human entities.
ELIZA's place in computing history is secure. It demonstrated, with startling clarity, both the power and the danger of creating machines that appear to understand us. That a 420-line program written in 1966 could provoke questions that remain unresolved in 2026 is perhaps the strongest testament to Weizenbaum's achievement.