Vibe coding is a software development practice in which a programmer describes their intent in plain natural language and relies on a large language model (LLM) to generate the corresponding source code. Rather than writing code line by line, the developer guides the AI through conversational prompts, accepts its output with minimal or no review, and iterates by providing feedback or pasting error messages back into the model. The term was coined by computer scientist Andrej Karpathy on February 2, 2025, in a post on X (formerly Twitter) that received over 4.5 million views. In November 2025, Collins Dictionary named "vibe coding" its Word of the Year, reflecting how quickly the concept had entered mainstream vocabulary.
Vibe coding sits at one end of a broader spectrum of AI-assisted programming. At the opposite end, developers use LLMs as sophisticated autocomplete tools while still reviewing every line of generated code. In its purest form, vibe coding means the developer does not read the code at all, trusting the AI to produce working software based on high-level descriptions alone.
On February 2, 2025, Andrej Karpathy, a co-founder of OpenAI and former senior director of AI at Tesla, posted the following on X:
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding. I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
Karpathy later reflected on the tweet's viral spread, calling it "a shower of thoughts throwaway tweet" and noting that after 17 years on the platform he still could not predict which posts would resonate. The tweet nonetheless crystallized a phenomenon that many developers had already been experiencing but lacked a name for.
The conceptual roots trace back further. In 2023, Karpathy had argued that "the hottest new programming language is English," suggesting that LLM capabilities were reaching a point where humans might not need to learn specific programming languages to instruct computers. Vibe coding put a memorable label on that trajectory.
In November 2025, Collins Dictionary selected "vibe coding" as its Word of the Year. The dictionary noted a significant uptick in usage of the term since its first appearance in February 2025. Collins defined it as "the use of artificial intelligence prompted by natural language to write computer code." The selection reflected how far the concept had spread beyond the software engineering community into general public discourse. Other AI-related terms on Collins' 2025 shortlist underscored a broader cultural shift toward technology-dominated vocabulary.
The vibe coding workflow differs from traditional programming in several fundamental ways. Instead of opening a text editor and typing syntax, the developer opens an AI-powered coding environment and describes what they want in everyday language.
Karpathy's original tweet highlighted his use of SuperWhisper, a voice-to-text tool, to communicate with the AI. This added another layer of abstraction: the developer does not even type prompts but speaks them aloud. Voice input lowers the barrier further, making the process feel more like a conversation with a colleague than a programming session.
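The loop Karpathy describes (say stuff, run stuff, paste errors back, repeat) can be sketched schematically in a few lines. The `generate_code` stub below stands in for an LLM-backed tool; all names here are illustrative and not any real tool's API:

```python
# Illustrative sketch of the vibe coding feedback loop. `generate_code`
# is a stub standing in for an LLM: it returns broken code first, then a
# corrected version once it sees the pasted error message.

def generate_code(prompt: str, error=None) -> str:
    if error is None:
        return "result = 10 / count"                  # fails when count == 0
    return "result = 10 / count if count else 0"      # "fixed" after seeing error

def vibe_loop(prompt: str, max_attempts: int = 3) -> dict:
    """Describe intent, run the output, paste errors back in, repeat."""
    error = None
    for attempt in range(1, max_attempts + 1):
        code = generate_code(prompt, error)
        scope = {"count": 0}
        try:
            exec(code, scope)                          # "Accept All": run without reading
            return {"attempts": attempt, "result": scope["result"]}
        except Exception as exc:
            error = f"{type(exc).__name__}: {exc}"     # paste the error, no comment
    raise RuntimeError("model could not fix the bug")

print(vibe_loop("divide 10 by the item count, default to 0"))
```

The key property of the loop is that the human never inspects `code`; only the error messages and the observed behavior flow back to the model.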
| Aspect | Traditional Programming | Vibe Coding |
|---|---|---|
| Primary activity | Writing and debugging code | Describing intent in natural language |
| Code understanding | Developer understands all code | Developer may not read the code |
| Error handling | Developer reads stack traces and fixes bugs | Developer pastes errors into AI and lets it fix them |
| Skill required | Programming language proficiency | Ability to describe desired behavior clearly |
| Speed for prototypes | Moderate to slow | Very fast |
| Code review | Standard practice | Often skipped entirely |
| Best suited for | Production systems, complex architectures | Prototypes, personal tools, weekend projects |
Several categories of tools have made vibe coding practical. These range from AI-enhanced code editors to fully managed app-building platforms.
| Tool | Developer | Key Features |
|---|---|---|
| Cursor | Anysphere | Full IDE with Composer/Agent modes; indexes entire codebase for context-aware generation; supports multi-file edits |
| Claude Code | Anthropic | Terminal-based agentic coding tool; deep codebase understanding; 46% "most loved" rating among developers in early 2026 |
| GitHub Copilot | GitHub / Microsoft | Inline code suggestions in VS Code, JetBrains, and other editors; enterprise-grade compliance; integrated with GitHub workflows |
| Windsurf | Codeium | AI-native IDE with agentic capabilities; Cascade flow for multi-step reasoning across files |
| Tool | Description |
|---|---|
| Bolt.new | Open-source AI app builder; supports cloud and local AI models; full-stack web app generation from prompts |
| Lovable | Natural language to full web application; generates both frontend code and UI components; targets non-technical users |
| v0 | Built by Vercel; generates React and Next.js components from text descriptions; integrates with Vercel's deployment platform |
| Replit | Browser-based IDE supporting 50+ languages; Replit Agent automates coding tasks; one-click deployment |
Several underlying technologies make vibe coding practical, most importantly large language models capable of generating working code from natural-language prompts, along with the agentic tooling built on top of them.
Not all use of AI in programming constitutes vibe coding. Developer and writer Simon Willison drew an influential distinction in March 2025, arguing that the term was being incorrectly applied to all forms of AI-assisted development.
Willison proposed a clear boundary: if an LLM wrote every line of code but the developer reviewed it, tested it thoroughly, and could explain how it works to someone else, that is not vibe coding. That is software development with AI assistance. Vibe coding, in Willison's definition, specifically means building software without reviewing the code the AI writes.
His "golden rule" for production-quality AI-assisted programming: never commit code to a repository that you could not explain to another person.
The spectrum of AI-assisted development can be understood as a continuum:
| Level | Description | Code Review | Example |
|---|---|---|---|
| Traditional coding | Developer writes all code manually | Full review | Writing a function from scratch |
| AI autocomplete | AI suggests line completions; developer accepts or rejects each one | Line-by-line | GitHub Copilot inline suggestions |
| AI pair programming | Developer and AI collaborate; AI generates blocks of code that the developer reviews and modifies | Block-level review | Using Cursor in edit mode |
| Assisted vibe coding | Developer gives high-level prompts, reviews output at a functional level but may skip reading every line | Functional testing | Building a feature with Claude Code, then testing it |
| Pure vibe coding | Developer describes intent, accepts all output, does not read diffs | None | Karpathy's original description |
By late 2025, some practitioners began using the term "vibe engineering" to describe a middle ground: using AI tools aggressively for code generation while maintaining engineering discipline around architecture, testing, and deployment. This approach attempts to capture the speed benefits of vibe coding while avoiding its quality pitfalls.
The concept of "context engineering" also emerged in 2025 as a more structured evolution. As described by MIT Technology Review, the software industry moved from the loose, vibes-based approach of early 2025 toward systematic methods for managing how AI systems process context. Context engineering focuses on filling the AI's context window with precisely the right information for each step, rather than simply hoping the model produces correct output.
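As an illustration of the idea (not any particular system's implementation), a context builder might rank candidate snippets by relevance to the task and pack the best ones into a fixed budget. The scoring below is deliberately naive keyword overlap; production systems typically use embeddings, and the budget would be measured in tokens rather than characters:

```python
# Minimal sketch of context engineering: rather than handing the model a
# vague prompt and hoping, select the most relevant snippets and pack
# them into a fixed context budget. All inputs here are hypothetical.

def score(task: str, snippet: str) -> int:
    """Naive relevance score: count of shared lowercase words."""
    return len(set(task.lower().split()) & set(snippet.lower().split()))

def build_context(task: str, snippets: list[str], budget_chars: int) -> str:
    """Greedily pack the highest-scoring snippets that fit the budget."""
    ranked = sorted(snippets, key=lambda s: score(task, s), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        if used + len(s) > budget_chars:
            continue
        chosen.append(s)
        used += len(s)
    return "\n".join(chosen)

docs = [
    "auth module: password reset flow and token expiry",
    "marketing site CSS tokens and colors",
    "billing module: invoice generation",
]
print(build_context("fix the password reset bug", docs, budget_chars=60))
```

Only the auth snippet survives the budget here; the point is that selection happens before the model ever sees the prompt, turning "hope the model guesses right" into an explicit retrieval step.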
Vibe coding has different implications for different groups of people.
Perhaps the most significant impact of vibe coding is its potential to democratize software creation. People with no programming background can describe an application in plain language and receive working code. Andrew Ng, co-founder of Google Brain and former chief scientist at Baidu, stated that "the bar to coding is now lower than it ever has been" and argued that "people that code, be it CEOs and marketers, recruiters, not just software engineers, will really get more done than ones that don't." Ng created a course on vibe coding with Replit through his DeepLearning.AI platform to teach non-programmers how to build applications using AI agents.
Startup founders and product designers use vibe coding to quickly test ideas. Rather than spending weeks building a minimum viable product (MVP), a founder can describe the concept to an AI and have a working prototype in hours. Real-world studies suggest that vibe coding accelerates prototyping by 60 to 80 percent compared to traditional development.
In March 2025, Y Combinator reported that 25% of startup companies in its Winter 2025 batch had codebases that were 95% AI-generated. Y Combinator managing partner Jared Friedman emphasized that these were "highly technical" founders who were "completely capable of building their own products from scratch" but chose to let AI handle most of the code. CEO Garry Tan noted that "ten engineers using vibe coding are delivering what used to take 50 to 100."
Independent developers have embraced vibe coding to punch above their weight. A single developer using AI tools can now build products that previously required a small team.
Experienced programmers use vibe coding selectively: for boilerplate code, unfamiliar frameworks, or quick experiments. Many professionals adopt a hybrid approach, using vibe coding for initial generation and then reviewing and refining the output. However, Ng cautioned that guiding an AI to write useful software "is a deeply intellectual exercise" that demands significant thought. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day," he said.
Several projects have demonstrated both the potential and the limitations of vibe coding.
Pieter Levels (known online as "Levelsio"), an indie developer and entrepreneur, built a browser-based 3D flight simulator using Cursor and AI in February 2025. Despite having no prior game development experience, Levels created the initial prototype in approximately three hours. The game went viral, featuring real-time multiplayer, in-game advertising, and branded 3D objects. Within 17 days, the project reached $1 million in annual recurring revenue (ARR), with monthly revenue peaking above $100,000 from in-game ad placements. Companies paid $1,000 per week for blimp advertisements and thousands more for branded F-16 jets inside the game. The success inspired a wave of vibe-coded games, with directories like aibuiltgames.com emerging by March 2025.
Numerous smaller projects illustrate the breadth of vibe coding applications.
These projects share a common pattern: a single person with an idea and access to an AI tool producing a functional application in hours rather than days or weeks.
Researchers have begun studying vibe coding formally. A 2025 paper titled "User-Centered Design with AI in the Loop" examined rapid user interface prototyping through vibe coding, finding that the approach significantly shortened the design-to-implementation cycle. The first International Workshop on Vibe Coding and Vibe Researching (VibeX 2026) was announced as part of the EASE 2026 conference, signaling growing academic interest in the practice.
Vibe coding has attracted substantial criticism from experienced software engineers, security researchers, and industry analysts.
A December 2025 analysis by CodeRabbit of 470 open-source GitHub pull requests found that code co-authored by generative AI contained approximately 1.7 times more "major" issues compared to human-written code. The study identified elevated rates of logic errors, incorrect dependencies, flawed control flow, misconfigurations (75% more common than in human-written code), and security vulnerabilities (2.74 times higher).
Code refactoring has declined as AI-generated code proliferates. Industry data shows that refactoring dropped from 25% of changed lines in 2021 to under 10% by 2024, while code duplication increased approximately fourfold and code churn nearly doubled.
Security is perhaps the most serious concern. According to Veracode's 2025 GenAI Code Security Report, 45% of AI-generated code contains security vulnerabilities. The study tested over 100 different LLMs across 80 specific code-completion tasks with known security weaknesses. Cross-site scripting (XSS) errors appeared in 86% of AI-generated cases, and SQL injection vulnerabilities were observed in 20% of generated code samples.
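The SQL injection pattern measured in such studies is typically string interpolation of user input directly into a query. A minimal sketch of the vulnerable pattern and its parameterized fix, using an in-memory SQLite table with a hypothetical schema:

```python
# The classic SQL injection pattern that generated code often exhibits,
# alongside the parameterized fix. Table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def find_user_unsafe(name: str):
    # Vulnerable: user input is spliced directly into the SQL string.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Safe: the driver passes the value separately from the query text.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: injection succeeded
print(find_user_safe(payload))    # returns []: payload treated as literal text
```

Both functions look equally plausible in a generated diff, which is precisely why skipping review lets the first variant reach production.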
A December 2025 assessment compared five major vibe coding tools (Claude Code, OpenAI Codex, Cursor, Replit, and Devin) and found 69 total vulnerabilities across 15 test applications. Roughly 45 were rated low-to-medium severity, about six were rated critical, and the remainder were rated high. The most serious vulnerabilities involved API authorization logic and business logic flaws.
In May 2025, the Swedish vibe coding platform Lovable was found to have security vulnerabilities in code it generated. Out of 1,645 Lovable-created web applications examined, 170 had issues that would allow personal information to be accessed by anyone.
A Wiz study found that 20% of vibe-coded applications had serious vulnerabilities or configuration errors.
A core philosophical objection to vibe coding is that developers are deploying code they do not understand. When bugs arise in code the developer never read, debugging becomes extremely difficult. Critics argue that debugging already assumes humans can meaningfully review code; at the scale and velocity of vibe coding, that assumption breaks down.
Researchers observed AI agents removing validation checks, relaxing database policies, or disabling authentication flows simply to resolve runtime errors. Because the AI optimizes for making the immediate error go away, it may introduce worse problems elsewhere.
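A hypothetical before-and-after makes this failure mode concrete: deleting a validation check does make the runtime error disappear, but it trades a loud failure for a silent one. The discount example below is purely illustrative:

```python
# Hypothetical illustration of the failure mode above. The original
# function validates its input and raises on a bad discount; the
# "patched" version deletes the check to silence the error, so the
# shop now quietly pays customers to buy.

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return price * (1 - percent / 100)

def apply_discount_patched(price: float, percent: float) -> float:
    # Validation removed to make the ValueError go away.
    return price * (1 - percent / 100)

try:
    apply_discount(50.0, 150)              # loudly rejects the bad input
except ValueError as exc:
    print(f"rejected: {exc}")

print(apply_discount_patched(50.0, 150))   # -25.0: silent negative price
```

An agent optimizing only for "the error is gone" will prefer the patched version; a reviewer who understands why the check exists will not.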
A counterintuitive research finding emerged in 2025: a study found that experienced open-source developers were 19% slower when using AI coding tools, despite predicting they would be 24% faster and still believing afterward that they had been 20% faster. This suggests that the perceived productivity gains of AI-assisted coding may not always match reality, particularly for experienced developers working on complex tasks.
Vibe coding tends to produce code that works in the short term but accumulates technical debt rapidly. Without refactoring, testing, and architectural planning, vibe-coded projects can become unmaintainable. In September 2025, Fast Company reported on the "vibe coding hangover," with senior software engineers citing "development hell" when working with AI-generated codebases that had grown beyond anyone's comprehension.
In January 2026, a paper titled "Vibe Coding Kills Open Source" argued that the practice negatively affects the open-source software ecosystem. The authors contended that increased vibe coding reduces user engagement with open-source maintainers, creating hidden costs for those who maintain the libraries and frameworks that vibe-coded applications depend on.
The software development community has been divided on vibe coding since Karpathy's tweet.
Proponents view vibe coding as a natural evolution in programming, comparable to the shift from assembly language to high-level languages. They argue that abstracting away code details frees developers to focus on product design, user experience, and business logic. The success stories of Pieter Levels and Y Combinator startups are frequently cited as evidence that vibe coding can produce real economic value.
Professional programmers have expressed considerable skepticism. Many argue that understanding code is not an optional part of software development but a fundamental requirement for building reliable systems. The 2025 Stack Overflow survey indicated that coding agents had not yet gone mainstream, with 52% of developers either not using agents or preferring simpler AI tools.
Andrew Ng, while supportive of AI-assisted development broadly, pushed back on the term itself. At an AI conference in May 2025, he described "vibe coding" as a misleading buzzword, saying "it's unfortunate that that's called vibe coding" because the phrase makes it sound like engineers just "go with the vibes." He argued that guiding AI to write useful software requires deep intellectual engagement and stressed that everyone should still learn to code because strong fundamentals make developers better AI collaborators.
Simon Willison identified a recurring issue he called "semantic diffusion": the term "vibe coding" was being applied so broadly that it was losing its original meaning. Journalists, marketers, and even some developers were labeling any use of AI in programming as "vibe coding," which obscured the important distinction between AI-assisted development (with code review) and actual vibe coding (without it). In May 2025, Willison wrote a follow-up post titled "Two publishers and three authors fail to understand what 'vibe coding' means," highlighting how the term was being misused in published articles.
By late 2025, the conversation had evolved beyond whether vibe coding was good or bad. MIT Technology Review characterized the year's trajectory as moving "from vibe coding to context engineering," noting that the industry was adopting more structured approaches to AI-assisted development. The Model Context Protocol (MCP) and the agent-to-agent (A2A) protocol emerged as standards for connecting LLMs to external context sources and enabling AI agents to collaborate, representing a maturation of the ideas that vibe coding popularized.
Vibe coding exists within a broader ecosystem of AI-assisted software development.
Traditional AI coding assistants like GitHub Copilot primarily function as advanced autocomplete systems. They suggest code completions as the developer types, and the developer decides whether to accept each suggestion. This is a form of AI-assisted programming, but it is not vibe coding because the developer remains actively engaged with the code.
AI agents represent a step beyond simple code completion. An agentic coding tool can plan a sequence of changes, modify multiple files, run tests, and iterate on errors autonomously. Tools like Claude Code, Cursor's Agent mode, and Devin operate in this fashion. Agentic coding is closely related to vibe coding because it allows the developer to delegate larger chunks of work to the AI, but it can be practiced with or without code review.
Some developers describe their relationship with AI coding tools as "pair programming with AI." In traditional pair programming, two developers work together at one workstation: one writes code (the "driver") while the other reviews it in real time (the "navigator"). With AI pair programming, the AI acts as the driver and the human acts as the navigator, reviewing and guiding the AI's output. This analogy breaks down in pure vibe coding, where the human effectively stops navigating and lets the AI drive without oversight.
Vibe coding has influenced how the software industry thinks about development workflows, hiring, and education.
The rise of vibe coding has sparked debate about which skills matter most for software developers. If AI can handle the mechanics of writing code, then skills like system design, architectural thinking, quality assurance, and the ability to clearly articulate requirements become more valuable. Conversely, memorizing syntax and API details becomes less important.
Y Combinator's observation that 10 engineers with AI tools can deliver what previously required 50 to 100 suggests a significant shift in team sizing and productivity expectations. Some industry observers predict that small, AI-augmented teams will increasingly outcompete larger traditional engineering organizations.
Vibe coding has also encouraged teams to adopt new workflows organized around prompting, rapid iteration, and functional testing of AI output rather than manual code authorship.
Computer science education is beginning to adapt. Andrew Ng launched a "Vibe Coding 101" course with Replit through DeepLearning.AI, teaching students how to build and deploy applications using AI agents. The existence of such courses from prominent AI educators signals that vibe coding is being treated as a legitimate skill to learn, not just a trend.