AI coding is not going to replace us anytime soon
Introduction
The rapid proliferation of Artificial Intelligence (AI) has dominated the tech industry narrative, with promises of transformative productivity gains and revolutionary new capabilities. Technologies like large language models (LLMs) have given rise to specialized roles such as prompt engineering and a new class of tools, AI coding agents, which have been touted as game-changers for software development. This report provides a comprehensive analysis of AI's current position in the tech industry, critically examining whether these technologies live up to their promises. By digging deep into industry metrics, real-world use cases, and key performance indicators (KPIs), we aim to separate the hype from the reality and provide a data-driven perspective on the true state of AI in 2025.
Executive Summary
Our analysis reveals a significant and widening discrepancy between the hype surrounding AI and the reality of its implementation and impact. While AI adoption is widespread, its return on investment has yet to materialize, even at large companies that praised its revolutionary impact on process automation. The data overwhelmingly demonstrates that AI technologies, particularly prompt engineering and AI coding agents, have not lived up to their promises for most companies.
Key findings indicate that AI projects suffer from catastrophic failure rates, with 95% of generative AI pilots failing to deliver ROI [1]. The role of the prompt engineer, once hailed as the "job of 2024," has become largely obsolete, with an 85% collapse in job search interest as AI models have become more sophisticated [2]. Furthermore, the promised productivity gains from AI coding agents are highly questionable. While some studies report modest gains on repetitive tasks, the most rigorous research to date, a randomized controlled trial, found that AI tools actually slowed down experienced developers by 19% [3].

This report synthesizes data from multiple sources to present a clear-eyed view of the AI landscape. We will explore the key performance indicators that expose the hype-reality gap, analyze the specific performance of AI coding agents, deconstruct the rise and fall of prompt engineering, and examine the paradox of massive investment versus minimal returns. The conclusion is stark: for 95% of enterprises, the AI revolution is more promise than performance.
The Hype-Reality Gap: Adoption vs. Success
The most striking finding from our research is the vast delta between AI adoption and successful implementation. While nearly every organization is experimenting with AI, very few are translating these efforts into tangible, enterprise-wide value. This gap is the central theme of the current AI landscape. According to a 2025 McKinsey Global Survey, 88% of organizations report using AI in at least one business function [4]. However, this high adoption rate masks a grim reality. Only a third of these companies have begun to scale their AI programs, and a mere 5% have successfully deployed custom enterprise AI tools into production [5]. This creates an 83-percentage-point gap between trying AI and succeeding with it, as illustrated below.
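As a quick sanity check, the funnel above can be recomputed directly from the cited figures (an illustrative sketch, not code from any of the reports):

```python
# Adoption funnel figures cited from McKinsey [4] and MIT [5].
adoption = 0.88    # using AI in at least one business function
scaling = 0.33     # have begun to scale their AI programs
production = 0.05  # custom enterprise AI tools deployed in production

# The gap between trying AI and succeeding with it, in percentage points.
gap_points = (adoption - production) * 100
print(f"{gap_points:.0f}-percentage-point gap")  # 83-percentage-point gap
```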
[Figure: AI adoption (88%) vs. scaling (33%) vs. production deployment (5%)]
This chasm is further evidenced by catastrophic project failure rates. A 2025 MIT report found that 95% of generative AI pilots fail to deliver a return on investment [1]. This is more than double the failure rate of traditional, non-AI IT projects, which stands at around 40% [6]. The situation is so dire that 42% of companies abandoned most of their AI initiatives in 2025, a 148% increase from the previous year [7].
[Figure: generative AI pilot failure rate (95%) vs. traditional IT project failure rate (~40%)]
For the few who do succeed, the rewards are significant. However, these "AI high performers" account for only 6% of organizations, the share that achieves a meaningful (5%+) EBIT impact from AI [4]. The other 94% see minimal to zero financial returns, calling into question the massive investments being poured into the technology.
AI Coding Agents: Overrated and Underperforming
AI coding agents like GitHub Copilot, Claude, and Windsurf have been marketed as revolutionary tools that will supercharge developer productivity. The data, however, tells a more nuanced and often contradictory story.
GitHub Copilot, the market leader with over 15 million users, claims to make developers up to 51% faster [8]. While our research confirms high engagement, with 67% of developers using it at least five days a week, the actual effectiveness is questionable. The average acceptance rate for Copilot's suggestions is only 30%, meaning developers reject 70% of what the AI generates [8]. More alarmingly, a significant portion of the accepted code may be flawed, with 29.1% of Python code generated by Copilot containing potential security vulnerabilities [8].
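Taken together, these two figures compound. A back-of-envelope estimate of how many suggestions end up both accepted and flawed, assuming purely for illustration that acceptance and vulnerability are independent (the study makes no such claim):

```python
acceptance_rate = 0.30  # share of Copilot suggestions developers accept [8]
vuln_rate = 0.291       # share of generated Python with potential vulnerabilities [8]

# Illustrative only: treats acceptance and vulnerability as independent,
# a simplification rather than a finding of the underlying study.
accepted_and_vulnerable = acceptance_rate * vuln_rate
print(f"{accepted_and_vulnerable:.1%} of all suggestions")  # 8.7% of all suggestions
```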
[Figure: GitHub Copilot adoption, suggestion acceptance rate, and security-vulnerability rate]
The most definitive evidence comes from a rigorous Randomized Controlled Trial (RCT) conducted by METR in 2025. The study, which focused on experienced open-source developers, produced a shocking result: AI tools made these developers 19% slower [3]. This stands in stark contrast to both expert forecasts and the developers' own self-reported perceptions. Even after being slowed down by the AI, developers believed the tools had made them 20% faster, highlighting a massive perception gap.
[Figure: METR RCT — perceived 20% speedup vs. measured 19% slowdown]
A 2025 Bain & Company report corroborates the limited impact, finding that while AI assistants can provide a 10-15% productivity boost on specific tasks, these gains often fail to translate into positive ROI because the time saved is not effectively redirected to higher-value work [9]. The report also notes that writing and testing code—the primary focus of these tools—only accounts for about 25-35% of the total software development lifecycle.
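To see why a per-task boost of this size barely moves the end-to-end needle, an Amdahl's-law-style estimate helps (an illustrative sketch using Bain's mid-range figures, not a calculation from the report itself):

```python
def overall_speedup(fraction: float, task_speedup: float) -> float:
    """Amdahl's-law-style estimate: only `fraction` of total work
    is accelerated, by a factor of `task_speedup`."""
    return 1.0 / ((1.0 - fraction) + fraction / task_speedup)

# Bain's mid-range figures: a 15% boost (factor 1.15) applied to the
# ~30% of the software lifecycle spent writing and testing code.
s = overall_speedup(fraction=0.30, task_speedup=1.15)
print(f"{(s - 1) * 100:.1f}% end-to-end gain")  # roughly a 4% end-to-end gain
```

Even under generous assumptions, a double-digit boost on a minority of the work yields only a single-digit gain overall, which is consistent with the report's skepticism about ROI.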
Prompt Engineering: The Rise and Fall of a Hyped Role
The trajectory of prompt engineering as a specialized profession serves as a powerful case study in AI hype. Touted as "the job of 2024," with salaries reaching as high as $375,000 [2], the role has become largely obsolete in just 18 months.
Job search interest for "prompt engineer" on Indeed surged from 2 per million searches in January 2023 to a peak of 144 in April 2023. By 2025, it had collapsed by 85%, plateauing at 20-30 searches per million [2]. A Microsoft survey confirmed this trend, ranking the prompt engineer role second to last among new positions companies are considering adding [2].
[Figure: Indeed search interest for "prompt engineer," January 2023 to 2025]
The primary reasons for this rapid decline are twofold:

1. AI model maturity: Newer models like GPT-4 and Gemini 2.5 are far more intelligent and capable of understanding natural, imperfect language. The need for a human to meticulously craft the "perfect prompt" has been engineered away by the AI itself.
2. Democratization of knowledge: Free resources, such as OpenAI's own academy, have made prompt engineering a basic literacy skill, not a specialized expertise. Companies have found it more practical to upskill their existing workforce rather than hire expensive specialists.

As Nationwide CTO Jim Fowler stated, prompt engineering is becoming "a capability within a job title, not a job title to itself" [2]. The hype outpaced the sustainable need for the role, which was quickly commoditized.
Investment vs. Reality: The Great Disconnect
The disconnect between AI hype and reality is most apparent when examining investment data against success metrics. In 2024, U.S. companies poured $109.1 billion into AI [10]. Yet, this massive investment was met with a 95% failure rate for generative AI pilots [1].
This paradox highlights a fundamental misalignment of capital and strategy. The claimed industry-wide ROI of $3.70 per dollar invested is difficult to reconcile with the observed failure and abandonment rates. Furthermore, companies are misallocating their internal budgets. Over half of generative AI budgets are devoted to sales and marketing, yet the biggest ROI is consistently found in back-office automation [1]. This strategic failure is compounded by a decline in developer enthusiasm. Between 2021 and 2023, the percentage of developers excited about AI fell from 18% to 10%, while those concerned about it rose to 52% [11].
Conclusion: The Reality Check We Needed

The evidence gathered and analyzed in this report leads to an unavoidable conclusion: for the vast majority of organizations, the promises of the current AI wave have not been met. The landscape in 2025 is one of widespread disillusionment, characterized by high adoption but low success, massive investment but minimal returns, and questionable productivity gains. The data is clear: 95% of generative AI pilots fail, experienced developers are slowed down by 19% when using AI tools, and prompt engineering has become an obsolete profession in just 18 months.
However, this does not mean AI coding tools are worthless. The problem is not the technology itself, but rather the misalignment between promises and actual use cases. While these tools have failed to deliver the revolutionary productivity gains marketed by vendors, they have proven valuable in specific, often overlooked scenarios.
Where AI Coding Tools Actually Shine
Despite the disappointing aggregate statistics, developers who use AI tools strategically report genuine value in several key areas that are rarely highlighted in marketing materials. Reverse engineering is one domain where AI coding assistants excel. When faced with unfamiliar codebases, legacy systems, or poorly documented APIs, tools like Claude, ChatGPT, and GitHub Copilot can rapidly explain complex code structures, identify patterns, and suggest how different components interact. This capability transforms what would be hours of manual code archaeology into minutes of guided exploration.
Brainstorming and ideation represent another underappreciated strength. When developers are stuck on architectural decisions, algorithm choices, or debugging strategies, AI tools serve as effective thought partners. They can generate multiple approaches to a problem, suggest alternative implementations, and help developers think through edge cases they might not have considered. This is not about accepting AI-generated code wholesale, but rather using the AI as a catalyst for human creativity and problem-solving.

Additionally, AI tools prove useful for learning new languages or frameworks. Junior developers and those expanding their skill sets benefit from real-time explanations, syntax suggestions, and pattern recognition that accelerate the learning curve. The key difference is that these use cases do not rely on the AI being "right" all the time; they rely on it being a useful conversational partner in the development process.

The Path Forward: Realistic Expectations, Strategic Use
For AI to move from a hyped-up novelty to a genuinely useful tool, the industry must undergo a fundamental recalibration of expectations. Organizations should abandon the fantasy of 30-50% productivity gains and enterprise-wide transformation. Instead, they should focus on specific, measurable use cases where AI provides clear value: code explanation, rapid prototyping, learning assistance, and brainstorming.
This requires a shift from treating AI as a replacement for human expertise to viewing it as a specialized assistant for particular tasks. Developers should be trained not in "prompt engineering," but in critical evaluation—knowing when to use AI, when to ignore its suggestions, and how to verify its outputs. Companies must invest in rigorous measurement, moving away from self-reported productivity claims toward controlled studies that reveal actual impact.
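In practice, critical evaluation can be as simple as refusing to adopt an AI suggestion until it passes checks the human wrote first. A minimal sketch of that workflow, where the hypothetical `slugify` function stands in for an AI-suggested implementation:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical AI-suggested implementation, treated as untrusted."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def check(fn) -> bool:
    """The human-owned part: properties the output must satisfy,
    written *before* the suggestion is accepted."""
    assert fn("Hello, World!") == "hello-world"
    assert fn("  spaces  ") == "spaces"
    assert fn("") == ""  # edge case an AI suggestion might miss
    return True

assert check(slugify)  # only merge the suggestion once this passes
```

The point is not this particular function but the ordering: the verification criteria come from the developer, and the AI's output is gated behind them rather than trusted on sight.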
The great AI disillusionment of 2025 is not the end of the story. It is a necessary correction, a reality check that separates genuine utility from inflated promises. AI coding tools have a place in the developer's toolkit, but it is a far more modest and nuanced place than the one promised by the $109 billion hype machine. The future belongs not to those who blindly adopt AI, but to those who use it strategically, skeptically, and with clear-eyed realism about what it can and cannot do.
References

[1] MIT report: 95% of generative AI pilots at companies are failing
[2] Prompt Engineering Jobs Are Obsolete in 2025 – Here's Why
[3] Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
[4] The State of AI: Global Survey 2025
[5] The GenAI Divide: State of AI in Business 2025
[6] Why AI Projects Fail and How They Can Succeed
[7] The hidden tax: An 80% AI project failure rate
[8] GitHub Copilot Statistics & Adoption Trends [2025]
[9] From Pilots to Payoff: Generative AI in Software Development
[10] 50 AI Adoption Statistics in 2025
[11] Between 70-85% of GenAI deployment efforts are failing to...
Read the full article here: https://ai.plainenglish.io/ai-coding-is-not-going-to-replace-us-anytime-soon-3c1c457c92f4

