For decades, the software industry has made its living translating between domain experts and machines. What happens when the machine no longer needs that translation?


Between late January and early February 2026, stock prices across the software sector collapsed. In a single trading session, roughly $285 billion in market value evaporated — and over the following days, the sector shed more than a trillion dollars in total. SAP fell 16 percent, ServiceNow 11, Salesforce nearly 7. Thomson Reuters suffered a drop of almost 16 percent, the largest single-day loss in the company's history. Hedge funds pocketed $24 billion from short bets against software companies.

The financial press dubbed it the "SaaSpocalypse." But behind the catchy label lies something more than a market correction. What unfolded in those days was the moment the financial markets priced in a realization that had been hanging in the air for months — one that hardly anyone was willing to articulate in its full scope: the business model of the software industry is not facing a cyclical downturn. It is facing a structural break.

Forty Years of Interpreting

To understand why this break runs so deep, it helps to take a step back. Since its earliest days, the software industry has followed a fundamental pattern that has changed remarkably little over four decades.

A domain expert — an accountant, a procurement officer, a lawyer — has a problem that needs to be solved by a machine. That problem must be translated, because the machine does not understand the language of domain experts. So it passes through a chain of abstraction layers. Subject-matter experts describe what they need. Business analysts formalize the requirements. Architects design systems. Developers write code. Testers verify that the result matches the specification. Operations teams keep everything running. Support staff deal with whatever goes wrong anyway.

Each of these layers exists for one single reason: the machine needed interpreters. It could not understand what the accountant meant when she said, "I need an overview of our outstanding receivables, broken down by maturity and customer risk." That sentence had to pass through half a dozen hands before a system produced the desired table.

This chain of translation gave rise to a vast industry. Software houses, consulting firms, systems integrators, cloud providers, tool vendors — they all earn their money because a gap yawns between the domain expert and the machine.

The journey through this chain has always been expensive, slow, and lossy, because every act of translation loses or distorts information. The history of software development is also a history of failed projects. The larger and more complex a project grew, the greater the likelihood that the end result would differ from what the client had envisioned — or that the project would never be completed at all. The entire process framework that is now standard practice, from requirements engineering to formalized acceptance testing to change management procedures, is at its core a reaction to this failure. It was meant to ensure that the client gets what they asked for.

It succeeded only partially. The real problem runs deeper than poorly documented requirements. At the outset, clients often do not know precisely what they want. Their vision matures only through engagement with the system — through seeing and trying out intermediate results. This is exactly why iterative development methods emerged: Scrum, agile frameworks, rapid prototyping. They attempt to involve the client earlier and more frequently, so that misunderstandings do not go undetected for months. But even these methods do not truly shorten the chain. They make the loops smaller, not fewer. The domain expert still does not speak directly to the machine. She speaks to a product owner, who speaks to a team, who speaks to the machine.

Over the past decades, the industry has repeatedly tried to close this gap. SQL was supposed to let business departments access data on their own. The so-called 4GL languages of the 1980s promised that end users could write their own software. RAD tools in the 1990s, then Visual Basic, then low-code platforms — each decade brought a new generation of tools that heralded the end of the translation chain.

None of them delivered on the promise. On the contrary: with every new technology layer, complexity grew, and with it the need for specialists who could master it. Cloud computing simplified the infrastructure but did not shorten the chain of abstraction. Microservices made architectures more flexible but increased rather than reduced the number of roles involved. Every attempt to close the gap between domain expert and machine ended up merely shifting it.

This pattern held for forty years. It held because every simplification to date generated new complexity, which in turn created new demand for translation.

Until now.

When the Interpreter Disappears

On January 12, 2026, Anthropic released Cowork as a research preview. On January 30, eleven industry-specific plugins followed. What looks at first glance like an incremental product update in fact marks a qualitative leap.

Cowork is not a chatbot with file access. It is an agent that plans autonomously, delegates subtasks to sub-agents, works in parallel, and delivers results in professional formats — spreadsheets with working formulas, formatted documents, structured analyses. The plugins go further still. They contain domain knowledge in structured form, enabling Claude to act as a subject-matter expert: a legal plugin encodes negotiation ranges and escalation thresholds for contract clauses. A finance plugin builds financial models and monitors key metrics. A sales plugin researches prospective customers and produces competitive analyses.
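Anthropic does not publish its plugins in this form, but the idea of "domain knowledge in structured form" becomes concrete in a few lines of code. The following sketch, with every clause name and threshold invented for illustration, shows how a legal plugin's negotiation ranges and escalation rules might be encoded as data that an agent consults before acting:

```python
# Hypothetical illustration only: how a legal plugin might encode
# negotiation ranges and escalation thresholds as structured data.
# This is not Anthropic's actual plugin format; all figures are invented.

CLAUSE_RULES = {
    # A real plugin would cover many clause types; one numeric example here.
    "payment_terms_days": {
        "preferred": 30,        # the position the agent opens with
        "acceptable_max": 60,   # the agent may concede up to this value
        # anything beyond acceptable_max exceeds the agent's mandate
    },
}

def route_counterproposal(clause: str, proposed_days: int) -> str:
    """Decide how an agent handles a counterproposal on a contract clause."""
    rule = CLAUSE_RULES[clause]
    if proposed_days <= rule["preferred"]:
        return "accept"
    if proposed_days <= rule["acceptable_max"]:
        return "negotiate"           # inside the concession range
    return "escalate_to_human"       # beyond the encoded mandate

print(route_counterproposal("payment_terms_days", 45))  # negotiate
print(route_counterproposal("payment_terms_days", 90))  # escalate_to_human
```

The point of the sketch is the shape, not the numbers: the domain knowledge lives in data, not in application code, which is what makes it cheap to author and to change.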

The decisive point is not that these tools exist. The decisive point is who they are built for. Cowork is not aimed at developers. Anthropic itself calls it "Claude Code for the rest of work" — for non-technical users, for the domain experts who previously stood at the beginning of the translation chain.

The accountant from earlier can now say directly what she needs. Under the old model, she would have filed a ticket, which a product owner prioritized, which an analyst specified, which a developer implemented, and which a tester signed off on. Three sprints later, she receives a report that almost shows what she wanted, but breaks down the risk classification differently from what she had in mind. So the cycle begins anew.

Under the new model, she describes to the agent in two sentences what she needs. The agent knows accounting terminology, accesses the ERP system, performs the analysis, and delivers a spreadsheet with working formulas. If the risk classification is off, she says so, and the agent corrects it in seconds. No ticket, no sprint, no information loss across six layers of translation.

Cowork is not the only product in this category. OpenAI, Google, and numerous startups are developing comparable agents. But Anthropic's product serves as a useful marker because it makes the break so visible: it is aimed not at developers but at the domain experts themselves.

The Market's Calculus

The SaaS industry lives on a business model premised on a simple assumption: the more people use a piece of software, the higher the revenue — per seat, per user, per license. This model has worked reliably for years, because headcounts in enterprises either held steady or grew.

Agentic AI — that is, AI that acts and executes tasks autonomously — turns this assumption on its head. If an agent does the work that fifty clerks used to do, the company no longer needs fifty licenses. It might need five, for the people who direct the agent and review its output. Industry experts like Jason Lemkin, founder of SaaStr and one of the most influential SaaS analysts, paint a stark picture: if AI agents can replace entire sales teams, a massive decline in user numbers looms — and with it, a collapse in license revenue.

The financial markets are reacting to this arithmetic. When Palantir's CTO Shyam Sankar declared during the quarterly earnings call that their AI product reduces complex SAP ERP migrations from years to a matter of weeks, the valuations of traditional SaaS providers fell by an average of twelve percent within sixty minutes. Not because the quarterly numbers were poor — ServiceNow actually beat expectations — but because investors recognized the license model as structurally vulnerable.

Hedge funds have significantly reduced their positions in software companies. The hyperscalers are investing roughly $600 billion in AI infrastructure in 2026, and a substantial share of that money is being reallocated from enterprise software budgets. CIOs want fewer vendors, not more. Consolidation, not point solutions.

Lemkin himself, however, puts the AI narrative in perspective. Growth rates in the SaaS sector have declined in every single quarter since the 2021 peak. The story of AI disruption, he argues, merely gives the market permission to carry out an overdue repricing. He is probably right about that. But his argument does not weaken the thesis; it strengthens it: the industry was already fragile. Agentic AI delivers the push that tips an already unstable system over the edge.

Inertia Buys Time, Not a Future

Anyone who takes the trouble to consider the counterarguments in good faith will find some worth engaging with.

SAP argues that AI agents will amplify the capabilities of SaaS solutions rather than replace them. Agents need clean, structured data and proven processes to deliver reliable results — and that, SAP contends, is precisely the advantage of incumbent providers. Rene Haas, CEO of chip designer Arm Holdings, speaks from a hardware perspective of "micro-hysteria" and points out that enterprise AI adoption is still in its infancy.

Bank of America identifies a logical contradiction: you cannot simultaneously believe that the massive AI investments will not pay off and that AI is powerful enough to destroy established software models. If AI is strong enough for the latter, the infrastructure investments behind it must eventually pay for themselves.

In regulated industries, another objection carries significant weight: the processes in financial services, pharmaceuticals, or aviation are too complex and too consequential for a single requester to oversee. That is precisely why business analysts, compliance departments, and quality assurance exist. Closely related is the problem of data quality: many enterprises keep their data in silos, inconsistent and incomplete. An agent that accesses bad data delivers bad results — no matter how intelligent it is.

Experienced developers warn of the code quality crisis that will follow the "vibe coding" boom. AI-generated code from 2026 will create substantial cleanup work in 2027. Skeptics draw the comparison to autonomous driving: a disruption announced for years that has yet to materialize.

And there is the counterintuitive effect: in some cases, AI may actually increase the number of software products in use. A marketing team that, thanks to AI agents, reaches three times as many customers may end up using more tools than before.

These objections deserve a fair hearing. But none of them refutes the direction of the change.

SAP's argument conflates data with software. The fact that agents need structured data does not mean they need SaaS interfaces. The data is the real value. The applications built around that data today are precisely the abstraction layer that can fall away. The data quality problem is real, but it does not argue in favor of incumbent software — it argues that the data layer itself is becoming the strategic field of investment. Those who get their data in order no longer need the application layer on top.

Bank of America's contradiction dissolves once you allow for different time horizons: the AI infrastructure investments are justified by the productivity gains that agents are already delivering today. The disruption of software business models operates on a time delay. The market is pricing in both — one as opportunity, the other as risk.

The regulation objection underestimates what already exists in regulated industries: documentation requirements. Anyone operating under ISO 27001, GxP, or SOX has their processes documented in machine-readable form — because auditors demand it. An agent that understands not only the business requirement but also the process handbook can incorporate compliance rules autonomously during execution. The human auditor will be needed for the foreseeable future, but what they audit is then the work of an agent, not the work of six departments.
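How an agent might incorporate such a handbook can be sketched in a few lines. The following is a hypothetical illustration, with all step and task names invented: mandatory steps from a machine-readable process handbook are merged into the agent's own plan before anything executes.

```python
# Hypothetical sketch: an agent merges mandatory steps from a machine-
# readable process handbook into its plan before execution. The handbook
# entries and task names are invented; real ones would come from a QMS.

HANDBOOK = {
    "change_production_report": [
        "record_change_request",  # the (invented) SOP requires this first
        "four_eyes_review",       # a second party must sign off
    ],
}

def compliant_plan(task: str, agent_steps: list[str]) -> list[str]:
    """Prepend the handbook's mandatory steps, deduplicating the rest."""
    required = HANDBOOK.get(task, [])
    return required + [s for s in agent_steps if s not in required]

plan = compliant_plan("change_production_report",
                      ["query_erp", "build_spreadsheet", "four_eyes_review"])
print(plan)
# ['record_change_request', 'four_eyes_review', 'query_erp', 'build_spreadsheet']
```

The design choice worth noting: compliance is enforced by construction of the plan, not by a reviewer catching omissions afterward.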

The autonomous driving comparison is misleading because the physical world is orders of magnitude more complex than the world of data and documents in which AI agents already operate today. The code quality crisis concerns the transition, not the end state. And the counterintuitive effect applies to individual teams, not to the overall model: if a marketing team produces three times the output, it may need more tools — but fewer people to operate them. The license model breaks regardless.

What remains is the argument from inertia. Companies have invested hundreds of millions in their ERP systems. Those will not be switched off overnight. That is true. But inertia buys time; it does not prevent the change. When Palantir reduces ERP migrations from years to weeks, the moat grows thinner, not deeper.

When Machines Commission Each Other

The argument so far has centered on the machine learning to communicate with domain experts. But that is only half the story.

The other half becomes visible when you observe how the technical infrastructure is changing. The Model Context Protocol enables AI agents to access external tools and data sources without a human manually configuring the interfaces. Early approaches demonstrate how agents can generate dynamic, context-dependent interfaces — software that materializes on demand and dissolves afterward.
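The pattern is easier to see in code than in prose. The following toy sketch is plain Python, not the actual MCP wire format: tools register themselves with a machine-readable description, and an agent can discover and invoke them by name at runtime, with no hand-built integration per tool.

```python
# Toy illustration of the tool-discovery pattern behind protocols like MCP.
# This is not the real MCP protocol, only the underlying idea: tools are
# self-describing, so no interface is hand-configured per tool.

from typing import Callable

TOOLS: dict[str, dict] = {}

def register_tool(name: str, description: str):
    """Register a function so agents can discover it by description."""
    def wrap(fn: Callable):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@register_tool("outstanding_receivables", "Sum open invoices per customer")
def outstanding_receivables(invoices: list[dict]) -> dict:
    totals: dict[str, float] = {}
    for inv in invoices:
        if inv["status"] == "open":
            totals[inv["customer"]] = totals.get(inv["customer"], 0.0) + inv["amount"]
    return totals

# An agent first discovers what is available...
available = {name: meta["description"] for name, meta in TOOLS.items()}
# ...then invokes a tool by name, without a hand-wired integration.
result = TOOLS["outstanding_receivables"]["fn"]([
    {"customer": "Acme", "amount": 1200.0, "status": "open"},
    {"customer": "Acme", "amount": 300.0, "status": "paid"},
    {"customer": "Bolt", "amount": 450.0, "status": "open"},
])
print(result)  # {'Acme': 1200.0, 'Bolt': 450.0}
```

Note that the tool here answers exactly the accountant's question from earlier; the difference is that no interpreter sits between her request and the call.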

What is emerging here is a closed loop in which no human mediates between the individual steps. A concrete scenario makes this tangible: an AI agent monitoring sales recognizes that an existing customer has increased their usage volume. It autonomously commissions a second agent to calculate a suitable upgrade offer. A third agent checks whether the offer complies with the regulatory requirements of the customer's market. A fourth drafts the offer text, tailored to the existing correspondence with that customer. Only the finished, verified offer reaches a human — the sales director, who decides whether to send it.

The obvious objection: what if one of these agents makes a wrong call? What if the second agent produces an offer below margin, or the compliance agent overlooks a regulatory requirement? The objection has weight, and it argues for keeping the human as a check in many cases for the time being. But it does not argue for keeping the chain as it is. Because the agents themselves are improving. And more importantly, guardrails can be built into the communication between them — rule-based safeguards that ensure certain thresholds are not exceeded, certain verification steps are not skipped, and certain decisions are not made without human approval. Control does not disappear. It shifts from the manual review of every single step to the definition of the rules within which the machines are permitted to operate.
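What such a guardrail might look like can be sketched minimally. The following is a hypothetical example, with both thresholds invented: before the upgrade offer from the scenario above leaves the agent pipeline, a rule layer checks it against a margin floor and a discount cap, and routes anything outside those bounds to a human.

```python
# Minimal sketch of a rule-based guardrail between agents. Both thresholds
# are invented for illustration; a real system would load them from policy.

MIN_MARGIN = 0.20         # offers below 20% margin are rejected outright
MAX_AUTO_DISCOUNT = 0.15  # discounts above 15% require human approval

def route_offer(offer: dict) -> str:
    """Decide whether an agent-generated offer may proceed automatically."""
    margin = (offer["price"] - offer["cost"]) / offer["price"]
    if margin < MIN_MARGIN:
        return "reject"                # hard floor: never send below margin
    if offer["discount"] > MAX_AUTO_DISCOUNT:
        return "needs_human_approval"  # within policy, but a human decides
    return "auto_approve"              # inside the rules, machines may act

print(route_offer({"price": 100.0, "cost": 70.0, "discount": 0.10}))  # auto_approve
print(route_offer({"price": 100.0, "cost": 85.0, "discount": 0.05}))  # reject
```

The shift described above is visible even in this toy: the human effort goes into choosing MIN_MARGIN and MAX_AUTO_DISCOUNT once, not into reviewing every offer.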

The human no longer stands inside the chain of translation. They stand at the beginning, as principal and rule-setter, and at the end, as decision-maker. Everything in between is handled by machines.

For anyone familiar with the history of the software industry, this is a remarkable moment. For forty years, the central value creation of this industry has been the act of translation between human intent and machine execution. When that translation is automated, the industry does not lose a product or a market segment. It loses its reason for being.

The End of the Growth Model

It would be reckless to conclude from all this that Salesforce, SAP, and ServiceNow will cease to exist tomorrow. Large companies with deep customer relationships, proprietary datasets, and high switching-cost barriers do not vanish overnight. Some will transform into slower, dividend-oriented businesses that live off their installed base.

But the growth model is broken. New SaaS startups that were heading toward richly valued IPOs find themselves in a world where their product may be obsolete before it reaches the market. Private equity funds are already circling. Consolidation is coming.

The consequences of this upheaval must be stated plainly. The chain of abstraction employs millions of people worldwide: business analysts, project managers, developers, testers, consultants, support staff. When the chain grows shorter, links fall away. Not all at once, and not everywhere at the same speed. But the direction is unambiguous. Anyone who has built their livelihood on translating between domain experts and machines must sooner or later confront the question of what happens when that translation is no longer needed.

And the pace of development leaves the industry little room to adapt. Within a few months of its full launch, Claude Code reached an annualized revenue run rate of over $500 million — on its way to a billion. Cowork was built with that same Claude Code in roughly a week and a half. The plugin architecture allows domain knowledge to be captured in simple text files — not in millions of lines of application code. The cycle times of AI development no longer have anything in common with those of the software industry.

The software industry has its own patterns, its own planning horizons, its own assumptions about how fast markets change. Those assumptions are based on empirical data from a world where disruption took years. In the current world, it takes weeks. The industry cannot adapt its own patterns fast enough.

From IT System to Direct Solution

What we are witnessing is a transition between two fundamentally different mental models.

The old model asks: which IT system solves my problem? The answer requires requirements analysis, vendor selection, implementation, customization, training. A process that takes months to years and sustains an entire infrastructure of consultants, integrators, and support providers.

The new model asks: how do I describe my problem so that the machine solves it? The answer requires a precise formulation in natural language. The time required is measured in minutes.

What separates these two models is not merely an efficiency gain. It is a paradigm shift that calls into question the entire value chain of the software industry. Software itself does not become superfluous; the industry that builds, sells, and maintains it, however, loses the foundation of its business model. To be clear: infrastructure software — operating systems, databases, network protocols — will still be needed. What falls away is the application layer that mediates between that infrastructure and the domain experts.

In this dynamic lies a bitter irony. The software industry built the infrastructure on which AI runs. It erected the cloud platforms, filled the databases, standardized the APIs, created the development tools with which AI models are trained and deployed. Without forty years of software development, there would be no AI capable of making software development obsolete. The revolution devours its parents.

Whether this happens in two years or in ten, no one can credibly predict. But that it is happening — of that, the signals of recent weeks leave little doubt. The market has understood. The question is whether the rest of the industry will follow quickly enough.


As of February 2026. This text is based on publicly available sources, in particular reporting by CNBC, Fortune, Bloomberg, Financial Times, Yahoo Finance, CNN Business, VentureBeat, The Information, and SaaStr, as well as product announcements and financial disclosures from Anthropic, Palantir, SAP, and Arm Holdings. Market data is drawn from analyses by S3 Partners, Bank of America, CreditSights, and Goldman Sachs.