1. The Trillion-Dollar Question
In February 2026, Anthropic released a set of plugins for Claude Cowork. Not a new model. Not a chatbot upgrade. Plugins. Within 24 hours, software stocks lost $285 billion in market value.
A plugin marketplace announcement erased more wealth in a single day than most industries generate in a year. FactSet dropped from a $20B peak to under $8B. S&P Global lost 30% in weeks. Thomson Reuters shed almost half its market cap in a year. The S&P 500 Software & Services Index (140 companies) fell 20% year to date. Wall Street is not scared of AI anymore. Wall Street is scared of what AI replaces.
Everyone in the industry is now asking some version of the same question: in a world where Anthropic, OpenAI, and Google are pouring hundreds of billions into building the most powerful general-purpose AI systems on earth, why does vertical software need to exist? How do companies like Bloomberg, FactSet, or Veeva earn a right to win? Why don't the foundation models just eat everything?
I have been reading every take on this debate since the February selloff. Three voices in particular stand out to me, not because they agree on everything, but because they arrive at the same structural insight from completely different vantage points. George Sivulka, the founder of Hebbia, argues that process engineering is the moat and it only gets stronger with better models. Nicolas Bustamante, the CEO of Fintool, has been on both sides of the disruption and maps out exactly which moats survive and which don't. And David Ondrej, who built and exited an AI startup in fourteen months, sees the biggest startup opportunity in a decade hiding inside the destruction. All three are right. And the synthesis of their arguments tells you something more important than any one of them alone.
2. The Ten Moats
Bustamante provides the most useful framework here. He identifies ten distinct moats that made vertical software defensible for decades, and then stress-tests each one against what LLMs actually do. The results are not uniform. Some moats are destroyed. Some are strengthened. And the distinction between the two is where all the alpha is.
| # | Moat | LLM Impact | Verdict |
|---|---|---|---|
| 01 | Learned Interfaces | Chat collapses all proprietary UIs into one | Destroyed |
| 02 | Custom Workflows | Business logic moves from code to markdown skills | Vaporized |
| 03 | Public Data Access | Frontier models parse SEC filings without custom parsers | Commoditized |
| 04 | Talent Scarcity | Domain experts encode expertise directly | Inverted |
| 05 | Bundling | Agents cherry-pick best providers | Weakened |
| 06 | Private / Proprietary Data | Truly unique data becomes even more valuable | Stronger |
| 07 | Regulatory Lock-in | HIPAA, FDA, SOX unchanged by LLMs | Structural |
| 08 | Network Effects | Communication networks remain self-reinforcing | Sticky |
| 09 | Transaction Embedding | Payment/loan/claims rails stay essential | Durable |
| 10 | System of Record | Agents build richer contextual memory over time | Threatened |
The pattern is clear. The moats that were built on friction—interface complexity, workflow lock-in, data access barriers, talent scarcity—are being systematically dissolved. The moats that were built on substance—proprietary data, regulatory compliance, network effects, transaction processing—are holding or getting stronger. The market is repricing software companies accordingly, and it is mostly right to do so. The question is whether it is overshooting.
"The market is finally distinguishing between companies that own something genuinely scarce and companies that were just middlemen with nice interfaces."
3. Process Is the Product
This is where Sivulka's argument becomes important, and where I think a lot of the bearish takes on vertical software go wrong. The bear case assumes that software value lives in the code: the interface, the business logic, the data access layer. And if LLMs can replicate all three, the value evaporates. That logic is clean and it is mostly wrong.
The value of enterprise software has never been in the code. It has always been in the process knowledge embedded in the code. Software is a stored process. It encodes how a specific group of people does their specific job, and that encoding becomes load-bearing over time. Not because the interface is hard to learn, but because the institutional knowledge stored inside is genuinely costly and difficult to reconstruct.
Take finance. From fifty thousand feet, every firm looks identical. They all do due diligence, form opinions on valuations, produce memos, run models, synthesize research. If you wrote one product spec for all of them, nobody would buy it. You need to zoom past the firm level, past the team level, all the way down to the desk and the individual MD. The private credit team at one firm uses a completely different set of compliance flags than the private equity team at the same firm. Two MDs at the same bank will have entirely different standards for what a good CIM summary looks like. One team runs diligence through a forty-page template; the team down the hall does it in a shared spreadsheet that gets emailed around.
That is the last mile. And the last mile is not the final configuration step before a product go-live. It is a recognition that what you are deploying is not just software but an embodiment of how a specific team of specific people does their specific job. The way they do their particular job might be 90% routine and 10% idiosyncratic. But the 10% is where deals get done and careers get made. That is where all the differentiated value resides. And that is where the moat lives.
"Software is a social contract. General-purpose AI tools cannot be opinionated in the way that load-bearing enterprise software must be."
Here is the critical point: better models do not make this less true. They make it more true. The o-series releases from OpenAI were supposed to crush legal AI. The reasoning was intuitive: dramatically better models mean a thinner application layer. The exact opposite happened. Better models compounded the orchestration layer's value, because the orchestration layer is where reliability lives. You can have the most capable model on earth and still produce garbage if you do not have the scaffolding to constrain, verify, and route that capability through a specific professional workflow.
An asset manager does not need a system that gets things right 80% of the time. In finance, 90% right is the same as 100% wrong. A model could produce a confident, well-formatted, mostly-correct answer about a deal you are about to execute. But if it gets one detail wrong or formats it in a way that differs from your desk's standards, the entire thing is useless. Building that level of reliability requires knowing what the work actually looks like. And that is precisely what general-purpose AI cannot do.
4. The Thin Middle Squeeze
If Sivulka tells you what survives and why, Ondrej tells you what dies and how. His framework is the clearest structural explanation of the market selloff I have seen, and it comes down to three layers.
The Three-Layer Stack

- The AI Agent Layer: the thing that actually executes the work. Absorbing value from below.
- The SaaS UI Layer: dashboards, workflows, buttons. The thin middle. Getting crushed.
- Systems of Record: databases, CRMs, ERPs. The real data. Absorbing value from above.
Value is getting sucked upward into the agent layer and downward into the data layer. Everything in the thin middle gets crushed. That is why Adobe's forward P/E dropped from 30 to 12. ServiceNow went from 67 to 28. Not because people do not need what they do, but because investors realized the moat around nice UI plus integrations is paper-thin when an AI agent can bypass the UI entirely.
But here is the part that makes this more nuanced than the headlines suggest. If an AI agent can do the work of ten employees, you do not need ten software seats anymore. AI does not kill the software directly. It kills the headcount that uses the software. Which kills the per-seat revenue model. Which kills the business. The destruction is indirect, and that is why it took the market a while to price it in.
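The mechanism in that chain is just multiplication. A toy sketch: the 10x agent leverage comes from the paragraph above, but the seat count and per-seat price are entirely hypothetical numbers of my own, chosen only to make the cascade concrete:

```python
seats_before = 1_000           # licensed users at a hypothetical account
annual_price_per_seat = 1_200  # illustrative per-seat price in dollars

# If one AI agent does the work of ten employees, the headcount that
# needed a seat shrinks tenfold, and per-seat revenue shrinks with it.
agent_leverage = 10
seats_after = seats_before // agent_leverage

revenue_before = seats_before * annual_price_per_seat  # 1,200,000
revenue_after = seats_after * annual_price_per_seat    # 120,000

print(revenue_before, revenue_after)  # 1200000 120000
```

The software did not get worse and the customer did not churn; the seat count simply collapsed underneath the pricing model.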
| Company | Peak P/E | Current P/E | Compression |
|---|---|---|---|
| FactSet | ~35x | ~12x | -66% |
| Adobe | ~30x | ~12x | -60% |
| ServiceNow | ~67x | ~28x | -58% |
| Thomson Reuters | ~40x | ~22x | -45% |
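The compression column is just the percentage change in the multiple, current / peak - 1. A quick check against the approximate figures in the table:

```python
# Approximate (peak, current) forward P/E multiples from the table above.
multiples = {
    "FactSet": (35, 12),
    "Adobe": (30, 12),
    "ServiceNow": (67, 28),
    "Thomson Reuters": (40, 22),
}

for name, (peak, current) in multiples.items():
    compression = current / peak - 1  # negative means the multiple contracted
    print(f"{name}: {compression:.0%}")
```

Running this reproduces the table's -66%, -60%, -58%, and -45%.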
5. What Survives
All three voices converge on the same answer, even though they frame it differently. What survives is what cannot be replicated by a general-purpose model sitting on top of a chat interface. The question you have to ask about any vertical software company is simple: does it own something that an AI agent cannot route around?
Bustamante gives the clearest filter. Three questions: Does the company own proprietary data that is genuinely scarce? Is it embedded in a regulatory or compliance framework? Does it process transactions? If the answer to at least two is yes, the company is probably safe. If the answer to all three is no, the company is a shell waiting to be bypassed.
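Bustamante's filter is concrete enough to write down as a scoring rule. The names and the handling of the in-between case here are my own illustration, not anything from his framework:

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    proprietary_data: bool        # owns genuinely scarce data?
    regulatory_embedding: bool    # embedded in a compliance framework?
    processes_transactions: bool  # sits on payment/loan/claims rails?

def bustamante_verdict(c: Company) -> str:
    yes_count = sum(
        [c.proprietary_data, c.regulatory_embedding, c.processes_transactions]
    )
    if yes_count >= 2:
        return "probably safe"
    if yes_count == 0:
        return "shell waiting to be bypassed"
    return "exposed"  # one yes: the filter leaves this middle zone ambiguous

# Illustrative scorecards, not investment analysis:
print(bustamante_verdict(Company("Bloomberg", True, False, True)))
print(bustamante_verdict(Company("DashboardCo", False, False, False)))
```

The single-yes middle case is where the interesting arguments live; the filter only gives clean answers at the extremes.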
Risk Assessment Framework

- Highest risk: UI wrappers on public data. Analytics dashboards, reporting tools, data visualization layers. No proprietary data, no regulatory lock-in, no transaction processing. Pure thin middle.
- Moderate risk: deep workflow tools in regulated industries. Some proprietary data, some compliance embedding. Vulnerable to AI-native challengers but protected by procurement cycles and switching costs.
- Lowest risk: transaction-embedded systems of record with proprietary data. Bloomberg (terminal + IB chat network + proprietary pricing). Epic (clinical data + HIPAA). Veeva (FDA-regulated content management). These are not software companies. They are infrastructure.
Sivulka makes the same argument from the builder's perspective. Bloomberg is not sticky because its interface is hard to learn. Bloomberg is sticky because a generation of finance professionals was trained on it, communicates through it, and has built an entire economy of social norms around it. That is a network effect. Shared tooling creates a shared language that becomes powerfully self-reinforcing. And that moat does not go away just because migrations become easy.
The new AI-native tools that win in finance will not win by being neutral substrates for generic work. They will win by becoming load-bearing: by knowing the workflows well enough, by having done the work of learning each firm's and each desk's specific version of that workflow, so that replacing them becomes genuinely costly. The switching cost is not the interface. It is the institutional knowledge that would have to be reconstructed from scratch.
6. Where the Money Goes
Here is the part that the doom narrative completely misses. Companies will spend more on software this year than ever before. Enterprise AI capital expenditure alone will exceed $470 billion in 2026. The money is not disappearing. It is moving. And most people are so busy panicking about the destruction that they are blind to where it lands.
The irony that nobody is talking about: Anthropic's Cowork, the product that supposedly killed SaaS, is itself a SaaS product. Sold via subscription. To organizations. On the internet. SaaS as a delivery model is fine. It was always fine. SaaS as a business strategy built on shallow moats and per-seat pricing for commodity workflows—that is what is over.
| Where Value Moves | Description |
|---|---|
| AI Platform Subscriptions | Usage-based, consumption-based. Not per-seat. The model providers and orchestration layers capture what per-seat software loses. |
| Systems of Record | The databases, CRMs, and ERPs that store the real data. Agents need them. They become more essential, not less. |
| Security & Compliance | AI agents operating across enterprise systems create entirely new attack surfaces and audit requirements. This spend is additive. |
| Outcome-Based Pricing | The winning vertical AI companies will charge based on value delivered, not seats occupied. Finance is the sector where this works best because the marginal value of a correct answer is enormous. |
| Services | Process engineering is services work. Firm-by-firm, team-by-team, MD-by-MD. The grinding institutional work that every great vertical software empire was built on. |
Sivulka makes a point here that I think is underappreciated. Finance is the sector where being right is worth the most money. The marginal value of an accurate answer in an M&A process or in a distressed debt situation is enormous. The cost of an inaccurate answer is also enormous. There is no other professional domain where the reward for getting things exactly right, and the penalty for getting things almost right but slightly wrong, are so consistently, measurably high. That means finance will consume AI capabilities faster than any other vertical. Not because finance professionals are technology enthusiasts, but because the economics of precision are simply that extreme.
7. Conclusion
The vertical SaaS reckoning is not about all vertical software dying. It is about the market finally distinguishing between companies that own something genuinely scarce and companies that were just middlemen with a pleasant user interface sitting on top of publicly accessible data. The selloff is structurally justified but temporally exaggerated. Multi-year enterprise contracts, procurement cycles, and regulatory inertia mean the actual revenue impact lags the multiple compression by years.
"The SaaS era is not ending. The easy SaaS era is."
The real threat is a pincer: AI-native startups attacking from below with 10x cost efficiency, and horizontal platforms going vertical from above with unlimited resources. The companies caught in between—too slow to adopt AI, too shallow on data and process depth to defend their position—are the ones that do not make it.
But the companies that do the grinding institutional work—learning each firm, each team, each desk's specific version of a workflow and encoding that understanding into software that runs reliably at scale—those companies are going to build something genuinely massive. Not because they won a feature race or threw more intelligence at the problem, but because they did the process engineering that every great vertical software empire was built on. And in a world where intelligence itself becomes a commodity, process engineering is the last defensible advantage.
Everyone is looking at the $285 billion wipeout and seeing destruction. I see a transfer. That value did not vanish. It is moving—from companies that captured value by being the middleman between humans and their tools, to companies that capture value through execution, data, and infrastructure. If you are building in this space right now, the worst thing you can do is panic. The second worst thing is keep building like it is 2019. The best thing you can do is understand where the value is moving and go stand where it is landing.