
SaaSvival: AI Disrupts Everything But Destroys Less

Memo on AI-driven SaaS disruption covering what gets commoditized, what survives, and how to trade the divergence.

On February 3, 2026, Anthropic launched Cowork and $285 billion in software market cap vanished in a single session. The consensus snapped to a simple conclusion: AI eats SaaS. But the consensus is almost always wrong about the "how" even when it's right about the "what."


The disruption is undeniable. Horizontal workflow tools with no proprietary data, no regulatory lock-in, and no network effects are in freefall. Asana is at all-time lows. monday.com lost 70% of its value. Some companies really are getting absorbed by the foundation models themselves.


But the bear case treats the entire sector as one trade, and the data says otherwise. Goldman Sachs' long/short spread for AI-exposed software hit 63% in twelve months. The moated names keep beating estimates while the market prices them at 22x forward earnings, the lowest since 2014. The question is whether those earnings hold, and for the companies with real defensibility, there's a growing case that they do.


What's missing from the conversation is a clear-eyed view of how the architecture evolves to resolve competing interests. No vendor will share proprietary data through a competitor's AI infrastructure, and no AI provider can serve as a neutral broker. The ecosystem needs a clearinghouse, a governed trust layer modeled on financial market infrastructure, before any of this reaches equilibrium. The companies that plug into that structure as component suppliers survive; those that can't plug in get eaten.


This piece synthesizes three original Terminal X research reports covering 214 cited sources across equity, credit, and portfolio strategy to map the full picture.


In this article:


  • The moats that determine which SaaS companies survive and which get displaced
  • How the market is already pricing this re-sorting, and where the disconnect creates entry points
  • Why the disruption timeline is years, not months, and what gates the speed
  • The clearinghouse thesis: why the ecosystem needs neutral governance infrastructure before it resolves
  • A full equity long/short framework with 43 longs and 42 shorts across 99 sources
  • How AI disruption transmits into credit markets through CLOs, BDCs, and the Oracle fallen angel risk
  • Portfolio construction: how to express the long through equity, the short through credit, and hedge with HALO and geographic diversification

$285 Billion Disappeared in a Day

On February 3, 2026, Anthropic launched Cowork. Within hours, $285 billion in market value vanished. By close, the iShares Software ETF (IGV) had posted its worst session since 2008, and names across the sector were down double digits. Traders at Jefferies termed it the "SaaSpocalypse." Over the following days and weeks, the market seemed to be saying not just that AI would disrupt the space, but that Anthropic and OpenAI wouldn't leave much for anyone else.

iShares Expanded Tech-Software Sector ETF (IGV) falls 17% YTD, underperforming the S&P 500 and Nasdaq amid a broader software sector correction, down 26% from 2025 highs.

Software earnings were still growing double digits with margins at record levels, yet the sector's forward P/E fell to 22x, its lowest since 2014. The price action and the fundamentals were telling two completely different stories. So which one is right?


The bear case that AI eats all of SaaS is lazy. But the bull case that AI makes every incumbent more valuable, the “AI bubble” that drove multiples higher through early 2025, is equally wrong. Both assume the disruption is uniform. The truth is more specific: some companies die, some adapt or reimagine themselves, and some end up more entrenched than they were before. The sorting is already underway.


The disruption cuts along lines that aren’t obvious from the headlines: entrenched interests, human behavior, precedents like the iPhone era, and how companies act to protect their data. The deeper question is what kind of architecture has to exist before any of this resolves, and who controls it.

What Dies, What Survives, and Why

Three types of competitive advantage hold up against AI disruption, and everything else is exposed:

  • Proprietary data that can’t be scraped or replicated
  • Regulatory lock-in where compliance requirements create years of switching costs
  • Network effects where the product gets more valuable as more people use it


If a company doesn’t have at least one of these, the question isn’t whether AI disrupts it but when.

AI exposure vs leveraged loan performance: higher automation exposure correlates with weaker YTD price performance across sectors.

CrowdStrike processes a trillion security events per day from physical sensors that can’t be replicated by a model. Veeva’s FDA compliance infrastructure takes 18 months to implement, and the regulatory certification itself is the switching cost. Bloomberg’s IB chat is Wall Street’s default messaging layer because everyone is already on it, and that network effect has nothing to do with the quality of the chat software.


A new entrant could rebuild any of these interfaces tomorrow, and AI makes that easier every quarter. But CrowdStrike’s moat is a sensor network generating proprietary telemetry at scale. Veeva’s is a compliance trail that regulators already trust. Bloomberg’s is the fact that the other side of every trade is already logged in.


In regulated environments, the AI also can’t just access this data and say “trust me.” It has to prove which sources it used, show that it was authorized to see them, and maintain an audit trail a regulator can review. That governance requirement reinforces the incumbents, because they’re the ones who can grant or deny access on terms that satisfy a compliance officer.


The consensus bear case is simple: moats built on the interface collapse. UI-only products, seat-based pricing for workflows an agent can automate, feature bundles where users only need a fraction of what they're paying for: all of these are vulnerable. If the primary value proposition is "we organized your screen nicely," that disappears. The AI can organize the screen itself, dynamically, per the user's exact preferences instead of a one-size-fits-all model. There's truth in that framing, but it overstates how quickly users actually abandon tools they already trust and are comfortable paying for.


Opportunity cost is the part the bears keep missing. Replacing one SaaS tool is a project. Replacing a hundred is a full-time engineering operation. Race-to-the-bottom pricing doesn't help if the product already costs $120 a year, because nobody is allocating engineering resources to save ten dollars a month. Even among new entrants, a kid in Bengaluru spinning up a Calendly clone on Claude Code can undercut on price. But new entrants still have to offer something meaningfully better, because it's just not worth saving $3 on something that takes even 10 minutes to switch to. Most people can't be bothered.


Muscle memory is the most contentious part of this argument. It clearly matters, though the pushback is understandable. Agents don’t feel switching costs, but in the end agents aren’t the ones paying to switch. People still need to see, verify, and edit what the AI inferred, and they do that through familiar interfaces. The trust relationship between a user and a specific visual representation of their own data is real, and replicating it takes time and effort that the displacement models aren’t accounting for. Those interfaces can be rebuilt, but a clone at a lower price may not be enough to pull someone away from what they already use every day.


However, the iPhone didn't kill the iPod by being a cheaper alternative. It compressed the tech stack. That's how the moatless SaaS names die: scheduling becomes a feature inside something bigger, and one day you realize you haven't opened Calendly in six months. But will users really abandon the interfaces they're comfortable with just because Claude can now output an interactive table? And for the companies with other moats, where regulation, network effects, proprietary data pipelines, or deep institutional trust create real switching costs, the disruption surely looks different. All of those companies will get courted, and the terms on which they share data without losing control of it will define the architecture that supplants what we have now.

The Market Is Already Pricing Most of This

The sorting isn’t a prediction. It’s already showing up in prices, and the divergence is accelerating.


Goldman Sachs created two thematic baskets for AI-exposed software: GSTMTSOL on the long side and GSTMTSOS on the short side. The spread between them hit 63% in twelve months. Revenue estimates for the long basket have roughly doubled since 2023 while growth estimates for the short basket have flatlined. The market is actively separating winners from losers in real time.
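
For concreteness, here is a minimal sketch of how a long/short basket spread like that is computed. The return paths below are invented stand-ins; the actual GSTMTSOL and GSTMTSOS constituents, weights, and data are proprietary to Goldman.

```python
# Minimal sketch of a long/short thematic basket spread. The return
# paths below are invented stand-ins; the real GSTMTSOL / GSTMTSOS
# constituents, weights, and data are proprietary to Goldman Sachs.

def cumulative_return(daily_returns):
    """Compound simple daily returns into a total period return."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    return total - 1.0

long_basket = [0.0008] * 252    # winners compounding: ~+22% over a year
short_basket = [-0.0013] * 252  # losers bleeding: ~-28% over a year

spread = cumulative_return(long_basket) - cumulative_return(short_basket)
print(f"12-month long/short spread: {spread:.0%}")  # ~50% on these inputs
```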


The HALO framework (Heavy Assets, Low Obsolescence) confirms the same pattern from a different angle. Capital-intensive companies with tangible infrastructure (utilities, energy, basic resources, telecoms, and industrials like transmission grids, pipelines, and long-cycle equipment) have outperformed capital-light peers by about 35% since early 2025. But when capital rotates out of software, it doesn’t all leave the sector. Some of it flows into the companies that own real infrastructure, while the rest moves to other stocks, fixed income, or sits on the sidelines.


But that rotation carries outsized weight because of where IT sits in the index. Information technology now represents roughly 40% of U.S. equity market cap, the highest concentration since the dot-com bubble and nearly double where it sat a decade ago. Software alone climbed from about 4% of the S&P 500 in 2015 to 12% at its 2025 peak before correcting to around 9%. That kind of concentration is historically unprecedented, which is what the “AI bubble” doomsayers were warning about.

TMT dominates S&P 500 sector weight, while capital-heavy sectors remain underrepresented.

The story is more varied than it appears, but even the moated names have been hit harder than most expected. CrowdStrike is down roughly 32% year to date, with Veeva and Intuit each off about 35%. Those are steep drawdowns, but the horizontal workflow tools are in freefall: Asana is at all-time lows around $6, monday.com has lost nearly 70% from its 52-week highs, and ZoomInfo (now trading as GTM) is down about 45% over the past twelve months. The sector’s forward P/E sits at 22x, the lowest since 2014, but consensus two-year forward earnings have climbed about 35% since late 2024. Stock prices over that same period are down about 20%.

Software stock performance lags rising earnings expectations, highlighting a growing valuation disconnect.

That gap closes one of two ways: either earnings estimates come down to meet the prices, or prices catch up to the earnings. If the next few quarters come in anywhere near current consensus, the companies still posting double-digit growth at 22x forward earnings will start to look meaningfully cheap. The selloff priced in a structural threat to the entire category, but defensible names keep beating estimates, and that disconnect creates a real entry window for anyone willing to sort through the wreckage.
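
One stylized way to frame the arithmetic, using only the figures above (earnings up roughly 35%, prices down roughly 20%, a 22x multiple today):

```python
# Illustrative arithmetic for the price/earnings disconnect, using only
# the index-level figures cited above. The starting point is stylized.

price_change = -0.20      # software prices since late 2024
earnings_change = 0.35    # consensus two-year forward earnings since late 2024
current_pe = 22.0         # today's forward P/E

# P/E scales with price and inversely with earnings, so the implied
# multiple before the divergence was:
prior_pe = current_pe * (1 + earnings_change) / (1 + price_change)
print(f"Implied forward P/E in late 2024: {prior_pe:.1f}x")  # ~37x

# The gap closes one of two ways:
price_upside = prior_pe / current_pe - 1          # prices re-rate up
estimate_cut = 1 - current_pe / prior_pe          # or estimates come down
print(f"Re-rating back implies ~{price_upside:.0%} price upside")       # ~69%
print(f"Or estimates must fall ~{estimate_cut:.0%} to justify prices")  # ~41%
```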


The bear case assumes AI shrinks the software market, but the data says otherwise.

Goldman Sachs projects the combined software market growing 20-45% by 2030, with the majority of that growth coming from AI agent spending that barely exists today. A call center case study showed 30% operating cost reduction alongside a 130% increase in software budget. The spend doesn’t just flow to OpenAI and Anthropic, it flows through the entire stack. The pie is still getting bigger for everyone who survives.
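
To see how a 30% cost reduction and a 130% software budget increase coexist, it helps to remember that software is a small slice of a call center's cost base. A quick illustration with invented proportions, not case-study data:

```python
# How a 30% operating cost reduction can coexist with a 130% software
# budget increase: software is a small slice of total operating cost.
# The input proportions are illustrative assumptions, not case-study data.

total_cost = 100.0          # index the call center's operating cost to 100
software_share = 0.05       # assume software is 5% of cost pre-AI
software = total_cost * software_share          # 5.0
labor_and_other = total_cost - software         # 95.0

new_software = software * 2.30                  # +130% software budget -> 11.5
new_total = total_cost * 0.70                   # -30% total cost -> 70.0
new_labor_and_other = new_total - new_software  # 58.5

print(f"Software spend: {software:.1f} -> {new_software:.1f}")
print(f"Labor & other:  {labor_and_other:.1f} -> {new_labor_and_other:.1f}")
# The software line more than doubles while total cost falls 30%,
# because the automation savings land in the much larger labor line.
```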

Why the Timeline Is Slower Than Everyone Thinks

Technology moves fast, human trust moves slow, and regulation moves slowest. The disruption timeline is gated by whichever of those curves is furthest behind, and right now the gap between what AI can do and what institutions will allow it to do is wide.


According to MIT, 91% of bank boards have approved AI programs, but only 26% have built the capability to execute them, and 82% don’t measure ROI on what they’ve already deployed. The problem isn’t technology adoption, it’s institutional capability. And that isn’t something that changes overnight.


Decades of individually rational deferrals collectively created a foundation that can't support what AI requires. Roughly 80% of the work to get from pilot to production is organizational: data engineering, governance, and workflow integration. Most enterprises are still using AI as a glorified search engine.


AI also costs more than the disruption narrative implies. Venture capital is subsidizing inference pricing the same way it subsidized ride-sharing, and when that normalizes, a lot of the replacement math stops working. For most enterprise use cases, existing SaaS tools and human workflows are still the cheaper option.


The leap from that to autonomous agents running financial workflows requires a level of trust that barely exists. Even practitioners who use AI in their own investment process every day don’t trust it to make decisions with their own money. That’s a psychological gap, not just a technological one, and closing it may take a generational shift as well as continued technological advancement.


Regulation compounds the delay. The EU AI Act takes effect in August 2026. South Korea implemented its framework in January 2026. More than 72 countries have over 1,000 regulatory initiatives in progress. You might be able to use an AI agent to open a crypto wallet, but not to open a bank account.


Protocol fragmentation adds another layer. While MCP has won out as the standard for how models talk to their tools, A2A, ACP, and others are still competing to define how AI agents communicate with each other. In regulated industries, that kind of fragmentation freezes progress. It could change quickly, but it's still a drag on how and when this all hits the bottom line.


The thesis is directionally right, but early. The February selloff priced the threat immediately, but the reality unfolds over years, and is likely to take some unexpected turns before it reaches a new equilibrium.

Which AI SaaS Survives as Intelligence Commoditizes

Most of the conversation about AI disrupting software treats the technology as a single thing: a black box that does stuff. The architecture is more complicated than that, and understanding the plumbing matters because it explains why governance and trust infrastructure end up being the real bottleneck.


Outside of the foundation models themselves, structured AI agent architecture consists of multiple specialized models connected to different data sources, passing outputs back and forth in loops, often with humans at checkpoints along the way. For a self-driving car, the intelligence isn't a wrapper around a camera; it's the architecture connecting dozens of sensors, models, and decision systems into something that can navigate a road. The model is one component. The orchestration, data routing, identity management, and audit logging around it are what make it usable in a business context.
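
A toy version of that loop, with hypothetical component names, looks something like this:

```python
# Toy version of the loop described above: specialized components pass
# outputs back and forth, and a human checkpoint gates the only step
# with side effects. All component names are hypothetical.

def retrieve(query: str) -> str:          # data-routing component
    return f"documents matching '{query}'"

def draft(evidence: str) -> str:          # drafting/reasoning model
    return f"proposed action based on [{evidence}]"

def human_review(proposal: str) -> bool:  # checkpoint in the loop
    print(f"REVIEW REQUIRED: {proposal}")
    return True                           # stand-in for an approve decision

def execute(proposal: str) -> None:       # the only irreversible step
    print(f"executing: {proposal}")

evidence = retrieve("Q1 renewal contracts")
proposal = draft(evidence)
if human_review(proposal):                # nothing executes without sign-off
    execute(proposal)
```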


In a mission-critical environment, “the model recommended this” isn’t good enough. Users need to see which data informed the output, what logic was applied, and how confident the system is in its answer. Banks won’t let an AI agent touch a trading workflow if they can’t explain to a regulator exactly what it did and why. Hospitals won’t let an agent modify a treatment plan if the reasoning is opaque. Regulated industries where accuracy is paramount cannot tolerate a black box that can’t be pried open. Agentic architecture must show its steps and cite its sources. Without that transparency, adoption stalls regardless of how good the model gets.


That demand for explainability creates a structural need for governance infrastructure that sits between the AI and the data it operates on: entitlement controls, audit trails, and containment that ensures sensitive data never reaches a model that isn't authorized to see it. As things stand, this can't be done inside the model itself, because the model is probabilistic. A 99.9% compliance rate means a violation every thousand queries, and in regulated environments that's a lawsuit.
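
A minimal sketch of what that gate might look like. Everything here, the agent identities, the policy table, the audit sink, is hypothetical; the point is that the check is deterministic and fires before the model sees anything:

```python
# Minimal sketch of a governance layer that sits in front of the model:
# deterministic entitlement checks and an audit trail, enforced *before*
# any data reaches the model. All names and policies here are hypothetical.

import json
import time

# Which data sources each agent identity is entitled to see.
ENTITLEMENTS = {
    "agent:research-assistant": {"public_filings", "internal_notes"},
    "agent:trading-workflow": {"public_filings"},
}

AUDIT_LOG = []  # in production this would be an append-only, signed store

class EntitlementError(PermissionError):
    pass

def governed_query(agent_id: str, sources: set[str], prompt: str, model_call):
    """Check entitlements, record an audit event, then invoke the model.

    The check is a hard gate, not a probabilistic one: an unauthorized
    source raises before the model ever sees the request.
    """
    allowed = ENTITLEMENTS.get(agent_id, set())
    denied = sources - allowed
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "sources_requested": sorted(sources),
        "authorized": not denied,
    })
    if denied:
        raise EntitlementError(f"{agent_id} not entitled to: {sorted(denied)}")
    # Only now does the prompt (plus authorized data) reach the model.
    return model_call(prompt, sorted(sources))

# Usage with a stub model:
stub_model = lambda prompt, sources: f"answer derived from {sources}"
print(governed_query("agent:trading-workflow", {"public_filings"}, "summarize", stub_model))
print(json.dumps(AUDIT_LOG[-1]))
```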


As intelligence costs normalize and the subsidy era ends, open-source and on-prem inference will keep commoditizing the model layer. Frontier models can still charge a premium for advanced reasoning, but the long-term value accrues to whoever controls the data, the governance, and the trust layer around it.


That dynamic only intensifies as the models improve from here. Foundation models get more capable every quarter. Tasks that required multi-agent orchestration a year ago can now be handled with a single prompt. That compression will squeeze out smaller players and is likely to continue for the foreseeable future. It is likely that most of what’s being built in the orchestration layer today will eventually get absorbed. What doesn’t get absorbed is the accumulated knowledge of how specific clients think, the hyper-specific overlapping nuances of how particular industry standards work in concert, and how companies within those industries actually make decisions. The models commoditize, even as the client context compounds.

Tech’s share of US equities surges in innovation cycles, reaching new highs with the AI-driven rally.

The Clearinghouse Model: Why the Ecosystem Needs a Neutral Trust Layer

If trust is the bottleneck and governance has to happen before data reaches the model, someone has to operate that governance layer. The AI provider has a commercial incentive to route everything through its own infrastructure, and any single vendor offering to host the layer would be asking competitors to hand over their data advantage. Neither can serve as a neutral operator, which is why the role falls to a third party.


Vendors know this. And the natural inclination is to get defensive, locking down APIs, restricting what external AI systems can access, and sharing as little as possible. If they open everything up to an orchestration layer run by Anthropic or Google, they’re handing over the last competitive advantage they have. The data flows through someone else’s infrastructure, the user relationship migrates to someone else’s interface, and the SaaS company becomes a database that can be swapped out at any time.


But hoarding is also a dead end. A Workday that talks to Salesforce that talks to ServiceNow through a governed AI layer is exponentially more useful than any of them in isolation. The vendors know this too. They just won’t act to fix it if the orchestration layer is owned by a competitor.


This is the prisoner’s dilemma at the center of the AI transition. Everyone benefits from interoperability, but nobody moves first because the intermediary has competing interests. Bloomberg will never pipe proprietary data through Anthropic’s servers because the risk of training exposure, subpoena liability, and probabilistic leakage is too high.


The open-source movement offers an obvious counterargument: if models are commoditizing anyway, why not open-source the entire stack and let interoperability happen organically? For the model layer, that logic makes sense. Open-source inference is already viable for structured tasks, and the cost curve keeps compressing.


But the data layer is a fundamentally different business. FactSet, S&P Global, and Moody’s exist because they clean, structure, and maintain proprietary datasets that the market treats as sources of truth. Their revenue model depends on controlled access. Open-sourcing that data destroys the economic incentive to curate it in the first place. Newspapers that dropped their paywalls learned this the hard way: they wanted to democratize journalism, but they ended up defunding it. Data providers will participate in a governed structure that attributes revenue back to them, but they won’t participate in one that gives their secret sauce away for free.


The clearinghouse breaks the deadlock. The model is the same one that works in financial markets: ICE, LSEG, and CME operate as neutral infrastructure that no single participant controls. They guarantee data containment, enforce entitlement controls, maintain audit trails, and attribute revenue back to the data owner. Participants share more because the structure makes sharing safe.
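
The revenue-attribution piece is worth sketching, because it's what makes sharing rational for the data owners. The rates and names below are invented:

```python
# Sketch of the revenue-attribution piece of a clearinghouse: every
# governed query is metered against the data owners it touched, so
# sharing is compensated rather than cannibalized. Rates are invented.

from collections import defaultdict

RATE_CARD = {"bloomberg": 0.40, "factset": 0.25, "sp_global": 0.30}  # $/query

usage = defaultdict(int)

def metered_query(sources):
    for s in sources:
        usage[s] += 1  # the audit trail doubles as the billing record

metered_query(["bloomberg", "factset"])
metered_query(["bloomberg"])

for owner, count in usage.items():
    print(f"{owner}: {count} queries -> ${count * RATE_CARD[owner]:.2f} owed")
```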


This kind of infrastructure isn’t entirely hypothetical. Here.io (formerly OpenFin) already operates as an enterprise browser container deployed across the majority of global financial institutions, enabling cross-application context sharing through the FDC3 protocol. The plumbing for governed interoperability exists. What’s missing are the intelligence layers sitting on top of it.


The ecosystem that forms around this kind of clearinghouse looks like airline alliances. Star Alliance works because no single airline controls it, and that neutrality is why competitors are willing to join. Domain-specific alliances will form around regulatory environments: financial services, healthcare, legal, general productivity. SaaS companies become component suppliers within their alliance, contributing their UX, their data connections, and their domain expertise to a governed ecosystem that is collectively stronger than any of them individually. Specialized AI companies offer services that overlay entire stacks at once. Those who don’t join an alliance are flying solo without the route network.

Equity Deep Dive: Winners, Losers, and the 50/50 Layer

The clearinghouse thesis restructures how to think about the software long/short. The highest-conviction longs aren’t the companies with the best AI features, they’re the ones positioned as essential components inside governed ecosystems. They are the names any alliance would need regardless of which foundation model wins or how the frontier models advance.


Identity and orchestration infrastructure sit at the top. Okta and SailPoint handle identity governance for non-human agents, which is the new firewall for agentic workflows. Snowflake operates as a model-neutral data layer across 53 cloud regions, processing secure data products for 790 Forbes Global 2000 customers. Palantir’s Ontology framework maps siloed enterprise data into a single governed model, driving 137% year-over-year US commercial revenue growth. ServiceNow is the orchestration control tower, with audit trails and kill switches built into the workflow layer.


Systems of record form the next tier because switching them breaks compliance. Veeva in life sciences, where FDA certification creates years of switching costs. Procore in construction, with 2,700 enterprise customers above $100K in ARR and growing network effects. Workday in HR and finance, with 75 million users on the Flex Credits consumption model. Intuit in tax and accounting, with AI capabilities layered on top of proprietary tax and financial data that can’t be scraped.


Cybersecurity splits along the HALO framework. Companies with physical telemetry moats are defensible: CrowdStrike processes a trillion events per day from hardware sensors, and Palo Alto Networks ingests 15 petabytes of daily telemetry. These data streams come from physical infrastructure and can’t be replicated by a language model. Software-only security products face the same disruption pressure as any other software.


The shorts are names with little or no ecosystem role to play. ZoomInfo lacks lock-in, with growth decelerating to 1% and the worst net revenue retention in its category. monday.com has no proprietary data advantage, and net new enterprise logos fell from 20,000 to 5,000 in a single year. Asana is growing at 9% with NRR flat at 96%; anecdotally, its new AI features have been called more nuisance than breakthrough. Dropbox has been losing tens of thousands of paying users per quarter, with management guiding for further declines in Q1 2026. H&R Block does tax preparation that AI can handle natively. Robert Half runs a staffing business that AI agents replace directly, with contract revenue down 11% year over year. These are the solo airlines with no route network, operating in spaces where an alliance's AI partners can easily substitute for their front-ends.

Software valuations reset to ~22x NTM P/E, below ~51x Covid-era peak.

CRM is the textbook in-between case. Salesforce scaled its Agentforce platform to an $800 million ARR run rate, and that's real momentum. But front-office software is the most exposed category to AI agents, because the interface between a salesperson and their CRM is exactly the kind of workflow a language model can absorb. No top analyst has CRM as a conviction buy. The 50/50 names require the most careful monitoring because the thesis could break in either direction: the gains may outweigh the losses, or the reverse.

Software still trades at a premium to the S&P 500 despite the recent selloff.

Given the thesis above, Terminal X synthesized 99 sources across broker research, regulatory filings, earnings calls, and real-time market data, and produced reports detailing 43 longs and 42 shorts. This is the type of AI that's embedded in the decision process, adding depth and speed to human judgment rather than trying to replace it. The results are below, with the full reports available upon request.

Top 10 IGV holdings shed ~$800B in market cap YTD, led by major declines in mega-cap software names.
Top 10 longs vs shorts: AI-native leaders with strong moats vs commoditized, disruption-exposed laggards.

Full 43 longs / 42 shorts with 99 cited sources in the TX equity report, available upon request.

Credit Transmission: How AI Disruption Hits Debt Markets

While the SaaSpocalypse scenarios have mostly played out on the equity side, it's in credit where the doomsayers have been most vocal lately. The exposure here is concentrated, the mechanics are less forgiving, and much of the software debt sitting in the lowest-quality tiers has yet to be accurately marked to market.


Software is 26% of the CCC-rated leveraged loan universe, the lowest quality tier before default. Much of that debt is concentrated in CLOs (pools of corporate loans sliced into tranches for investors) and BDCs (public lending funds that make direct loans to private companies). There may be even more sitting in opaque non-bank lenders, but BDCs disclose publicly, so their data can be analyzed directly.

Software exposure is modest in IG and HY credit but significantly higher in leveraged loans.

The concentration within individual BDCs tells the story: Morgan Stanley Direct Lending holds 41% of its portfolio in software, Hercules Capital is at 35%, and Ares Capital is at 24%. US technology loans have returned negative 5.02% year to date compared to negative 0.46% for all loans. The average software loan price sits at 88.5 cents on the dollar.

Direct lending to software-oriented portfolio companies accounts for ~4% of firmwide base management fees on average.

The stress is no longer theoretical. In March 2026, a wave of redemption requests hit the largest private credit funds marketed to wealthy individuals. Apollo’s ADS BDC saw redemption requests reach 11.2% of outstanding shares and gated withdrawals at the 5% quarterly cap. Blackstone’s BCRED, the world’s largest private credit fund at $82 billion in assets, met record 7.9% redemptions by expanding a tender offer and having employees step in to cover the gap. Ares capped its Strategic Income Fund at 5% after requests hit 11.6%. BlackRock and Blue Owl are confronting similar pressure. These are the structural gates working as designed, but their activation confirms that the pain in software-heavy loan books is real and that retail investors are heading for the exits.


The exposure extends well beyond BDCs. Software represents roughly a third of all private equity deal activity over the past decade, and Scott Goodwin of credit investment firm Diameter Capital estimates an “AI risk factor” affects over half of all deals financed by private equity and private credit during that period. 


Nearly $4 trillion in private equity deals struck at peak valuations have proved hard to exit amid higher rates and geopolitical turmoil. John Beil, head of private equity at Partners Capital, predicts a wave of asset writedowns for software deals when first-quarter numbers are finalized in April. 


Banks have their own upstream exposure: Moody’s estimates that banks have lent $300 billion to private credit funds and another $285 billion to private equity funds, with the US Treasury’s Office of Financial Research putting total bank lending to private credit as high as $540 billion.


UBS chair Colm Kelleher has warned that “ratings shopping,” where firms seek better ratings from smaller agencies than established groups would provide, could create a “systemic risk” to global finance. The Bank of England is conducting a stress test this year specifically to examine whether a private credit crisis could spill into the wider financial system.


Lenders are asking the same moat questions equity investors are asking. The Qualtrics $5.3 billion refinancing deal getting pulled is proof of selectivity, not systemic panic. Companies with defensible positions can still access the debt markets. Companies without them are finding the window closed.


The tail risk is structural. CLOs enforce a 7.5% cap on CCC-rated loans within the pool. When a software loan gets downgraded past that threshold, the CLO manager has to sell regardless of what the loan is actually worth. If enough downgrades hit in a compressed window, the result is a wave of forced sellers with no natural buyers, and prices overshoot where fundamentals would put them. That’s the mechanism that could turn an orderly sorting into something uglier.
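
The mechanics are simple enough to sketch. The 7.5% cap is the figure cited above; the pool composition and the downgrade wave are illustrative assumptions:

```python
# Stylized CLO CCC-cap mechanics. The 7.5% cap is the figure cited above;
# the pool composition and downgrade wave are illustrative assumptions.

CCC_CAP = 0.075

def excess_ccc(pool_value: float, ccc_value: float) -> float:
    """Par value of CCC loans above the cap that the manager must address.

    In practice the excess is typically carried at market value (pressuring
    overcollateralization tests) or sold; we model a forced sale.
    """
    return max(0.0, ccc_value - CCC_CAP * pool_value)

pool = 500_000_000           # $500M pool
ccc_before = 30_000_000      # 6.0% CCC: inside the cap, no action needed
downgrade_wave = 25_000_000  # software loans cut to CCC in a compressed window
ccc_after = ccc_before + downgrade_wave  # 11.0% of the pool

print(f"Excess before: ${excess_ccc(pool, ccc_before):,.0f}")  # $0
print(f"Excess after:  ${excess_ccc(pool, ccc_after):,.0f}")   # $17,500,000

# If many pools cross the threshold at once, that excess becomes
# simultaneous selling into a market with no natural buyers.
```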


Four years of refinancing runway keeps this orderly unless something external compresses the timeline. A recession, a rate spike, the Iran conflict driving oil prices higher, or a concentrated downgrade cycle could trigger the CLO cap simultaneously across multiple pools. Over 10% of private equity-owned companies are already choosing to increase their debt rather than make interest payments in cash, a practice known as payment-in-kind (PIK) that is now at a one-year high. That's how lenders quietly absorb stress without forcing a default. Without a macro catalyst, the credit market sorts through this gradually, name by name, moat by moat, mirroring the equity side on a longer timeline. That could turn the bears into the boy who cried wolf. But there's a lot of dry firewood, and the right spark could yet turn it into an inferno.


53 cited sources, including Goldman Sachs and Bloomberg, available in the full TX credit report upon request.


Portfolio Positioning: How to Play the Thesis

The equity and credit markets are telling the same story at different speeds, and there’s a trade to be had in that gap. The iShares Software ETF (IGV) is down roughly 25% year to date, with the pain concentrated in smaller names that lack the ecosystem defensibility of the mega-caps. Technology loans have returned negative 5% year to date, but that headline masks the lag: the average software loan still prices at 88.5 cents on the dollar despite the fundamental deterioration underneath. That gap between equity repricing and credit repricing creates a particular portfolio construction opportunity.


The long side is straightforward: own the winners through equity, where the repricing has already created entry points for names with real moats. The short side is where it gets more interesting. Single-name equity shorts carry unbounded upside risk and squeeze exposure, which makes them a difficult way to express the thesis. Credit default swaps let you buy protection on individual names, and the BDCs most concentrated in software debt (covered in the prior section) offer a publicly traded way to short the same exposure through equity. When the underlying loans reprice, BDC equity absorbs the loss first.


Software debt has no physical collateral, so recovery rates in an obsolescence scenario approach zero. B-rated technology loans already trade 140 basis points wider than their non-technology counterparts. And unlike equity shorts, the maturity wall is a forcing function with a date on it.


Oracle represents the biggest risk to the space overall. The company carries $124.4 billion in investment-grade debt sitting two notches above speculative grade, with a 3.27x debt-to-EBITDA ratio after its $50 billion financing plan partially eased investor concerns. But credit default swaps hit 198 basis points on March 28, a new all-time record surpassing even December 2008. The CDS briefly dropped to 152 bps after a strong earnings report on March 11, then ripped back to new highs within weeks. If Oracle gets downgraded, that’s roughly 8.5% of the entire $1.45 trillion US high-yield market arriving at once. Paramount fell into high yield with $12 billion in debt. Oracle would be roughly ten times that size.
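
The back-of-envelope math, using the figures above:

```python
# Back-of-envelope sizing of the Oracle fallen-angel scenario,
# using only the figures cited in the text.

oracle_debt = 124.4   # $B of investment-grade Oracle debt
hy_market = 1_450.0   # $B total US high-yield market
paramount = 12.0      # $B, the recent fallen-angel precedent

share = oracle_debt / hy_market
print(f"Oracle as a share of the HY market: {share:.1%}")  # ~8.6%
print(f"Multiple of the Paramount precedent: {oracle_debt / paramount:.1f}x")  # ~10.4x
```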


The downgrade doesn’t even have to happen for the risk to matter. The possibility alone reprices every software credit sitting near the investment-grade boundary.


The debate over whether this becomes systemic is live. Alan Schwartz, executive chair of Guggenheim Partners and the man who led Bear Stearns through its 2008 collapse, says there are “cracks in the foundation of the private debt market” and warns that “increased selling in illiquid assets that don’t have transparent valuations can cause significant spasms in financial markets.” 


Former Goldman Sachs CEO Lloyd Blankfein told the Financial Times that a liquidity mismatch has become more likely: "When something goes off you're going to find all the assets that have been carried at prices that can't be realised in the market." On the other side, Blackstone's Jon Gray says he has "never seen something so disconnected from reality in finance" as the comparison to 2008, pointing out that private market leverage ratios of one to two times are a fraction of the fifteen times that brought down the banks.


Both sides have a point. The leverage is lower, but the opacity is higher, the retail exposure is new, and the concentration in software makes this cycle’s risk profile different from anything that came before.


If the sorting takes longer than expected, the hedge is assets AI can’t touch. The HALO framework has worked. Utilities, energy, basic resources, telecoms, and industrials with transmission grids, pipelines, and long-cycle industrial capacity have outperformed capital-light software by 35% since early 2025. Goldman Sachs tracks this through its capital-intensive basket (GSSTCAPI). 


Geographic diversification adds a second buffer: non-US equity markets carry lower, more hardware-focused tech exposure. MSCI Asia ex-Japan is up over 28% year to date and MSCI Japan is up roughly 3%, while the S&P 500 is approximately flat over the same period.


Newspaper stocks lost 95% over five years and only bottomed when forward earnings estimates stabilized. Adobe dropped 25% during the 2012-2014 cloud transition before its P/E expanded from 21x to 139x as recurring revenue compounded. During the mobile era, disrupted stocks experienced a median 30% maximum drawdown, but survivors rallied 40%.

Historical tech transitions show deep drawdowns but strong recoveries, with AI SaaS still in early sorting phase.

Full transition analysis with recovery curves and survivorship metrics across 62 sources in the TX SaaS Portfolio Report, available upon request.


We’re three months into a transition that historically takes three to five years to resolve. The market priced the threat on day one, but for most names the actual estimate revisions haven’t started yet. That means the long entry points probably get better before they get worse, and the credit repricing is still ahead of us.

FAQ

Will AI completely replace enterprise software? No. Goldman Sachs projects the combined SaaS and agent TAM growing 20-45% by 2030. AI expands the software market while reshaping who captures value within it. Companies with proprietary data, regulatory lock-in, and network effects will adapt and likely grow. The ones built on interface and seat-based pricing face genuine displacement.


What does the software interface look like in five years? A hybrid of voice and text input coexisting with persistent visual components. AI will surface and arrange dashboard elements dynamically, but people still need familiar visual formats to verify what the AI inferred. Pure chat interfaces don’t work for repeated workflows, and pure legacy menus won’t survive either. The SaaS companies that designed the visual components people already trust will retain their role as long as they’re plugged into a governed ecosystem.


What does “governed ecosystem” actually mean? The AI transition has a prisoner’s dilemma at its center: every vendor benefits from interoperability, but nobody moves first because the intermediary has competing interests. No SaaS company will pipe proprietary data through an AI provider’s infrastructure, and no AI provider can serve as a neutral broker. The resolution looks like financial market infrastructure, a clearinghouse where a neutral third party enforces data containment and audit trails the same way ICE or LSEG does for capital markets. SaaS companies become component suppliers within that structure, contributing data and domain expertise to an ecosystem that’s collectively stronger than any of them individually.


Which SaaS companies are most at risk? Companies whose primary value is their interface and whose pricing is tied to human headcount. Horizontal workflow tools like task management, scheduling, and generic collaboration software are most exposed because AI agents can replicate those functions natively. The companies that survive will be the ones protected by at least one of three moats: regulatory entrenchment, proprietary data, or ecosystem lock-in. Companies without any of those get eaten by the foundation models themselves as they expand into the workflows those tools used to own.


Which AI companies will be best positioned as foundation models commoditize? The ones building deep domain mapping at structured nodes during this intermediate period. When the models become interchangeable, the company that owns the accumulated institutional context of a specific domain owns the relationship. Wrappers around foundation models will not survive the next generation of those same models.


Could this trigger a credit crisis? Not on its own, but the transmission channels are wider than most investors realize. Software is 26% of the lowest-rated loan universe and heavily concentrated in CLOs, BDCs, and increasingly in insurance portfolios. Banks have lent hundreds of billions upstream to private credit and PE funds. BDC redemption gates are already being triggered. Four years of refinancing runway keeps defaults orderly unless a macro shock compresses it, but the CLO CCC cap at 7.5% is the structural mechanism that could turn gradual sorting into forced selling if enough downgrades cluster together.


How should investors position? Own the survivors in equity, where the repricing has created entry points. Express the short thesis through credit, where software debt recovery rates approach zero in an obsolescence scenario and the forced repricing hasn’t started yet. Hedge timeline risk with HALO names and non-US equity markets with lower tech exposure.


This content is for informational purposes only. It does not constitute investment advice, a recommendation, or an offer to buy or sell any security. Terminal X is not a registered investment adviser or broker-dealer.


  • Authored by: Jacob Koenig, Former Head of Execution Services at Goldman Sachs
  • Powered by: Terminal X (AI Analyst for Investment Research & Report Automation)


Want to go deeper on any of this? Ask Terminal X