The Age of Speculative Bets: How Legal AI Is Hedging at Every Layer
By Anna Guo

You’ve probably seen the flurry of legal AI news over the past few weeks.
- Freshfields announced a firmwide Claude rollout and an Anthropic co-development partnership, while simultaneously reporting 5,000+ professionals already using Gemini-powered tools built with Google Cloud for due diligence, case management, and multi-jurisdictional analysis.
- Anthropic launched deeper product capabilities, including Claude for Word and agentic workflows through Claude.
- Thomson Reuters is advancing CoCounsel Legal using Anthropic’s latest capabilities.
- Legora acquired Qura, an AI-native legal research startup, to strengthen its position in structured legal data.
It looks like chaos.
It isn’t.

The 5 Layers of the Legal AI Ecosystem
Across the legal AI stack, every serious player is building optionality: placing multiple bets while avoiding being locked into the wrong one.
Layer 1: Infrastructure
The hyperscalers capture economics regardless of which model wins on top of them. AWS hosts Claude. Azure hosts OpenAI. Google Cloud hosts both Gemini and Claude on Vertex. The infrastructure layer is the only layer in the stack where "let the customer choose" is itself the winning strategy.
Take Google. It has:
- GCP and Vertex at the infrastructure layer, hosting Anthropic's Claude alongside its own models
- Gemini at the foundational model layer, together with a stake in Anthropic
- Workspace at the knowledge layer, embedded in the daily document workflow of most in-house legal teams.
- Gemini at the application layer
- Lawhive, the AI-native legal services firm Google Ventures has backed, at the legal services layer

5 layers, 5 plays, all running in parallel.
Hyperscalers don't need to bet on a single model winning. They are paid by anyone running models on their infrastructure. The bet is that compute consumption keeps growing.
Layer 2: Foundational models
The model labs are not sitting at the bottom of the stack waiting to be wrapped either.
They are running multiple plays simultaneously, hedging on whether value will accrue to general-purpose models, vertical applications, agentic platforms, or direct partnerships with elite firms.
Anthropic is the clearest example. Since February, the company has launched plays that compete with each other.
- Claude Legal plugin: a vertical legal offering going direct into Harvey, CoCounsel, and Legora territory.
- Claude Marketplace: an enterprise marketplace where partners, including Harvey, plug into Claude.
- Claude for Word: a native Word add-in delivered via Microsoft AppSource, with legal contract review as the first listed use case, going direct into Spellbook and SimpleDocs territory.
- Cowork: Anthropic's agentic platform, competing with the workflow products of every legal AI vendor.
- Freshfields co-development: a direct law-firm partnership building AI-native legal workflows, bypassing the application vendor layer entirely.

These plays are not aligned. Claude Legal and Claude for Word undercut the Marketplace partner ecosystem. Cowork competes with the workflow products of vendors plugging into Marketplace.
Anthropic itself probably does not know whether the user-facing legal tool will look more like a plugin, a vertical app, an agentic platform, a Word add-in, or a co-developed law-firm-branded product. So it is building all of them.
Layer 3: Knowledge
The hedge at the knowledge layer isn't which model to run. It's whether to stay a content provider or move up the stack into applications.
- Thomson Reuters's answer is CoCounsel, rebuilt on Claude in Amazon Bedrock: owning the workflow tool so the value of authoritative content doesn't get captured by whoever wraps it.
- Lexis's answer is Protégé, with Anthropic's legal plugin integrated within three weeks of launch.
Both are betting that content moats hold, but only if they also control the interface sitting on top.
Layer 4: Applications
Harvey's hedging is the clearest single-vendor case study.
12 months ago, it was OpenAI-only, BigLaw-only, legal-only, with single-platform distribution.
Today: it routes across multiple models; serves the majority of the AmLaw 100 plus a fast-growing mid-market book; co-builds Tax AI Assistant with PwC, expanding to 25+ territory models tailored to specific jurisdictions; and is accessible through its own platform and from inside Claude as a Marketplace connector.
Legora is doing the same.
They recently acquired Qura, a Stockholm-based pioneer in AI-native legal research whose product is live across 27 jurisdictions in competition law.
This is a knowledge-layer bet. Legora is buying its way into a data moat to compete with TR and Lexis on content, not just on workflow.
Layer 5: Services
The services layer is structurally different from every other layer in the stack. The moat is regulatory, not technical.
Only one category of actor can occupy it: regulated law firms. Everyone else is structurally locked out. What the elite firms are starting to understand is that this alone is not enough.
- A&O Shearman is the clearest example. ContractMatrix, built with Harvey and Microsoft, is used daily by around 2,000 A&O Shearman lawyers and licensed to more than 40 large organisations, including life sciences companies, financial institutions, sovereign wealth funds, and tech companies. A&O has also rolled out Harvey-built agents focused on antitrust filing analysis, cybersecurity, fund formation, and loan review, available to corporations, law firms, and financial services via subscription or usage-based fees, with A&O Shearman sharing in the software revenue.
- Freshfields is doing the same with Anthropic. The firm contributes process expertise. Anthropic builds the applications and sells them to other law firms.
The costs of hedging
Hedging is rational, but it is not free. Three costs are worth naming explicitly.
- Operational complexity. Running multiple foundational models, multiple applications, and multiple deployment platforms requires infrastructure that most legal teams do not have. Freshfields built an internal AI Academy with 260 in-house "AI Champions" to train the firm across its multi-vendor stack. That is a meaningful hidden cost. Most in-house teams of 5 to 50 people cannot replicate that operating model.
- Integration debt. The more tools running in parallel, the more integration surface area. Every additional tool added to the panel adds new APIs to maintain, new permission models to reconcile, new data flows to govern, and new failure modes to debug. Tool sprawl does not just multiply licence fees. It multiplies the work of keeping the stack coherent.
- Evaluation paralysis. When everything is potentially worth evaluating, nothing gets evaluated properly. Procurement cycles stretch. Internal stakeholders disagree about priorities. Pilots run in parallel without comparable evaluation criteria. The result is often a panel selected by inertia rather than by deliberate choice.
3 principles for hedging intelligently
If hedging is rational but expensive, the question is how to hedge well. 3 principles for legal teams making procurement decisions in this market:
- Buy for switchability. Data portability is the new evaluation criterion. The right question to ask every vendor is: "If we leave you in 18 months, what do we get to keep?" If the answer is "your raw documents," the answer is incomplete. You need to keep the prompts, the playbooks, the templates, the training data, and the structured workflows you have built inside the tool. If you cannot export these in a useful format, you do not have a hedge. You have a hostage.
- Do not hedge layers that do not need hedging. A panel of 4 contract review tools is not optionality. It is sprawl. The hedge is at the model layer. The application layer should be lean enough that your team can actually use the tools well.
- Reassess every 6 months, not every 3 years. The market moves quarterly. Build 6-month review cycles into your procurement process. Define performance benchmarks that trigger reassessment if missed. Negotiate contract terms that allow exit on those benchmarks. A 3-year contract without exit clauses is the opposite of a hedge. It is a gamble.
The Legal AI Evaluation Framework and Toolkit at legalbenchmarks.ai is built around exactly this logic: pre-demo research, demo scorecards, and pilot scorecards designed to surface the right questions on switchability, integration, and roadmap alignment before the contract gets signed.
What this all means
The hedging across every layer of the legal AI ecosystem right now is rational. Nobody knows where durable value sits, so everybody is preserving optionality.
But hedging buys time. It is not a strategy in itself. The window to build genuine differentiation above the model layer is closing faster than it looked 12 months ago. When buyers see what actually compounds value, consolidation will be fast and the cost of being on the wrong side of that consolidation will be steep.
About the Author
Anna Guo
Anna is the founder of Legal Benchmarks.