Applied AI for Enterpriseby Christophe Guerdoux
← All articles
AI RadarAI Strategy

Enterprise AI Radar Q1 2026

The first edition of the Enterprise AI Radar. 44 AI categories assessed across five strategic blocks. A structured view of what to standardise now, what to pilot, and where to hold. Built for enterprise leaders making AI architecture decisions at scale.

Christophe Guerdoux

Q1 2026 Baseline

Enterprise AI has moved past the proof-of-concept phase. The foundational infrastructure is largely in place. The intelligence layer is being actively built. The governance and control layer is still forming.

This first edition maps 44 AI categories across five strategic blocks, synthesised from 500+ signals collected through Q1 2026. Use it to prioritise: what to standardise, what to pilot, and where to hold.

How to read a category card

Foundation Models
Adopt85%

12 signals

Foundation ModelsAI capability assessed in the enterprise market
ADOPTEditorial recommendation for enterprise deployment
85%Estimated overall maturity level for Q1 2026
12 signalsNews, analyst and market signals processed to compute this score

AI Foundation

Foundation Models
Adopt85%

12 signals

LLM Providers
Adopt83%

10 signals

Model Hubs / Marketplaces
Trial61%

7 signals

Inference Providers
Trial64%

7 signals

Agent Platforms
Trial57%

9 signals

Fine-Tuning Platforms
Trial51%

6 signals

AI Data Layer

Data Lakes / Warehouses
Adopt86%

10 signals

ETL / ELT
Adopt80%

8 signals

Vector Databases
Trial65%

8 signals

RAG Infrastructure
Trial64%

9 signals

Knowledge Bases
Trial59%

7 signals

Data Quality
Trial65%

7 signals

Data Catalog / Lineage
Trial55%

6 signals

Feature Stores
Trial47%

5 signals

AI Enablement

APIs / Connectors
Adopt84%

8 signals

Middleware / iPaaS
Adopt78%

7 signals

Workflow Automation
Adopt69%

8 signals

MLOps
Adopt67%

8 signals

LLMOps
Trial54%

8 signals

CI/CD
Adopt81%

7 signals

Testing / Evaluation
Trial51%

7 signals

Deployment / Release Management
Trial62%

6 signals

Prompt Management
Assess40%

4 signals

Orchestration
Trial57%

8 signals

AI Trust & Control

AI Security
Trial59%

8 signals

Guardrails
Trial55%

7 signals

PII Protection
Adopt72%

8 signals

Access Control
Adopt88%

8 signals

Policy Enforcement
Trial55%

6 signals

Compliance
Adopt66%

9 signals

Governance
Trial57%

8 signals

Auditability
Trial48%

5 signals

Monitoring / Observability
Adopt68%

8 signals

Traceability
Assess44%

4 signals

Red-Teaming
Trial50%

6 signals

Model Risk Management
Assess40%

4 signals

AI Value Domains

Marketing & Sales
Adopt79%

10 signals

Customer Service
Adopt80%

10 signals

Operations & Supply Chain
Trial62%

7 signals

Finance & Risk
Trial64%

7 signals

HR & Workforce
Trial55%

6 signals

Product & R&D
Trial59%

8 signals

Legal & Compliance
Trial49%

6 signals

Digital Workplace / Employee AI
Adopt71%

10 signals


Executive Summary

The foundations are ready. Foundation models, APIs, CI/CD pipelines, and data warehouses have crossed the Adopt threshold. These are no longer experimental capabilities. Sixteen categories sit at Adopt. Enterprise procurement teams should be standardising them, not running new pilots.

The intelligence layer is being built. LLMOps, orchestration, agent platforms, RAG infrastructure: all in active enterprise deployment, but without standardised patterns yet. Twenty-five categories sit at Trial. These require structured programmes with clear success criteria and defined exit conditions.

The governance layer is the defining risk. AI Security, Governance, Auditability, and Red-Teaming all carry the highest control criticality in the radar. None are at Adopt. Enterprises are deploying AI faster than they are governing it. The EU AI Act compliance timeline is active. The tools are immature but the obligations are not.

Three capabilities to act on in Q2 2026. Agent Platforms are moving fast (Microsoft and Google platforms now in general availability). Governance programmes need to start before they are mandated. Testing and Evaluation is the most consequential gap in the Enablement block: enterprises are shipping LLM applications without systematic quality gates.


Five Questions for Your AI Steering Committee

Should we invest now, or wait for the market to mature?

Standardise immediately (at Adopt): Foundation Models, LLM Providers, APIs, CI/CD, Workflow Automation, Data Warehouses, PII Protection, Access Control, Compliance, Monitoring, Customer Service AI, Marketing AI, Digital Workplace. These have proven enterprise deployment patterns. The discussion is now about governance, not adoption.

Start structured pilots now (at Trial): Agent Platforms, RAG Infrastructure, LLMOps, Orchestration, Governance, Testing and Evaluation, AI Security, Finance AI, Operations AI. These have clear ROI potential but require defined success criteria and human oversight from day one.

Wait for consolidation (at Assess): Prompt Management, Traceability, Model Risk Management. Tools are fragmented and will consolidate over the next 6 to 12 months. Standardising today locks in the wrong architecture.

Which categories require board-level attention?

Seven categories in this edition carry the highest control criticality: AI Security, Guardrails, Governance, Auditability, Red-Teaming, Model Risk Management, and Traceability. Not one is at Adopt. The board question is not whether the tools are ready. It is whether governance programmes are running before the EU AI Act compliance window closes and before the next production incident.

Where is adoption running ahead of technology readiness?

Three visible mismatches in Q1 2026:

Customer Service and Marketing AI are at Adopt and deployed at scale. Testing and Evaluation (Trial) and Guardrails (Trial), which should govern those deployments, are not consistently in place. You are running live AI in production without systematic quality controls.

Digital Workplace AI (Microsoft 365 Copilot) is active in 70% of Fortune 500. Governance (Trial) and Auditability (Assess) are significantly behind. Most organisations cannot document what decisions their AI has influenced.

Compliance is at Adopt because of regulatory pressure. The technical infrastructure to operationalise it (Policy Enforcement, Auditability, Governance tooling) is at Trial. The obligation is ahead of the capability.

What should we pilot in the next six months?

Four highest-priority actions for Q2 2026:

  1. Agent Platforms. Run one controlled internal pilot with explicit human oversight checkpoints. Do not deploy customer-facing without a full evaluation cycle first.
  2. LLMOps. Implement before you scale your first LLM application beyond proof of concept. Costs, drift, and evaluation gaps compound quickly without it.
  3. Testing and Evaluation. Introduce evaluation gates in your LLM development workflow. Treat it as engineering infrastructure, not QA.
  4. Governance. Formalise your AI governance structure before your next deployment cycle. Retrofitting governance onto running AI systems is more expensive than building it in from the start.

Which categories are becoming table stakes vs. differentiators?

Table stakes today (standardise, do not seek to differentiate): Foundation Models, LLM Providers, APIs, CI/CD, Workflow Automation, Customer Service AI, Marketing AI, Digital Workplace productivity.

Where differentiation is still possible: RAG infrastructure quality and retrieval precision, LLMOps maturity and cost discipline, Agent Platform architecture design, and the credibility of your AI governance model. The organisations that will separate from peers are not the ones deploying the most AI. They are the ones deploying it more reliably, more safely, and at lower per-unit cost.


AI Foundation

AI Foundation — tier distribution

The AI Foundation block is split. The model and provider layer is at Adopt. The enabling layers (hubs, inference, agents, fine-tuning) are at Trial. Enterprises have clear access to frontier models but lack standardised patterns for managing model diversity, cost, and downstream customisation.

Foundation Models

Adopt

Foundation Models carry the highest strategic priority rating in the entire radar. This is not surprising: every enterprise AI programme depends on them. The category is at Adopt, driven by a mature and competitive vendor landscape (OpenAI, Anthropic, Google, Meta, Mistral) and broad enterprise deployment patterns.

The key risk here is velocity, not maturity. The category is evolving faster than enterprise procurement and governance cycles. Models deployed in Q1 2026 may be superseded before enterprise risk assessments are complete. CIOs should lock in contractual model stability SLAs alongside performance SLAs.

LLM Providers

Adopt

LLM Providers are the commercial face of foundation models: the API layer where enterprise contracts and SLAs live. The category is at Adopt. Azure OpenAI alone has penetrated 60% or more of Fortune 500, confirming this is now standard enterprise procurement. Data residency, GDPR compliance, and pricing structures are the primary enterprise negotiation axes.

Model Hubs / Marketplaces

Trial

At Trial, Model Hubs are a valuable tool for model discovery and evaluation but lack the enterprise governance controls (data residency, access audit, content filtering) needed for production standardisation. Hugging Face Hub passing 1M public models demonstrates ecosystem breadth, not enterprise readiness. The category is consolidating toward cloud-hosted marketplaces (AWS Bedrock Model Garden, Azure AI Foundry), which offer enterprise-grade access but reduce the differentiation of standalone hubs.

Inference Providers

Trial

Inference Providers are at Trial. Groq, Together AI, and Fireworks AI are driving significant price competition and latency improvements, making them attractive for cost-sensitive enterprise workloads. However, enterprise-grade SLAs, data residency, and regulatory compliance remain inconsistent across the market. Use for non-sensitive, cost-optimisation scenarios.

Agent Platforms

Trial

Agent Platforms are the highest-momentum category in the AI Foundation block. Strategic priority is high and enterprise urgency is genuine, but technology readiness is still maturing. The gap between aspiration and current reliability is large. Microsoft Copilot Studio reaching general availability is a significant milestone, but production use cases remain narrowly scoped. Non-deterministic behaviour, tool-use failures, and the absence of enterprise control frameworks make unguarded deployment high-risk. Run structured pilots in controlled environments. Do not deploy without explicit human oversight gates.

Fine-Tuning Platforms

Trial

Fine-Tuning is at Trial. It delivers measurable quality improvements for narrow tasks but carries meaningful data governance complexity: what data goes into fine-tuning, where it is stored, and who can access the resulting model. Managed services (OpenAI, Anyscale) lower the technical barrier but raise data residency concerns. Most enterprises should default to RAG and prompting before committing to fine-tuning.


AI Data Layer

AI Data Layer — tier distribution

The AI Data Layer is the most mature block overall, anchored by Adopt-tier Data Lakes/Warehouses and ETL/ELT that predate the GenAI wave. The six Trial categories (Vector Databases, RAG Infrastructure, Knowledge Bases, Data Quality, Data Catalog/Lineage, and Feature Stores) represent the AI-specific data layer that most enterprises are actively building in 2026.

Data Lakes / Warehouses

Adopt

The highest maturity rating in the data block. Data Lakes and Warehouses are the enterprise data substrate that pre-existed the AI wave, and they are now integrating AI natively (Snowflake Cortex, Databricks Lakehouse AI, BigQuery ML). No enterprise starting an AI programme should build an AI data layer before consolidating its data warehouse strategy.

ETL / ELT

Adopt

At Adopt, ETL/ELT is one of the most mature categories in the radar. The challenge for AI is not the tools — it is the discipline. AI models require fresh, clean, and contextually relevant data. Many enterprises have ETL pipelines but not AI-aware pipelines that manage freshness, semantic consistency, and structured versus unstructured data flows.

Vector Databases

Trial

Vector Databases are at Trial, just below the Adopt threshold. The category has high strategic relevance because it underpins RAG, semantic search, and recommendation. However, market consolidation is creating architectural uncertainty: native vector support in PostgreSQL (pgvector), Redis, and Elasticsearch threatens standalone vector databases. Evaluate both dedicated and native options before committing to a vector database strategy.

RAG Infrastructure

Trial

RAG Infrastructure has the highest strategic relevance in the data layer and is used in the majority of enterprise AI applications. It scores Trial because technology readiness is still inconsistent: retrieval quality, reranking, chunking strategies, and evaluation frameworks vary widely across implementations. The gap between a working RAG prototype and a production-grade RAG system is large. Invest in retrieval evaluation before scaling RAG deployments.

Knowledge Bases

Trial

Knowledge Bases are at Trial. The challenge is less technology and more content quality: enterprise knowledge repositories (SharePoint, Confluence, Notion) are often unstructured, outdated, and ungoverned. AI-powered retrieval makes the quality gap visible faster. The ROI of an AI knowledge assistant is directly proportional to the quality of the underlying knowledge asset. Treat knowledge curation as a prerequisite, not an afterthought.

Data Quality

Trial

Data Quality is at Trial. Gartner cites data quality issues in 60% of failed AI projects. Despite a mature tooling market, adoption is below the Adopt threshold because AI-specific data quality practices (semantic consistency, embedding drift, output quality feedback loops) are still developing. This is the most underinvested category relative to its impact on AI programme success.

Data Catalog / Lineage

Trial

At Trial with high control criticality, Data Catalog and Lineage is increasingly demanded by AI governance frameworks and the EU AI Act. Adoption is limited to governance-mature organisations. Tools exist (Alation, Collibra, Microsoft Purview) but enterprise-wide rollout remains partial. The EU AI Act's Article 12 documentation requirements will accelerate adoption for high-risk AI programmes.

Feature Stores

Trial

Feature Stores are at Trial, primarily relevant to enterprises with mature ML teams managing many models. Their role in the GenAI era is less clear: LLM applications rely more on context injection than on pre-computed features. The category is consolidating into data lakehouse platforms (Databricks Feature Store, SageMaker Feature Store). Organisations without dedicated ML teams can likely deprioritise standalone feature store investment.


AI Enablement

AI Enablement — tier distribution

AI Enablement is the most mature block for foundational infrastructure (APIs, CI/CD, Middleware all at Adopt) but has a significant immaturity gap in AI-specific tooling: LLMOps, Testing/Evaluation, and Orchestration are all at Trial, and Prompt Management is at Assess. The enterprise is good at connecting and delivering software. It is not yet good at managing LLM application lifecycles.

APIs / Connectors

Adopt

APIs and Connectors score among the highest technical maturity ratings in the radar. They are the foundational connectivity layer. No AI programme can operate without APIs connecting models to enterprise data and systems. This is not a category to evaluate. It is a category to standardise.

Middleware / iPaaS

Adopt

Middleware and iPaaS is at Adopt. Mature platforms (MuleSoft, Boomi, Azure Integration Services) are now embedding AI orchestration capabilities. For enterprises with an existing iPaaS investment, extending it to handle AI event flows is the path of least resistance.

Workflow Automation

Adopt

Workflow Automation is at Adopt. Microsoft Power Automate alone processes 1B or more AI actions per month across enterprise customers. AI-powered workflow automation is crossing into mainstream operational deployment. The category sits adjacent to Agent Platforms: the distinction is that workflow automation operates on predefined paths while agent platforms allow emergent task decomposition.

MLOps

Adopt

MLOps is at Adopt. The category has matured significantly over 2023 to 2025. MLflow, W&B, and SageMaker are now standard tools in ML-heavy organisations. The strategic challenge is that MLOps was designed for traditional ML models, not for LLM applications. LLMOps is the AI-native extension that covers the delta.

LLMOps

Trial

LLMOps is at Trial. The category is critical: as enterprises scale LLM applications, they need systematic management of prompts, outputs, costs, evaluations, and deployments. Tools (LangSmith, Arize Phoenix, Helicone) are maturing rapidly but the landscape is fragmented and standards are not yet established. Start trialling LLMOps tooling now before scaling LLM applications.

CI/CD

Adopt

CI/CD is universally adopted in any organisation with software engineering capability. The AI-specific challenge is extending CI/CD pipelines to include model evaluation gates, data validation checks, and prompt regression tests: capabilities that require LLMOps and Testing/Evaluation tooling to be in place first.

Testing / Evaluation

Trial

Testing and Evaluation is at Trial with limited enterprise adoption. This is the most consequential gap in the Enablement block. Organisations are deploying LLM applications without systematic evaluation frameworks. Tools exist (Braintrust, Ragas, Promptfoo) but lack enterprise standardisation. Weak evaluations create a direct path to production quality failures and reputational risk. Treat AI evals as a first-class engineering requirement, not an optional QA step.

Deployment / Release Management

Trial

At Trial, AI-specific deployment management (canary releases, shadow mode, A/B testing for models) is available but not yet standardised. Enterprises using cloud LLM APIs bypass much of this complexity, but self-hosted or fine-tuned models require production-grade release management. As model diversity increases, controlled release patterns will become essential.

Prompt Management

Assess

Prompt Management is the only Assess-tier category in AI Enablement. The category is fragmented and being absorbed by LLMOps platforms. Standalone prompt management tools (PromptLayer, Pezzo, Agenta) are useful but unlikely to remain independent categories. Enterprises should address prompt versioning and governance through their LLMOps platform of choice rather than deploying a separate tool.

Orchestration

Trial

Orchestration is at Trial with high strategic priority. As AI applications become more complex (multi-step, multi-model, multi-tool), orchestration frameworks become the connective tissue. LangGraph, Semantic Kernel, and LlamaIndex Workflows are maturing. The category is in a transitional phase: adoption is growing but production-grade standards are not yet established. Evaluate orchestration frameworks as a strategic architectural choice, not a commodity selection.


AI Trust & Control

AI Trust & Control — tier distribution

AI Trust and Control is the most strategically critical and operationally immature block in the radar. Four categories are at Adopt (Access Control, PII Protection, Compliance, Monitoring/Observability) but six are at Trial and two at Assess. The categories with the highest control criticality (Governance, Auditability, AI Security, Guardrails) are not yet at Adopt. This mismatch is the defining enterprise AI risk in 2026.

AI Security

Trial

AI Security is at Trial with maximum control criticality. The threat landscape is expanding fast: OWASP LLM Top 10 v1.1 formalises prompt injection as the top risk; MITRE ATLAS v4.2 adds 15 new GenAI-specific attack techniques. The tooling market (Protect AI, HiddenLayer, CalypsoAI) is nascent. Most enterprises are not running systematic AI security programmes. This is the most underprotected high-criticality category in the radar.

Guardrails

Trial

Guardrails are at Trial. Market maturity is low but control criticality is at maximum. No enterprise AI deployment should go to production without runtime output validation and content filtering. Managed options (AWS Bedrock Guardrails, Azure Content Safety) lower the barrier for basic deployments, but enterprise-grade configurable guardrails for complex multi-turn applications remain difficult to implement correctly.

PII Protection

Adopt

PII Protection is at Adopt. GDPR enforcement is creating non-negotiable demand. Microsoft Presidio, AWS Comprehend PII, and Nightfall provide mature detection and redaction capabilities. Every enterprise processing personal data through AI systems must have PII protection in place. This is a solved-enough problem for production deployment.

Access Control

Adopt

Access Control is at Adopt with the highest overall rating in the Trust and Control block. IAM is a solved enterprise problem; the AI-specific extension is governing agent identities and model endpoint permissions. Okta's AI Identity product extending IAM to AI agents reflects where the category is heading. Enterprises already managing strong IAM are well-positioned. The gap is in applying existing controls to AI-specific resources.

Policy Enforcement

Trial

Policy Enforcement is at Trial. Translating governance decisions into runtime technical controls is hard: it requires both a governance framework and a technical enforcement layer. OPA (Open Policy Agent) provides a mature general-purpose policy engine; AI-specific policy enforcement (usage caps, permitted use cases, prohibited topics) remains largely manual or point-solution-based.

Compliance

Adopt

Compliance is at Adopt. The EU AI Act enforcement timeline (February 2025 for prohibited practices, August 2025 for risk classification) has created urgent and non-negotiable compliance demand. The category crosses the Adopt threshold primarily because of regulatory pressure, not technology maturity. EU-operating enterprises with high-risk AI systems must be in active compliance programmes today.

Governance

Trial

Governance is at Trial with the highest strategic relevance and control criticality in the block, but low market maturity and adoption. Stanford AI Index 2025 finds that only 28% of large enterprises have a formal AI governance structure. McKinsey identifies governance as the top AI scaling bottleneck. This is the most critical and most underfunded capability in enterprise AI. A formal AI governance programme is not optional at enterprise scale. It is a prerequisite for scaling safely.

Auditability

Trial

Auditability is at Trial with among the lowest adoption in the Trust and Control block. EU AI Act Article 12 mandates complete logging for high-risk AI systems. Most enterprises cannot currently evidence their AI decisions to regulators. The tooling (MLflow, W&B audit trails, IBM OpenScale) provides components but not a complete auditability framework. Treat auditability as a compliance infrastructure investment, not an afterthought.

Monitoring / Observability

Adopt

Monitoring and Observability reaches Adopt. Datadog LLM Observability GA and Arize AI's $100M Series C signal that the category is crossing from specialist to standard enterprise tooling. Enterprises should have AI observability in production for any LLM application running at meaningful scale. This is no longer optional.

Traceability

Assess

Traceability is at Assess with the lowest adoption in the Trust and Control block. End-to-end tracing from AI output back to training data, retrieval context, and model version is extremely difficult today. OpenTelemetry's GenAI Semantic Conventions reaching a stable specification is an important foundation, but enterprise-grade AI traceability remains a 2027 capability for most organisations. Monitor the standard's evolution; do not invest heavily in bespoke traceability infrastructure today.

Red-Teaming

Trial

Red-Teaming is at Trial with limited adoption, used almost exclusively by security-conscious AI teams at large enterprises. The EU AI Act mandates adversarial testing for high-risk AI systems, which will create structural demand. Microsoft's open-sourcing of PyRIT and the maturation of Garak lower the barrier for enterprise practitioners. Any enterprise deploying AI in customer-facing, high-stakes, or regulated contexts should initiate red-teaming before production launch.

Model Risk Management

Assess

Model Risk Management is at Assess with the lowest market maturity in the entire radar. Established in financial services through SR 11-7 and TRIM, it is largely absent from other sectors. The Federal Reserve and OCC have issued guidance extending SR 11-7 principles to GenAI, which will drive adoption in FSI. Non-financial enterprises should monitor the frameworks developing here but need not invest immediately unless facing similar regulatory pressure.


AI Value Domains

AI Value Domains — tier distribution

AI Value Domains are where executive ROI lives. The split is clear: Customer Service, Marketing and Sales, and Digital Workplace have crossed Adopt (proven, measurable, widely deployed). The remaining five domains are at Trial: strategically attractive but requiring more careful programme management and controls.

Marketing & Sales

Adopt

Marketing and Sales is at Adopt. McKinsey reports 71% of enterprises have deployed AI in at least one marketing use case. Salesforce Einstein alone processes 200M or more AI actions per day. This is the fastest-proven ROI domain. CIOs should be helping business teams scale and govern existing AI deployments, not initiating new trials.

Customer Service

Adopt

Customer Service is at Adopt. Zendesk AI Suite adoption at 85% of its enterprise customer base is a defining data point: AI in customer service has crossed the mainstream threshold. Intercom Fin resolving 78% of queries autonomously is a production benchmark that enterprises should be measuring against.

Operations & Supply Chain

Trial

Operations and Supply Chain is at Trial. Strong ROI potential exists in demand forecasting, logistics optimisation, and procurement. SAP AI Core and Blue Yonder are mature in FSI and manufacturing sectors. The barrier is deep ERP integration: most AI supply chain projects live or die based on the quality of the data flowing from ERP systems. Sector-specific deployment patterns vary significantly.

Finance & Risk

Trial

Finance and Risk is at Trial. Finance leaders see clear value in AI for FP&A, fraud detection, and risk modelling. Technology readiness for GenAI in financial analysis requires careful hallucination controls. Bloomberg's LLM integration for financial analytics signals institutional validation. Control criticality is high, reflecting SR 11-7 and Basel III obligations in FSI. Proceed with structured pilots and model risk management from the outset.

HR & Workforce

Trial

HR and Workforce is at Trial. Adoption is constrained by EU AI Act high-risk classification for AI recruitment tools: mandatory human review is now legally required. Bias risk remains a significant concern in hiring and performance management AI. Technology is capable (Eightfold AI, Workday AI) but cultural and legal barriers slow enterprise rollout. A cautious, compliance-first approach is appropriate.

Product & R&D

Trial

Product and R&D is at Trial but is the fastest-growing domain by adoption momentum. GitHub Copilot at 1.8M paid subscribers and Cursor at $100M ARR in under two years reflect the velocity of developer AI adoption. Technology readiness for AI coding assistance is solid. The strategic challenge is intellectual property, code security scanning, and standardising AI use in software development workflows across teams.

Legal & Compliance

Trial

Legal and Compliance is at Trial with the lowest technology readiness in the Value Domains block. Harvey AI's $300M raise and CoCounsel's deployment at 30 or more AmLaw 100 firms demonstrate investor conviction and early enterprise credibility. However, hallucination risk in legal contexts is material: a wrong contract clause or missed regulatory obligation is a direct liability. Deploy AI legal tools for research and drafting assistance only. Require qualified legal review for any output with binding implications.

Digital Workplace / Employee AI

Adopt

Digital Workplace and Employee AI is at Adopt. Microsoft 365 Copilot active in 70% of Fortune 500 companies is the single most striking adoption data point in the Value Domains block. Google Workspace AI (Gemini) provides a competing platform. This category has the broadest end-user reach of any domain: already in production for tens of millions of knowledge workers. The enterprise challenge is not deployment but governance of AI-generated content, data residency for AI-processed documents, and ROI measurement.


Trends for Q2 2026

Agent Platforms: watch for Adopt movement. Microsoft Copilot Studio and Google Agentspace are maturing rapidly. Enterprise production use cases are multiplying. Q2 2026 is when structured pilots should be converting to programmes.

Governance: from optional to obligatory. EU AI Act risk classification deadlines are active. Enterprises without formal AI governance structures are accumulating regulatory risk with every deployment.

Consolidation underway. Prompt Management is being absorbed into LLMOps platforms. Vector Databases face pressure from native database capabilities (PostgreSQL, Redis). These categories will narrow in scope in Q2.

LLMOps and RAG Infrastructure: approaching Adopt. Both categories are converging around managed cloud services. Q2 signals will likely push them over the Adopt threshold.


This radar is built on 500+ signals weighted by source quality, freshness, and enterprise relevance. For a full explanation of the scoring model, read the Methodology.

Newsletter

Enterprise AI, without the noise

Monthly insights on AI market moves, tool maturity, and what matters for enterprise execution.

By subscribing you agree to receive monthly emails. You can unsubscribe at any time.