Q1 2026 Baseline
Enterprise AI has moved past the proof-of-concept phase. The foundational infrastructure is largely in place. The intelligence layer is being actively built. The governance and control layer is still forming.
This first edition maps 44 AI categories across five strategic blocks, synthesised from 500+ signals collected through Q1 2026. Use it to prioritise: what to standardise, what to pilot, and where to hold.
How to read a category card
12 signals
AI Foundation
12 signals
10 signals
7 signals
7 signals
9 signals
6 signals
AI Data Layer
10 signals
8 signals
8 signals
9 signals
7 signals
7 signals
6 signals
5 signals
AI Enablement
8 signals
7 signals
8 signals
8 signals
8 signals
7 signals
7 signals
6 signals
4 signals
8 signals
AI Trust & Control
8 signals
7 signals
8 signals
8 signals
6 signals
9 signals
8 signals
5 signals
8 signals
4 signals
6 signals
4 signals
AI Value Domains
10 signals
10 signals
7 signals
7 signals
6 signals
8 signals
6 signals
10 signals
Executive Summary
The foundations are ready. Foundation models, APIs, CI/CD pipelines, and data warehouses have crossed the Adopt threshold. These are no longer experimental capabilities. Sixteen categories sit at Adopt. Enterprise procurement teams should be standardising them, not running new pilots.
The intelligence layer is being built. LLMOps, orchestration, agent platforms, RAG infrastructure: all in active enterprise deployment, but without standardised patterns yet. Twenty-five categories sit at Trial. These require structured programmes with clear success criteria and defined exit conditions.
The governance layer is the defining risk. AI Security, Governance, Auditability, and Red-Teaming all carry the highest control criticality in the radar. None are at Adopt. Enterprises are deploying AI faster than they are governing it. The EU AI Act compliance timeline is active. The tools are immature but the obligations are not.
Three capabilities to act on in Q2 2026. Agent Platforms are moving fast (Microsoft and Google platforms now in general availability). Governance programmes need to start before they are mandated. Testing and Evaluation is the most consequential gap in the Enablement block: enterprises are shipping LLM applications without systematic quality gates.
Five Questions for Your AI Steering Committee
Should we invest now, or wait for the market to mature?
Standardise immediately (at Adopt): Foundation Models, LLM Providers, APIs, CI/CD, Workflow Automation, Data Warehouses, PII Protection, Access Control, Compliance, Monitoring, Customer Service AI, Marketing AI, Digital Workplace. These have proven enterprise deployment patterns. The discussion is now about governance, not adoption.
Start structured pilots now (at Trial): Agent Platforms, RAG Infrastructure, LLMOps, Orchestration, Governance, Testing and Evaluation, AI Security, Finance AI, Operations AI. These have clear ROI potential but require defined success criteria and human oversight from day one.
Wait for consolidation (at Assess): Prompt Management, Traceability, Model Risk Management. Tools are fragmented and will consolidate over the next 6 to 12 months. Standardising today locks in the wrong architecture.
Which categories require board-level attention?
Seven categories in this edition carry the highest control criticality: AI Security, Guardrails, Governance, Auditability, Red-Teaming, Model Risk Management, and Traceability. Not one is at Adopt. The board question is not whether the tools are ready. It is whether governance programmes are running before the EU AI Act compliance window closes and before the next production incident.
Where is adoption running ahead of technology readiness?
Three visible mismatches in Q1 2026:
Customer Service and Marketing AI are at Adopt and deployed at scale. Testing and Evaluation (Trial) and Guardrails (Trial), which should govern those deployments, are not consistently in place. You are running live AI in production without systematic quality controls.
Digital Workplace AI (Microsoft 365 Copilot) is active in 70% of Fortune 500. Governance (Trial) and Auditability (Assess) are significantly behind. Most organisations cannot document what decisions their AI has influenced.
Compliance is at Adopt because of regulatory pressure. The technical infrastructure to operationalise it (Policy Enforcement, Auditability, Governance tooling) is at Trial. The obligation is ahead of the capability.
What should we pilot in the next six months?
Four highest-priority actions for Q2 2026:
- Agent Platforms. Run one controlled internal pilot with explicit human oversight checkpoints. Do not deploy customer-facing without a full evaluation cycle first.
- LLMOps. Implement before you scale your first LLM application beyond proof of concept. Costs, drift, and evaluation gaps compound quickly without it.
- Testing and Evaluation. Introduce evaluation gates in your LLM development workflow. Treat it as engineering infrastructure, not QA.
- Governance. Formalise your AI governance structure before your next deployment cycle. Retrofitting governance onto running AI systems is more expensive than building it in from the start.
Which categories are becoming table stakes vs. differentiators?
Table stakes today (standardise, do not seek to differentiate): Foundation Models, LLM Providers, APIs, CI/CD, Workflow Automation, Customer Service AI, Marketing AI, Digital Workplace productivity.
Where differentiation is still possible: RAG infrastructure quality and retrieval precision, LLMOps maturity and cost discipline, Agent Platform architecture design, and the credibility of your AI governance model. The organisations that will separate from peers are not the ones deploying the most AI. They are the ones deploying it more reliably, more safely, and at lower per-unit cost.
AI Foundation
AI Foundation — tier distribution
The AI Foundation block is split. The model and provider layer is at Adopt. The enabling layers (hubs, inference, agents, fine-tuning) are at Trial. Enterprises have clear access to frontier models but lack standardised patterns for managing model diversity, cost, and downstream customisation.
Foundation Models
AdoptFoundation Models carry the highest strategic priority rating in the entire radar. This is not surprising: every enterprise AI programme depends on them. The category is at Adopt, driven by a mature and competitive vendor landscape (OpenAI, Anthropic, Google, Meta, Mistral) and broad enterprise deployment patterns.
The key risk here is velocity, not maturity. The category is evolving faster than enterprise procurement and governance cycles. Models deployed in Q1 2026 may be superseded before enterprise risk assessments are complete. CIOs should lock in contractual model stability SLAs alongside performance SLAs.
LLM Providers
AdoptLLM Providers are the commercial face of foundation models: the API layer where enterprise contracts and SLAs live. The category is at Adopt. Azure OpenAI alone has penetrated 60% or more of Fortune 500, confirming this is now standard enterprise procurement. Data residency, GDPR compliance, and pricing structures are the primary enterprise negotiation axes.
| Signal | Source |
|---|---|
| OpenAI reports $4B+ ARR with strong enterprise contract growth | OpenAI / Reuters |
| Azure OpenAI Service reaches 60% of Fortune 500 | Microsoft Azure |
| Anthropic raises $1.5B Series E to scale enterprise offering | Bloomberg |
Model Hubs / Marketplaces
TrialAt Trial, Model Hubs are a valuable tool for model discovery and evaluation but lack the enterprise governance controls (data residency, access audit, content filtering) needed for production standardisation. Hugging Face Hub passing 1M public models demonstrates ecosystem breadth, not enterprise readiness. The category is consolidating toward cloud-hosted marketplaces (AWS Bedrock Model Garden, Azure AI Foundry), which offer enterprise-grade access but reduce the differentiation of standalone hubs.
| Signal | Source |
|---|---|
| Hugging Face surpasses 1 million public models on Hub | Hugging Face |
| AWS Bedrock Model Garden expands to 100+ third-party models | AWS |
Inference Providers
TrialInference Providers are at Trial. Groq, Together AI, and Fireworks AI are driving significant price competition and latency improvements, making them attractive for cost-sensitive enterprise workloads. However, enterprise-grade SLAs, data residency, and regulatory compliance remain inconsistent across the market. Use for non-sensitive, cost-optimisation scenarios.
| Signal | Source |
|---|---|
| Groq raises $640M to scale LPU inference infrastructure | TechCrunch |
| Together AI launches inference cost benchmarks showing 5x–10x savings vs OpenAI | Together AI |
Agent Platforms
TrialAgent Platforms are the highest-momentum category in the AI Foundation block. Strategic priority is high and enterprise urgency is genuine, but technology readiness is still maturing. The gap between aspiration and current reliability is large. Microsoft Copilot Studio reaching general availability is a significant milestone, but production use cases remain narrowly scoped. Non-deterministic behaviour, tool-use failures, and the absence of enterprise control frameworks make unguarded deployment high-risk. Run structured pilots in controlled environments. Do not deploy without explicit human oversight gates.
Fine-Tuning Platforms
TrialFine-Tuning is at Trial. It delivers measurable quality improvements for narrow tasks but carries meaningful data governance complexity: what data goes into fine-tuning, where it is stored, and who can access the resulting model. Managed services (OpenAI, Anyscale) lower the technical barrier but raise data residency concerns. Most enterprises should default to RAG and prompting before committing to fine-tuning.
| Signal | Source |
|---|---|
| OpenAI Fine-tuning API adds GPT-4o customisation with data privacy controls | OpenAI |
| Anyscale launches enterprise fine-tuning managed service with SOC 2 compliance | Anyscale |
AI Data Layer
AI Data Layer — tier distribution
The AI Data Layer is the most mature block overall, anchored by Adopt-tier Data Lakes/Warehouses and ETL/ELT that predate the GenAI wave. The six Trial categories (Vector Databases, RAG Infrastructure, Knowledge Bases, Data Quality, Data Catalog/Lineage, and Feature Stores) represent the AI-specific data layer that most enterprises are actively building in 2026.
Data Lakes / Warehouses
AdoptThe highest maturity rating in the data block. Data Lakes and Warehouses are the enterprise data substrate that pre-existed the AI wave, and they are now integrating AI natively (Snowflake Cortex, Databricks Lakehouse AI, BigQuery ML). No enterprise starting an AI programme should build an AI data layer before consolidating its data warehouse strategy.
| Signal | Source |
|---|---|
| Snowflake Cortex AI natively integrates LLMs within the data warehouse | Snowflake |
| Databricks Unity Catalog adds AI governance and lineage for ML assets | Databricks |
ETL / ELT
AdoptAt Adopt, ETL/ELT is one of the most mature categories in the radar. The challenge for AI is not the tools — it is the discipline. AI models require fresh, clean, and contextually relevant data. Many enterprises have ETL pipelines but not AI-aware pipelines that manage freshness, semantic consistency, and structured versus unstructured data flows.
| Signal | Source |
|---|---|
| dbt adds AI-assisted lineage and documentation generation | dbt Labs |
| Fivetran reports 8,000+ enterprise customers across 500+ connectors | Fivetran |
Vector Databases
TrialVector Databases are at Trial, just below the Adopt threshold. The category has high strategic relevance because it underpins RAG, semantic search, and recommendation. However, market consolidation is creating architectural uncertainty: native vector support in PostgreSQL (pgvector), Redis, and Elasticsearch threatens standalone vector databases. Evaluate both dedicated and native options before committing to a vector database strategy.
| Signal | Source |
|---|---|
| Pinecone Serverless reaches GA with usage-based pricing | Pinecone |
| pgvector 0.7 adds HNSW index support natively in PostgreSQL | GitHub / pgvector |
RAG Infrastructure
TrialRAG Infrastructure has the highest strategic relevance in the data layer and is used in the majority of enterprise AI applications. It scores Trial because technology readiness is still inconsistent: retrieval quality, reranking, chunking strategies, and evaluation frameworks vary widely across implementations. The gap between a working RAG prototype and a production-grade RAG system is large. Invest in retrieval evaluation before scaling RAG deployments.
| Signal | Source |
|---|---|
| AWS Bedrock Knowledge Bases reaches GA with enterprise data source connectors | AWS |
| LlamaIndex 0.10 introduces modular retrieval pipeline with evaluation framework | LlamaIndex |
Knowledge Bases
TrialKnowledge Bases are at Trial. The challenge is less technology and more content quality: enterprise knowledge repositories (SharePoint, Confluence, Notion) are often unstructured, outdated, and ungoverned. AI-powered retrieval makes the quality gap visible faster. The ROI of an AI knowledge assistant is directly proportional to the quality of the underlying knowledge asset. Treat knowledge curation as a prerequisite, not an afterthought.
| Signal | Source |
|---|---|
| Microsoft SharePoint Copilot integrates enterprise knowledge retrieval at scale | Microsoft |
| Glean raises $260M to expand enterprise AI search and knowledge platform | TechCrunch |
Data Quality
TrialData Quality is at Trial. Gartner cites data quality issues in 60% of failed AI projects. Despite a mature tooling market, adoption is below the Adopt threshold because AI-specific data quality practices (semantic consistency, embedding drift, output quality feedback loops) are still developing. This is the most underinvested category relative to its impact on AI programme success.
| Signal | Source |
|---|---|
| Monte Carlo adds LLM output quality monitoring to data observability platform | Monte Carlo |
| Gartner: 60% of AI project failures trace back to poor data quality | Gartner |
Data Catalog / Lineage
TrialAt Trial with high control criticality, Data Catalog and Lineage is increasingly demanded by AI governance frameworks and the EU AI Act. Adoption is limited to governance-mature organisations. Tools exist (Alation, Collibra, Microsoft Purview) but enterprise-wide rollout remains partial. The EU AI Act's Article 12 documentation requirements will accelerate adoption for high-risk AI programmes.
| Signal | Source |
|---|---|
| Microsoft Purview adds AI data lineage for Azure OpenAI workloads | Microsoft |
| EU AI Act Article 12 mandates data governance documentation for high-risk AI | EU AI Office |
Feature Stores
TrialFeature Stores are at Trial, primarily relevant to enterprises with mature ML teams managing many models. Their role in the GenAI era is less clear: LLM applications rely more on context injection than on pre-computed features. The category is consolidating into data lakehouse platforms (Databricks Feature Store, SageMaker Feature Store). Organisations without dedicated ML teams can likely deprioritise standalone feature store investment.
| Signal | Source |
|---|---|
| Databricks Feature Store integrated into Unity Catalog for governance-driven feature management | Databricks |
AI Enablement
AI Enablement — tier distribution
AI Enablement is the most mature block for foundational infrastructure (APIs, CI/CD, Middleware all at Adopt) but has a significant immaturity gap in AI-specific tooling: LLMOps, Testing/Evaluation, and Orchestration are all at Trial, and Prompt Management is at Assess. The enterprise is good at connecting and delivering software. It is not yet good at managing LLM application lifecycles.
APIs / Connectors
AdoptAPIs and Connectors score among the highest technical maturity ratings in the radar. They are the foundational connectivity layer. No AI programme can operate without APIs connecting models to enterprise data and systems. This is not a category to evaluate. It is a category to standardise.
| Signal | Source |
|---|---|
| OpenAI launches Assistants API with function calling and file handling for enterprise apps | OpenAI |
| Kong API Gateway adds AI plugin layer for LLM traffic management and governance | Kong |
Middleware / iPaaS
AdoptMiddleware and iPaaS is at Adopt. Mature platforms (MuleSoft, Boomi, Azure Integration Services) are now embedding AI orchestration capabilities. For enterprises with an existing iPaaS investment, extending it to handle AI event flows is the path of least resistance.
| Signal | Source |
|---|---|
| MuleSoft Anypoint adds AI integration layer for LLM orchestration | MuleSoft |
Workflow Automation
AdoptWorkflow Automation is at Adopt. Microsoft Power Automate alone processes 1B or more AI actions per month across enterprise customers. AI-powered workflow automation is crossing into mainstream operational deployment. The category sits adjacent to Agent Platforms: the distinction is that workflow automation operates on predefined paths while agent platforms allow emergent task decomposition.
| Signal | Source |
|---|---|
| Microsoft Power Automate AI Builder processes 1B+ AI actions per month | Microsoft |
| n8n surpasses 60,000 GitHub stars and launches enterprise cloud offering | n8n |
MLOps
AdoptMLOps is at Adopt. The category has matured significantly over 2023 to 2025. MLflow, W&B, and SageMaker are now standard tools in ML-heavy organisations. The strategic challenge is that MLOps was designed for traditional ML models, not for LLM applications. LLMOps is the AI-native extension that covers the delta.
| Signal | Source |
|---|---|
| MLflow 2.10 adds LLM evaluation and AI gateway for unified model lifecycle | Databricks / MLflow |
| AWS SageMaker Unified Studio integrates ML lifecycle with GenAI development | AWS |
LLMOps
TrialLLMOps is at Trial. The category is critical: as enterprises scale LLM applications, they need systematic management of prompts, outputs, costs, evaluations, and deployments. Tools (LangSmith, Arize Phoenix, Helicone) are maturing rapidly but the landscape is fragmented and standards are not yet established. Start trialling LLMOps tooling now before scaling LLM applications.
| Signal | Source |
|---|---|
| LangSmith reaches GA with enterprise prompt management and eval features | LangChain |
| Gartner identifies LLMOps as a 'Technology Trigger' in 2025 Hype Cycle | Gartner |
CI/CD
AdoptCI/CD is universally adopted in any organisation with software engineering capability. The AI-specific challenge is extending CI/CD pipelines to include model evaluation gates, data validation checks, and prompt regression tests: capabilities that require LLMOps and Testing/Evaluation tooling to be in place first.
| Signal | Source |
|---|---|
| GitHub Actions surpasses 100M workflow runs per day with AI model deployment templates | GitHub |
Testing / Evaluation
TrialTesting and Evaluation is at Trial with limited enterprise adoption. This is the most consequential gap in the Enablement block. Organisations are deploying LLM applications without systematic evaluation frameworks. Tools exist (Braintrust, Ragas, Promptfoo) but lack enterprise standardisation. Weak evaluations create a direct path to production quality failures and reputational risk. Treat AI evals as a first-class engineering requirement, not an optional QA step.
| Signal | Source |
|---|---|
| Braintrust closes $36M Series A for AI evaluation platform | TechCrunch |
| NIST AI RMF 1.1 introduces systematic AI testing and evaluation guidance | NIST |
Deployment / Release Management
TrialAt Trial, AI-specific deployment management (canary releases, shadow mode, A/B testing for models) is available but not yet standardised. Enterprises using cloud LLM APIs bypass much of this complexity, but self-hosted or fine-tuned models require production-grade release management. As model diversity increases, controlled release patterns will become essential.
Prompt Management
AssessPrompt Management is the only Assess-tier category in AI Enablement. The category is fragmented and being absorbed by LLMOps platforms. Standalone prompt management tools (PromptLayer, Pezzo, Agenta) are useful but unlikely to remain independent categories. Enterprises should address prompt versioning and governance through their LLMOps platform of choice rather than deploying a separate tool.
| Signal | Source |
|---|---|
| Langfuse 2.0 merges prompt management into full LLMOps platform | Langfuse |
Orchestration
TrialOrchestration is at Trial with high strategic priority. As AI applications become more complex (multi-step, multi-model, multi-tool), orchestration frameworks become the connective tissue. LangGraph, Semantic Kernel, and LlamaIndex Workflows are maturing. The category is in a transitional phase: adoption is growing but production-grade standards are not yet established. Evaluate orchestration frameworks as a strategic architectural choice, not a commodity selection.
| Signal | Source |
|---|---|
| Microsoft Semantic Kernel 1.0 reaches GA with enterprise multi-agent support | Microsoft |
| LangGraph used in production by 500+ enterprise teams for stateful workflows | LangChain |
AI Trust & Control
AI Trust & Control — tier distribution
AI Trust and Control is the most strategically critical and operationally immature block in the radar. Four categories are at Adopt (Access Control, PII Protection, Compliance, Monitoring/Observability) but six are at Trial and two at Assess. The categories with the highest control criticality (Governance, Auditability, AI Security, Guardrails) are not yet at Adopt. This mismatch is the defining enterprise AI risk in 2026.
AI Security
TrialAI Security is at Trial with maximum control criticality. The threat landscape is expanding fast: OWASP LLM Top 10 v1.1 formalises prompt injection as the top risk; MITRE ATLAS v4.2 adds 15 new GenAI-specific attack techniques. The tooling market (Protect AI, HiddenLayer, CalypsoAI) is nascent. Most enterprises are not running systematic AI security programmes. This is the most underprotected high-criticality category in the radar.
Guardrails
TrialGuardrails are at Trial. Market maturity is low but control criticality is at maximum. No enterprise AI deployment should go to production without runtime output validation and content filtering. Managed options (AWS Bedrock Guardrails, Azure Content Safety) lower the barrier for basic deployments, but enterprise-grade configurable guardrails for complex multi-turn applications remain difficult to implement correctly.
| Signal | Source |
|---|---|
| Meta releases LlamaGuard 3 for multi-language content safety classification | Meta AI |
| AWS Bedrock Guardrails reaches GA with configurable content filters and PII redaction | AWS |
PII Protection
AdoptPII Protection is at Adopt. GDPR enforcement is creating non-negotiable demand. Microsoft Presidio, AWS Comprehend PII, and Nightfall provide mature detection and redaction capabilities. Every enterprise processing personal data through AI systems must have PII protection in place. This is a solved-enough problem for production deployment.
| Signal | Source |
|---|---|
| GDPR enforcement actions cite AI-processed personal data in 34% of 2024 fines | EU Data Protection Board |
| Microsoft Presidio 2.x deployed at 500+ organisations for AI data anonymisation | Microsoft / GitHub |
Access Control
AdoptAccess Control is at Adopt with the highest overall rating in the Trust and Control block. IAM is a solved enterprise problem; the AI-specific extension is governing agent identities and model endpoint permissions. Okta's AI Identity product extending IAM to AI agents reflects where the category is heading. Enterprises already managing strong IAM are well-positioned. The gap is in applying existing controls to AI-specific resources.
Policy Enforcement
TrialPolicy Enforcement is at Trial. Translating governance decisions into runtime technical controls is hard: it requires both a governance framework and a technical enforcement layer. OPA (Open Policy Agent) provides a mature general-purpose policy engine; AI-specific policy enforcement (usage caps, permitted use cases, prohibited topics) remains largely manual or point-solution-based.
| Signal | Source |
|---|---|
| EU AI Act Code of Practice mandates policy enforcement for General Purpose AI providers | EU AI Office |
Compliance
AdoptCompliance is at Adopt. The EU AI Act enforcement timeline (February 2025 for prohibited practices, August 2025 for risk classification) has created urgent and non-negotiable compliance demand. The category crosses the Adopt threshold primarily because of regulatory pressure, not technology maturity. EU-operating enterprises with high-risk AI systems must be in active compliance programmes today.
Governance
TrialGovernance is at Trial with the highest strategic relevance and control criticality in the block, but low market maturity and adoption. Stanford AI Index 2025 finds that only 28% of large enterprises have a formal AI governance structure. McKinsey identifies governance as the top AI scaling bottleneck. This is the most critical and most underfunded capability in enterprise AI. A formal AI governance programme is not optional at enterprise scale. It is a prerequisite for scaling safely.
| Signal | Source |
|---|---|
| Stanford AI Index 2025: only 28% of large enterprises have a formal AI governance structure | Stanford HAI |
| McKinsey: CIOs cite AI governance as their #1 scaling bottleneck | McKinsey |
Auditability
TrialAuditability is at Trial with among the lowest adoption in the Trust and Control block. EU AI Act Article 12 mandates complete logging for high-risk AI systems. Most enterprises cannot currently evidence their AI decisions to regulators. The tooling (MLflow, W&B audit trails, IBM OpenScale) provides components but not a complete auditability framework. Treat auditability as a compliance infrastructure investment, not an afterthought.
| Signal | Source |
|---|---|
| EU AI Act Article 12 requires complete logging for high-risk AI systems | EU AI Office |
Monitoring / Observability
AdoptMonitoring and Observability reaches Adopt. Datadog LLM Observability GA and Arize AI's $100M Series C signal that the category is crossing from specialist to standard enterprise tooling. Enterprises should have AI observability in production for any LLM application running at meaningful scale. This is no longer optional.
| Signal | Source |
|---|---|
| Datadog LLM Observability reaches GA with token cost tracking and quality scoring | Datadog |
| Arize AI raises $100M Series C for AI observability at enterprise scale | Arize AI / TechCrunch |
Traceability
AssessTraceability is at Assess with the lowest adoption in the Trust and Control block. End-to-end tracing from AI output back to training data, retrieval context, and model version is extremely difficult today. OpenTelemetry's GenAI Semantic Conventions reaching a stable specification is an important foundation, but enterprise-grade AI traceability remains a 2027 capability for most organisations. Monitor the standard's evolution; do not invest heavily in bespoke traceability infrastructure today.
| Signal | Source |
|---|---|
| OpenTelemetry GenAI Semantic Conventions reach stable specification | CNCF / OpenTelemetry |
Red-Teaming
TrialRed-Teaming is at Trial with limited adoption, used almost exclusively by security-conscious AI teams at large enterprises. The EU AI Act mandates adversarial testing for high-risk AI systems, which will create structural demand. Microsoft's open-sourcing of PyRIT and the maturation of Garak lower the barrier for enterprise practitioners. Any enterprise deploying AI in customer-facing, high-stakes, or regulated contexts should initiate red-teaming before production launch.
| Signal | Source |
|---|---|
| Microsoft releases PyRIT v0.3: open-source AI red-teaming framework for enterprise use | Microsoft / GitHub |
| EU AI Act mandates adversarial testing for high-risk AI before deployment | EU AI Office |
Model Risk Management
AssessModel Risk Management is at Assess with the lowest market maturity in the entire radar. Established in financial services through SR 11-7 and TRIM, it is largely absent from other sectors. The Federal Reserve and OCC have issued guidance extending SR 11-7 principles to GenAI, which will drive adoption in FSI. Non-financial enterprises should monitor the frameworks developing here but need not invest immediately unless facing similar regulatory pressure.
| Signal | Source |
|---|---|
| Federal Reserve and OCC issue guidance extending SR 11-7 model risk to generative AI | Federal Reserve / OCC |
AI Value Domains
AI Value Domains — tier distribution
AI Value Domains are where executive ROI lives. The split is clear: Customer Service, Marketing and Sales, and Digital Workplace have crossed Adopt (proven, measurable, widely deployed). The remaining five domains are at Trial: strategically attractive but requiring more careful programme management and controls.
Marketing & Sales
AdoptMarketing and Sales is at Adopt. McKinsey reports 71% of enterprises have deployed AI in at least one marketing use case. Salesforce Einstein alone processes 200M or more AI actions per day. This is the fastest-proven ROI domain. CIOs should be helping business teams scale and govern existing AI deployments, not initiating new trials.
Customer Service
AdoptCustomer Service is at Adopt. Zendesk AI Suite adoption at 85% of its enterprise customer base is a defining data point: AI in customer service has crossed the mainstream threshold. Intercom Fin resolving 78% of queries autonomously is a production benchmark that enterprises should be measuring against.
| Signal | Source |
|---|---|
| Intercom Fin resolves 78% of customer queries autonomously with GPT-4 | Intercom |
| Zendesk AI Suite reports 85% adoption across enterprise customer base | Zendesk |
Operations & Supply Chain
TrialOperations and Supply Chain is at Trial. Strong ROI potential exists in demand forecasting, logistics optimisation, and procurement. SAP AI Core and Blue Yonder are mature in FSI and manufacturing sectors. The barrier is deep ERP integration: most AI supply chain projects live or die based on the quality of the data flowing from ERP systems. Sector-specific deployment patterns vary significantly.
Finance & Risk
TrialFinance and Risk is at Trial. Finance leaders see clear value in AI for FP&A, fraud detection, and risk modelling. Technology readiness for GenAI in financial analysis requires careful hallucination controls. Bloomberg's LLM integration for financial analytics signals institutional validation. Control criticality is high, reflecting SR 11-7 and Basel III obligations in FSI. Proceed with structured pilots and model risk management from the outset.
| Signal | Source |
|---|---|
| Bloomberg launches Bloomberg AI for FP&A and risk analytics with LLM integration | Bloomberg |
HR & Workforce
TrialHR and Workforce is at Trial. Adoption is constrained by EU AI Act high-risk classification for AI recruitment tools: mandatory human review is now legally required. Bias risk remains a significant concern in hiring and performance management AI. Technology is capable (Eightfold AI, Workday AI) but cultural and legal barriers slow enterprise rollout. A cautious, compliance-first approach is appropriate.
| Signal | Source |
|---|---|
| EU AI Act classifies AI recruitment tools as high-risk; mandatory human review required | EU AI Office |
Product & R&D
TrialProduct and R&D is at Trial but is the fastest-growing domain by adoption momentum. GitHub Copilot at 1.8M paid subscribers and Cursor at $100M ARR in under two years reflect the velocity of developer AI adoption. Technology readiness for AI coding assistance is solid. The strategic challenge is intellectual property, code security scanning, and standardising AI use in software development workflows across teams.
| Signal | Source |
|---|---|
| GitHub Copilot surpasses 1.8M paid subscribers with enterprise plan growth | GitHub / Microsoft |
| Cursor reaches $100M ARR in under 2 years as AI-native IDE | The Information |
Legal & Compliance
TrialLegal and Compliance is at Trial with the lowest technology readiness in the Value Domains block. Harvey AI's $300M raise and CoCounsel's deployment at 30 or more AmLaw 100 firms demonstrate investor conviction and early enterprise credibility. However, hallucination risk in legal contexts is material: a wrong contract clause or missed regulatory obligation is a direct liability. Deploy AI legal tools for research and drafting assistance only. Require qualified legal review for any output with binding implications.
| Signal | Source |
|---|---|
| Harvey AI raises $300M at $3B valuation for AI legal platform | Wall Street Journal |
| Thomson Reuters CoCounsel deployed at 30+ AmLaw 100 firms for contract review | Thomson Reuters |
Digital Workplace / Employee AI
AdoptDigital Workplace and Employee AI is at Adopt. Microsoft 365 Copilot active in 70% of Fortune 500 companies is the single most striking adoption data point in the Value Domains block. Google Workspace AI (Gemini) provides a competing platform. This category has the broadest end-user reach of any domain: already in production for tens of millions of knowledge workers. The enterprise challenge is not deployment but governance of AI-generated content, data residency for AI-processed documents, and ROI measurement.
| Signal | Source |
|---|---|
| Microsoft 365 Copilot active in 70% of Fortune 500 companies | Microsoft |
| Google Workspace AI (Gemini) integrated across all tiers including Enterprise |
Trends for Q2 2026
Agent Platforms: watch for Adopt movement. Microsoft Copilot Studio and Google Agentspace are maturing rapidly. Enterprise production use cases are multiplying. Q2 2026 is when structured pilots should be converting to programmes.
Governance: from optional to obligatory. EU AI Act risk classification deadlines are active. Enterprises without formal AI governance structures are accumulating regulatory risk with every deployment.
Consolidation underway. Prompt Management is being absorbed into LLMOps platforms. Vector Databases face pressure from native database capabilities (PostgreSQL, Redis). These categories will narrow in scope in Q2.
LLMOps and RAG Infrastructure: approaching Adopt. Both categories are converging around managed cloud services. Q2 signals will likely push them over the Adopt threshold.
This radar is built on 500+ signals weighted by source quality, freshness, and enterprise relevance. For a full explanation of the scoring model, read the Methodology.