Enterprise AI Diagnostic

Most leadership teams cannot answer three questions about AI in their own enterprise.

How mature it is. How the workforce feels about it. Where it is being deliberately avoided. The Global AI Adoption Benchmark answers all three, in three to four weeks, on the same scale as the global cohort.

Built on NIST AI RMF · ISO/IEC 42001 · EU AI Act · ESRS
Free Assessment

Eleven questions. Three readings. Five minutes.

Drawn directly from the GAAB instrument. Capability, sentiment, and avoidance are scored separately and never folded into one. Your answers shape the readings in real time and place you against an illustrative cohort median.

Section 1 / 4 · Q1 / 11 · Capability
A2.0 · Observability

How completely can your organisation account for the AI tools, platforms, and capabilities currently in active use across the business?

Live Reading

Capability, sentiment, and avoidance, never folded into one.

[Live reading panel: your capability score across the six dimensions (Observability, Accountability, Risk and Control, Capability and Velocity, Strategy and Value, Trustworthiness) plotted against the cohort median, alongside Propensity (personal + organisational), Maturity (organisational confidence), Avoidance intensity, and Workforce reshape exposure.]
The Visibility Gap

Three claims most enterprises cannot disprove

Before any AI strategy, governance, or transformation programme can be sequenced honestly, the leadership team needs to be able to answer the questions below. In most enterprises today they cannot.

3 to 5x

The first run of the GAAB typically surfaces three to five times more AI in active use than leadership has on its current register, including third-party AI embedded in existing software.

1 quarter

Workforce sentiment moves before capability moves. The quarterly cadence converts that lead time into a 90-day early warning system the rest of the dashboard cannot give the board.

3 to 4 weeks

From kickoff to a defensible board-grade reading. Not a discovery exercise, not a survey, not a maturity model. A decision instrument with a 90-day action plan as the deliverable.

What the board receives

Three readings, never folded into one

Capability, sentiment, and avoidance are reported alongside one another. Folding them into a single composite would create the perverse incentive to optimise for the average. The variance between them is the diagnostic.

Reading 01

Capability score

0 to 100, banded against the live global cohort. How mature the organisation is across six dimensions of AI capability. Carries a Reliance Rating so the board knows what can be cited externally.

Reading 02

Sentiment index

0 to 100, in two facets: Propensity to innovate and Maturity to do so safely. The leading indicator. Most boards do not have one for AI. The GAAB gives them one.

Reading 03

Avoidance index

Where AI is being deliberately avoided, by whom, and for what reason. Distinguishes ignorance from refusal. The intervention is in the reason, not the percentage.
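
To make the "never folded into one" design concrete, below is a minimal sketch of how the three readings might be carried side by side, with a simple cohort banding helper and the avoidance reason mix mapped to an intervention. The field names, band labels, reason labels, and intervention mapping are illustrative assumptions, not the GAAB's internal schema.

```python
from bisect import bisect_right
from dataclasses import dataclass

# Illustrative reason -> intervention mapping (an assumption, not the GAAB
# schema): the intervention follows the dominant reason, not the rate.
INTERVENTION_BY_REASON = {
    "no_access": "access",
    "no_training": "training",
    "ethical_objection": "ethics",
    "job_fear": "fear",
}

@dataclass(frozen=True)
class BoardReadings:
    """Three readings reported side by side; deliberately no composite."""
    capability: float            # 0 to 100, banded against the cohort
    sentiment_propensity: float  # 0 to 100, willingness to innovate
    sentiment_maturity: float    # 0 to 100, confidence to do so safely
    avoidance_reasons: dict      # reason label -> share of workforce (0.0 to 1.0)

    def recommended_intervention(self) -> str:
        # The diagnostic is in the reason mix, not the avoidance percentage.
        dominant = max(self.avoidance_reasons, key=self.avoidance_reasons.get)
        return INTERVENTION_BY_REASON.get(dominant, "investigate")

def cohort_band(score: float, cohort_sorted: list) -> str:
    """Illustrative quartile banding against a sorted cohort of scores."""
    pct = bisect_right(cohort_sorted, score) / len(cohort_sorted)
    bands = ["lagging", "developing", "established", "leading"]  # assumed labels
    return bands[min(int(pct * 4), 3)]

readings = BoardReadings(
    capability=62.0,
    sentiment_propensity=71.0,
    sentiment_maturity=48.0,
    avoidance_reasons={"no_training": 0.18, "job_fear": 0.07, "no_access": 0.04},
)
print(cohort_band(readings.capability, sorted([40, 48, 55, 62, 70, 77])))  # established
print(readings.recommended_intervention())  # training
```

There is deliberately no composite field: the variance between the three readings stays visible, which is the point of the design.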

Why this is different

Four things most assessments do not do

  1. Three-layer triangulation

     The Executive Team, every Department Lead, and every Individual Team Member answer the questions only their layer can credibly answer. Where the layers agree, the score is reliable. Where they disagree, the disagreement is the diagnostic (a minimal sketch of this follows the list).

  2. Sentiment as a leading indicator

     Capability moves slowly; sentiment moves first. The Quarterly Pulse gives the leadership team a 90-day early warning system the rest of the dashboard cannot give them.

  3. Avoidance, segmented by reason

     Most assessments ignore deliberate non-use. Two organisations with the same capability score and the same sentiment can have radically different avoidance profiles, and the reason mix indicates the right intervention: access, training, ethics, or fear.

  4. Direct transition to an AI Management System

     The Board Report is engineered to feed the Inaix AI Governance Framework or an existing AIMS programme. It is the first layer of the system, not another report.
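
As a minimal sketch of the Layer Variance Flag referenced in item 1 above: assume each scored question yields a 0 to 100 reading per layer, and a flag is raised when the spread between the highest and lowest layer exceeds a threshold. The 15-point threshold and the layer labels are illustrative assumptions, not GAAB parameters.

```python
# Illustrative threshold: the instrument's actual flagging rule is not public.
VARIANCE_THRESHOLD = 15.0

def layer_variance_flag(layer_scores: dict) -> tuple:
    """Return (spread, flagged) for one question scored by each layer.

    Where the layers agree, the score is reliable; where the spread
    exceeds the threshold, the disagreement itself is the finding.
    """
    spread = max(layer_scores.values()) - min(layer_scores.values())
    return spread, spread > VARIANCE_THRESHOLD

scores = {"executive": 78.0, "department_lead": 61.0, "team_member": 52.0}
spread, flagged = layer_variance_flag(scores)
print(f"spread={spread:.0f}, flagged={flagged}")  # spread=26, flagged=True
```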

How it works

Four steps, three to four weeks

01 · Onboarding (Week 1)

Scope confirmation, anchor month set, communication approved.

02 · Three layers complete the instrument (Weeks 2 to 4)

Every Executive Team member, every Department Lead, and every Individual Team Member in scope responds. Anonymous at Layer 3.

03 · Scoring, variance, reliance (Week 4)

Dimension scores, Layer Variance Flags, Sentiment, Avoidance, Reliance Rating per output.

04 · Board Report and 90-day plan (Week 5)

Annual Baseline Report delivered. NIST Playbook Action List and Inaix Pillar Implementation Map, mapped to ESG disclosure.

From there, the Quarterly Pulse runs in a 10-day window each quarter. After four quarters every Pulse-cadence question carries five readings, enough to classify the trajectory shape.
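
As an illustration of why five readings are enough to classify a shape, the sketch below labels a trajectory from a baseline plus four Pulse readings using a simple slope-and-dispersion rule. The shape labels and thresholds are assumptions for illustration, not the GAAB's classifier.

```python
from statistics import mean, pstdev

def classify_trajectory(readings: list) -> str:
    """Label five quarterly 0 to 100 readings with a trajectory shape.

    Illustrative rule: the average quarter-on-quarter delta gives direction,
    and the dispersion of the deltas separates signal from volatility.
    """
    assert len(readings) == 5, "one baseline plus four Pulse readings"
    deltas = [b - a for a, b in zip(readings, readings[1:])]
    if pstdev(deltas) > 8.0:          # assumed volatility threshold
        return "volatile"
    slope = mean(deltas)
    if slope > 2.0:                   # assumed direction thresholds
        return "rising"
    if slope < -2.0:
        return "declining"
    return "flat"

print(classify_trajectory([48, 51, 55, 58, 62]))  # rising
```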

The six dimensions of AI capability

Equally weighted at one-sixth of the composite. Engineered to align with the functional structure of the NIST AI Risk Management Framework and to cover the seven trustworthy characteristics. A minimal scoring sketch follows the list below.

DIM1
Observability
Whether the organisation can see what AI is in use, where, and how reliably it can be surfaced.
DIM2
Accountability
Who owns AI-influenced outcomes, how incidents are handled, and whether external stakeholders can flag, contest, or appeal AI decisions.
DIM3
Risk and Control
Data sensitivity controls, decision authority boundaries, and pre-deployment evaluation of new AI capability.
DIM4
Capability and Velocity
Adoption pace, distribution of AI capability across the workforce, and the enablement of internal building.
DIM5
Strategy and Value
Whether AI is linked to defined business outcomes, governed at leadership and board level, and measured against value targets.
DIM6
Trustworthiness
Validity and reliability of AI systems, evaluation of bias and disparate impact, and the explainability of AI outputs.
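
Under the equal weighting described above, the composite reduces to a simple mean of the six dimension scores. A minimal sketch, assuming each dimension is already scored on the same 0 to 100 scale (the dictionary keys are illustrative):

```python
DIMENSIONS = [
    "observability", "accountability", "risk_and_control",
    "capability_and_velocity", "strategy_and_value", "trustworthiness",
]

def capability_composite(dimension_scores: dict) -> float:
    """Equal weighting: each dimension contributes exactly one-sixth."""
    missing = [d for d in DIMENSIONS if d not in dimension_scores]
    if missing:
        raise ValueError(f"missing dimension scores: {missing}")
    return sum(dimension_scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

scores = {
    "observability": 55, "accountability": 60, "risk_and_control": 70,
    "capability_and_velocity": 65, "strategy_and_value": 50, "trustworthiness": 58,
}
print(round(capability_composite(scores), 1))  # 59.7
```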

Standards alignment

Every scored question carries five reference tags. The aggregate is a board-ready disclosure profile across the international standards your auditors and regulators are already using.

NIST AI RMF and Playbook
Current and target Profile across GOVERN, MAP, MEASURE, MANAGE, with a coverage view of the seven trustworthy characteristics, and a sequenced Action List for any dimension below 4.0.
ISO/IEC 42001
Operational evidence against Clauses 6.1.3 (AI risk treatment) and 8 (operation), with the Inaix Pillars providing the implementation pathway from current state to certifiable maturity.
EU AI Act
Direct cross-references to Articles 9, 10, 12, 13, 14, 15, 21, 25, and 26, and to Annex III high-risk classification.
ESRS (CSRD) and GRI Standards
G1 governance, S1 own workforce including just transition and worker voice, and S4 consumers and end-users including privacy, non-discrimination, and remedy mechanisms. Direct mappings to GRI 2, 308, 401, 404, 405, 416, 417, and 418.
Two ways to engage

Same instrument. Same scale. Two depths.

Free Assessment

Five minutes. Eleven questions, drawn directly from the full instrument in original wording. A directional reading on the same 0 to 100 scale as the full assessment.

What you get back
  • Free GAAB Score banded against the live cohort.
  • Sentiment quadrant placement.
  • Avoidance flag and cohort comparison.
  • Three personalised takeaways, mapped to your highest-gap dimension.
Enterprise Benchmark

Three to four weeks. The full instrument, completed by every Executive Team member, every Department Lead in scope, and every Individual Team Member in participating departments.

What the board receives
  • Full Board Report with three-layer Gap Map and Layer Variance Flags.
  • NIST AI RMF Profile, NIST Playbook Action List, Inaix Pillar Implementation Map.
  • AI ESG Disclosure Profile and Lateral Signals Suite.
  • Quarterly Pulse Report, with trajectory shape and cohort-relative position.
Who it is for

Run this if any of the following is true

  • 01 · AI usage is increasing across the organisation but is not coordinated centrally.
  • 02 · The leadership team does not have a complete view of AI activity, particularly third-party AI exposure.
  • 03 · Governance frameworks exist on paper but have not been tested against operational reality.
  • 04 · Outcomes from AI initiatives are inconsistent or unmeasured against business value.
  • 05 · ESG, regulatory, or audit obligations require defensible evidence of AI governance posture.
Start your assessment

Three questions a CEO should be able to answer

If you can answer all three with evidence the board would accept, you have a baseline. If you cannot, you do not yet have one. The Free Assessment is the way to find out in five minutes; the Enterprise Benchmark is the way to close the gap in three to four weeks.

The Self-Test
  1. How mature is our AI capability against the global cohort, on a defensible 0 to 100 scale, by sector and by size?
  2. How does the workforce feel about it, separated into willingness and confidence, by department?
  3. Where is AI being deliberately avoided in our business, by what percentage of the workforce, and for what reason?

Most leadership teams cannot answer all three with evidence. The GAAB exists to give them that evidence.