The first run of the GAAB typically surfaces materially more AI in active use than leadership has on its current register, including third-party AI embedded in existing software.
Most leadership teams cannot answer three questions about AI in their own enterprise.
How mature it is. How the workforce feels about it. Where it is being deliberately avoided. The Global AI Adoption Benchmark answers all three, in three to four weeks, on the same scale as the global cohort.
Eleven questions. Three readings. Five minutes.
Drawn directly from the GAAB instrument. Capability, sentiment, and avoidance are scored separately and never folded into one. Your answers shape the readings in real time and place you against an illustrative cohort median.
How completely can your organisation account for the AI tools, platforms, and capabilities currently in active use across the business?
Three claims most enterprises cannot disprove
Before any AI strategy, governance, or transformation programme can be sequenced honestly, the leadership team needs to be able to answer the questions below. In most enterprises today they cannot.
Workforce sentiment moves before capability moves. The quarterly cadence converts that lead time into a 90-day early warning system the rest of the dashboard cannot give the board.
From kickoff to a defensible board-grade reading. Not a discovery exercise, not a survey, not a maturity model. A decision instrument with a 90-day action plan as the deliverable.
Three readings, never folded into one
Capability, sentiment, and avoidance are reported alongside one another. Folding them into a single composite would create the perverse incentive to optimise for the average. The variance between them is the diagnostic.
Capability score
0 to 100, banded against the live global cohort. How mature the organisation is across six dimensions of AI capability. Carries a Reliance Rating so the board knows what can be cited externally.
Sentiment index
0 to 100, in two facets: Propensity to innovate, and Maturity to do so safely. The leading indicator. Most boards do not have one for AI. The GAAB gives them one.
Avoidance index
Where AI is being deliberately avoided, by whom, and for what reason. Distinguishes ignorance from refusal. The intervention is in the reason, not the percentage.
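The segmentation described above can be sketched in a few lines. This is a hypothetical illustration only: the field names, data shape, and the four reason categories (access, training, ethics, fear, mirroring those named elsewhere on this page) are assumptions, not the GAAB's actual scoring logic.

```python
from collections import Counter

# Illustrative reason categories; the real instrument's taxonomy may differ.
REASONS = ("access", "training", "ethics", "fear")

def avoidance_profile(responses: list[dict]) -> dict[str, float]:
    """Share of all respondents deliberately avoiding AI, broken out by reason.

    Each response is assumed to carry an 'avoids_ai' flag and a 'reason' field.
    """
    total = len(responses)
    counts = Counter(r["reason"] for r in responses if r.get("avoids_ai"))
    return {reason: counts.get(reason, 0) / total for reason in REASONS}

sample = [
    {"avoids_ai": True, "reason": "training"},
    {"avoids_ai": True, "reason": "fear"},
    {"avoids_ai": False, "reason": None},
    {"avoids_ai": True, "reason": "training"},
]
print(avoidance_profile(sample))
```

The point the sketch makes is the one in the text: two organisations can share the same headline avoidance percentage while the reason mix, and therefore the right intervention, differs entirely.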
Four things most assessments do not do
- 01
Three-layer triangulation
The Executive Team, every Department Lead, and every Individual Team Member answer the questions only their layer can credibly answer. Where the layers agree, the score is reliable. Where they disagree, the disagreement is the diagnostic.
- 02
Sentiment as a leading indicator
Capability moves slowly; sentiment moves first. The Quarterly Pulse gives the leadership team a 90-day early warning system the rest of the dashboard cannot give them.
- 03
Avoidance, segmented by reason
Most assessments ignore deliberate non-use. Two organisations with the same capability score and the same sentiment can have radically different avoidance profiles, and the reason mix indicates the right intervention: access, training, ethics, or fear.
- 04
Direct transition to an AI Management System
The Board Report is engineered to feed the Inaix AI Governance Framework or an existing AIMS programme. It is the first layer of the system, not another report.
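The three-layer triangulation and Layer Variance Flags described above can be sketched as follows. Everything here is an assumption for illustration: the function name, the 10-point flag threshold, and the data layout are invented, not the GAAB's published method.

```python
# Hypothetical sketch: flag dimensions where the three layers disagree.
LAYERS = ("executive", "department_lead", "team_member")

def variance_flags(scores: dict[str, dict[str, float]],
                   threshold: float = 10.0) -> dict[str, bool]:
    """scores maps dimension -> {layer: 0-100 score}.

    A dimension is flagged when the spread across the three layers
    exceeds the threshold; where the layers agree, no flag is raised.
    """
    flags = {}
    for dimension, by_layer in scores.items():
        values = [by_layer[layer] for layer in LAYERS]
        flags[dimension] = (max(values) - min(values)) > threshold
    return flags

example = {
    "data_readiness": {"executive": 72, "department_lead": 68, "team_member": 70},
    "governance":     {"executive": 80, "department_lead": 55, "team_member": 48},
}
print(variance_flags(example))
# governance is flagged: leadership rates it far higher than the workforce does.
```

Under this toy data, the flag itself is the finding: the number is less informative than the fact that the layers cannot agree on it.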
Four steps, three to four weeks
Onboarding
Scope confirmation, anchor month set, communication approved.
Three layers complete the instrument
Every Executive Team member, every Department Lead, and every Individual Team Member in scope responds. Anonymous at Layer 3.
Scoring, variance, reliance
Dimension scores, Layer Variance Flags, Sentiment, Avoidance, Reliance Rating per output.
Board Report and 90-day plan
Annual Baseline Report delivered. NIST Playbook Action List and Inaix Pillar Implementation Map, mapped to ESG disclosure.
From there, the Quarterly Pulse runs in a 10-day window each quarter. After four quarters every Pulse-cadence question carries five readings, enough to classify the trajectory shape.
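One way to picture what "classify the trajectory shape" could mean over five readings is the sketch below. The labels and the tolerance value are illustrative assumptions; the GAAB's actual trajectory taxonomy is not described on this page.

```python
# Hypothetical sketch: classify the shape of five quarterly readings
# (baseline plus four Pulses) on the 0-100 scale.
def trajectory_shape(readings: list[float], tolerance: float = 2.0) -> str:
    """Label a series as rising, falling, flat, or mixed.

    The tolerance defines how large a quarter-on-quarter move must be
    to count as a genuine shift rather than noise.
    """
    deltas = [b - a for a, b in zip(readings, readings[1:])]
    if all(d > tolerance for d in deltas):
        return "rising"
    if all(d < -tolerance for d in deltas):
        return "falling"
    if all(abs(d) <= tolerance for d in deltas):
        return "flat"
    return "mixed"

print(trajectory_shape([52, 55, 59, 64, 70]))  # steadily improving series
```

Five readings is the minimum that makes such a classification defensible: with fewer, any shape can be fit to the data.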
The six dimensions of AI capability
Each dimension is equally weighted at one-sixth of the capability composite. The six are engineered to align with the functional structure of the NIST AI Risk Management Framework and to cover its seven characteristics of trustworthy AI.
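The equal weighting stated above is simple arithmetic: the capability composite is the mean of six 0 to 100 dimension scores, each carrying weight one-sixth. A minimal sketch, with invented scores:

```python
# The capability composite as the mean of six equally weighted dimensions.
DIMENSIONS = 6

def capability_score(dimension_scores: list[float]) -> float:
    """Average six 0-100 dimension scores; each contributes exactly 1/6."""
    assert len(dimension_scores) == DIMENSIONS, "expects one score per dimension"
    return sum(dimension_scores) / DIMENSIONS

# Illustrative dimension scores only.
print(capability_score([70, 65, 80, 55, 60, 72]))  # -> 67.0
```

Equal weighting means no single dimension can be traded off silently against another; a weak dimension always drags the composite by its full one-sixth share.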
Standards alignment
Every scored question carries five reference tags. The aggregate is a board-ready disclosure profile across the international standards your auditors and regulators are already using.
Same instrument. Same scale. Two depths.
Five minutes. Twelve questions, drawn directly from the full instrument in their original wording. A directional reading on the same 0 to 100 scale as the full assessment.
- Free GAAB Score banded against the live cohort.
- Sentiment quadrant placement.
- Avoidance flag and cohort comparison.
- Three personalised takeaways, mapped to your highest-gap dimension.
Three to four weeks. The full instrument, completed by every Executive Team member, every Department Lead in scope, and every Individual Team Member in participating departments.
- Full Board Report with three-layer Gap Map and Layer Variance Flags.
- NIST AI RMF Profile, NIST Playbook Action List, Inaix Pillar Implementation Map.
- AI ESG Disclosure Profile and Lateral Signals Suite.
- Quarterly Pulse Report, with trajectory shape and cohort-relative position.
Run this if any of the following is true
- 01 AI usage is increasing across the organisation but is not coordinated centrally.
- 02 The leadership team does not have a complete view of AI activity, particularly third-party AI exposure.
- 03 Governance frameworks exist on paper but have not been tested against operational reality.
- 04 Outcomes from AI initiatives are inconsistent or unmeasured against business value.
- 05 ESG, regulatory, or audit obligations require defensible evidence of AI governance posture.
Three questions a CEO should be able to answer
If you can answer all three with evidence the board would accept, you have a baseline. If you cannot, you do not yet have one. The Free Assessment is the way to find out in five minutes; the Enterprise Benchmark is the way to close the gap in three to four weeks.
- 01 How mature is our AI capability against the global cohort, on a defensible 0 to 100 scale, by sector and by size?
- 02 How does the workforce feel about it, separated into willingness and confidence, by department?
- 03 Where is AI being deliberately avoided in our business, by what percentage of the workforce, and for what reason?
Most leadership teams cannot answer all three with evidence. The GAAB exists to give them that evidence.