Almost three years ago Google DeepMind released "Levels of AGI," a framework of five performance levels and six degrees of autonomy. It looked convincing, but it was unverifiable: anyone could claim their system operated at Level 2, and disputing that claim was almost as hard as proving a unicorn exists.

In March 2026 the paper "Measuring Progress Toward AGI" appeared. Instead of abstract labels, it introduced ten scales built on tools from cognitive psychology. The scales cover perception, generation, reasoning, attention, learning, and two composite abilities. Models can now be compared without appeals to personal taste.

Current benchmarks are already compromised: MMLU and HumanEval have leaked into training corpora, and ChatGPT-style evaluations conflate the model itself with prompt engineering, calculators, and search engines. The new scales separate raw capability from tool-use skill, finally bringing objectivity into the arena.

For investors, this means they can look at a model's real autonomy instead of debating parameter counts. A standardized metric will simplify alignment with regulators and force analysts to recalculate company valuations when a model's scale scores exceed seven. Firms with high scores will enjoy easier access to capital and faster regulatory clearance.

What CEOs should do: demand that AI divisions report results on DeepMind's new scales, map those results onto existing KPIs, and shift resources toward projects that demonstrate high autonomy. This will cut the risk of overvaluation, boost investor confidence, and accelerate regulatory approval.
