Trends in Frontier AI Model Count: A Forecast to 2028

A data-driven forecast of the growth of large-scale foundation models between 2023 and 2028, assessing how many models will surpass the training compute thresholds set by emerging AI governance frameworks such as the EU AI Act.

CartaNova

Jul 6, 2025

Authors: Iyngkarran Kumar, Sam Manning

Link: https://arxiv.org/abs/2504.16138

Summary

This paper presents a data-driven forecast of how many frontier AI models will exceed key training compute thresholds in the coming years, particularly under regulatory definitions proposed by global AI governance frameworks.

The study aims to help policymakers anticipate and manage the growing regulatory burden associated with general-purpose AI (GPAI) systems. Using historical compute trends, the authors estimate the number of models that will likely cross the compute limits defined by:

  • The EU AI Act – 10²⁵ FLOPs threshold

  • The U.S. AI Executive Order (October 2023) – 10²⁶ FLOPs threshold (labeled as “controlled models”)
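These two cutoffs can be made concrete with a small illustrative check. The snippet below is not from the paper; the threshold constants match the regulatory values above, while the function and example compute figure are hypothetical.

```python
# Illustrative check of a training run against the two regulatory
# thresholds discussed above. Constants are the regulatory values;
# the example compute figure is hypothetical.
EU_AI_ACT_THRESHOLD = 1e25   # FLOPs, EU AI Act GPAI presumption
US_EO_THRESHOLD = 1e26       # FLOPs, October 2023 U.S. Executive Order

def threshold_status(training_flops: float) -> list[str]:
    """Return which regulatory thresholds a training run meets or exceeds."""
    crossed = []
    if training_flops >= EU_AI_ACT_THRESHOLD:
        crossed.append("EU AI Act (1e25 FLOPs)")
    if training_flops >= US_EO_THRESHOLD:
        crossed.append("U.S. Executive Order (1e26 FLOPs)")
    return crossed

# A hypothetical model trained with 4e25 FLOPs crosses only the EU threshold.
print(threshold_status(4e25))  # → ['EU AI Act (1e25 FLOPs)']
```

Because the U.S. threshold is ten times the EU one, every "controlled model" under the Executive Order also falls under the EU AI Act's presumption, but not vice versa.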

Motivation

Regulators are increasingly turning to training compute as a measurable proxy for potential model capabilities, risk level, and oversight requirements. However, little prior work has attempted to forecast how many models might fall under these thresholds in the future.

With rapid innovation in AI hardware, software, and model design, the number of models hitting these thresholds is expected to accelerate, creating new governance challenges such as:

  • Oversaturation of regulatory systems

  • Enforcement resource strain

  • Policy mismatches with real-world model deployment

Forecast Methodology

The authors use the Epoch AI Notable Models Dataset, which includes detailed historical data on AI model training compute from 2017 through 2023. They build forecasts based on:

  • Training compute trends (in FLOPs)

  • Model release rates

  • Anticipated hardware/software efficiency gains

  • Organization behavior patterns

The forecast is run as a Monte Carlo simulation that generates thousands of possible futures, from which median estimates and confidence intervals are derived.
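The general shape of such a simulation can be sketched as follows. This is a minimal illustration, not the paper's actual model: the release rate, the 2023 compute distribution, and the yearly drift in log-compute are all assumed placeholder values.

```python
import math
import random

def poisson(lam: float) -> int:
    """Sample a Poisson-distributed count (Knuth's algorithm)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def forecast(
    n_runs: int = 2000,
    years: int = 5,                  # 2024-2028
    releases_per_year: float = 40.0, # assumed mean notable-model releases
    mu0: float = 24.0,               # assumed mean log10(FLOPs) in 2023
    drift: float = 0.5,              # assumed yearly drift in log10 compute
    sigma: float = 0.8,              # assumed spread in log10 compute
    threshold_log10: float = 25.0,   # EU AI Act threshold, log10(FLOPs)
) -> tuple[int, int, int]:
    """5th percentile, median, and 95th percentile of cumulative crossings."""
    totals = []
    for _ in range(n_runs):
        count = 0
        for year in range(1, years + 1):
            # Each year, a random number of models is released; each model's
            # training compute is drawn from a drifting lognormal distribution.
            for _ in range(poisson(releases_per_year)):
                if random.gauss(mu0 + drift * year, sigma) >= threshold_log10:
                    count += 1
        totals.append(count)
    totals.sort()
    return totals[int(0.05 * n_runs)], totals[n_runs // 2], totals[int(0.95 * n_runs)]

random.seed(0)
lo, med, hi = forecast()
print(f"models above 1e25 FLOPs by 2028: median {med}, 90% interval [{lo}, {hi}]")
```

Each simulated future draws a release count and per-model compute values; repeating this thousands of times yields the distribution from which the median and interval are read off, mirroring the structure (though not the calibration) of the paper's approach.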

Key Projections

  • By the end of 2028, the study predicts:

    • Between 103 and 306 models will surpass the 10²⁵ FLOPs threshold (EU standard)

    • Between 45 and 148 models will surpass the 10²⁶ FLOPs threshold (U.S. standard)

  • Compute growth is projected to be superlinear rather than linear, meaning the number of models crossing each threshold will accelerate year over year rather than increase at a steady rate.
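The acceleration follows from simple compounding. As an illustration (not the paper's calculation, and with an assumed ~4x yearly growth multiplier and hypothetical starting compute), a fixed threshold is reached within a few doublings, and each successive model cohort reaches it earlier in its scaling curve:

```python
# Illustrative arithmetic: compound growth in training compute against
# static thresholds. The 4x yearly multiplier and 1e24 FLOPs starting
# point are assumptions for illustration, not figures from the paper.
def years_to_cross(start_flops: float, yearly_growth: float, threshold: float) -> int:
    """Years of compound growth needed for compute to reach the threshold."""
    years, flops = 0, start_flops
    while flops < threshold:
        flops *= yearly_growth
        years += 1
    return years

print(years_to_cross(1e24, 4.0, 1e25))  # → 2 (EU threshold)
print(years_to_cross(1e24, 4.0, 1e26))  # → 4 (U.S. threshold)
```

Under compounding like this, the population of models above any static cutoff grows faster every year, which is what drives the regulatory-burden concern.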

Introducing Frontier-Connected Thresholds

To address concerns about static regulation becoming outdated, the authors propose the idea of "frontier-connected thresholds." These are thresholds that:

  • Scale dynamically based on the size of the largest known model

  • Adjust over time to reflect real-world model capabilities

  • Provide a moving baseline to reduce over- or under-regulation

This approach could help stabilize regulatory scope despite a fast-moving technology landscape.
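One way to formalize such a moving baseline is to peg the cutoff to a fraction of the largest known training run. This is an assumed formalization for illustration, not the paper's exact definition; the function names and the 0.1 fraction are hypothetical.

```python
# Sketch of a "frontier-connected" threshold: the regulatory cutoff is a
# fixed fraction of the largest known training run, so it moves as the
# frontier advances. The fraction (0.1) is a hypothetical choice.
def frontier_threshold(largest_known_flops: float, fraction: float = 0.1) -> float:
    """Regulatory cutoff as a fraction of the current frontier compute."""
    return fraction * largest_known_flops

def in_scope(model_flops: float, largest_known_flops: float) -> bool:
    """Whether a model falls within regulatory scope under the moving cutoff."""
    return model_flops >= frontier_threshold(largest_known_flops)

# A 2e25 FLOPs model is in scope while the frontier sits at 1e26
# (cutoff 1e25), but drops out of scope once the frontier reaches
# 4e26 (cutoff 4e25).
print(in_scope(2e25, 1e26))  # → True
print(in_scope(2e25, 4e26))  # → False
```

The design trade-off is that scope tracks the frontier rather than a fixed capability level: models no longer near the frontier fall out of scope automatically, which reduces oversaturation but assumes risk correlates with relative rather than absolute compute.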

Policy Implications

  • Regulatory burden will grow significantly under current static thresholds.

  • AI governance bodies may become overwhelmed if thresholds are not adapted.

  • More flexible mechanisms such as tiered thresholds, dynamic scaling, and model classification systems may be necessary.

The authors urge policymakers to anticipate:

  • Rising demand for model registration, review, and monitoring

  • Shifting risk landscapes due to fine-tuning, open weights, and derivative models

  • The potential for developers to intentionally avoid thresholds through architectural changes

Limitations and Uncertainties

  • The dataset may be incomplete or biased toward publicly known models.

  • Forecasts assume continuity in current development trends.

  • The influence of new training paradigms, model compression, or open-weight proliferation could disrupt these projections.

Conclusion

This paper offers an important early warning for the AI governance community: the number of frontier AI models is likely to explode within the next three years, far outpacing the regulatory frameworks currently in place. As such, static compute thresholds may soon become impractical, and smarter, more adaptive governance tools are urgently needed.

By quantifying future model growth and comparing it with existing policy boundaries, the authors provide a foundation for more scalable, sustainable, and risk-proportionate AI oversight.

More Insights

[ARTICLE]

Building Data Governance Architecture on AWS

This diagram illustrates an end-to-end architecture designed to establish robust data governance using a suite of Amazon Web Services (AWS) tools. The structure enables organizations to collect, ingest, store, process, analyze, and visualize data in a secure and scalable environment. The entire flow is divided into six major stages, each fulfilling a key function in the data lifecycle.


[PAPER]

Ontology Development 101: A Guide to Creating Your First Ontology

A practical introduction to ontology creation, this guide outlines step‑by‑step methodology—defining domain scope, reusing existing vocabularies, building class hierarchies, properties, and instances—and addresses complex design issues like semantic relationships and iterative refinement within Protégé‑2000.


[PAPER]

Self‑Rewarding Language Models

This paper introduces Self-Rewarding Language Models, where large language models iteratively generate, evaluate, and optimize their own outputs without relying on external reward models—establishing a new paradigm of self-alignment and performance improvement.
