For most of artificial intelligence’s history, many researchers expected that building truly capable systems would require a long series of scientific breakthroughs: revolutionary algorithms, deep insights into human cognition, or fundamental advances in our understanding of the brain. While scientific advances have played a role, recent progress has revealed an unexpected insight: much of the improvement in AI capabilities has come simply from scaling up existing AI systems.1

Here, scaling means deploying more computational power, using larger datasets, and building bigger models. This approach has worked surprisingly well so far.2 Just a few years ago, state-of-the-art AI systems struggled with basic tasks like counting.3,4 Today, they can solve complex math problems, write software, create extremely realistic images and videos, and discuss academic topics.
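
To make the notion of scale concrete, a common rule of thumb in the scaling-law literature estimates training compute as roughly 6 × N × D floating-point operations, where N is the number of model parameters and D is the number of training tokens. The sketch below applies this approximation to two hypothetical model sizes; the specific figures are illustrative assumptions, not values drawn from Epoch's dataset.

```python
def training_compute_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOP via the ~6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

# Hypothetical model sizes, chosen only to illustrate the arithmetic:
small = training_compute_flop(n_params=1e9, n_tokens=2e10)   # 1B params, 20B tokens
large = training_compute_flop(n_params=1e11, n_tokens=2e12)  # 100B params, 2T tokens

print(f"small model: {small:.1e} FLOP")          # ~1.2e+20 FLOP
print(f"large model: {large:.1e} FLOP")          # ~1.2e+24 FLOP
print(f"compute ratio: {large / small:,.0f}x")   # 10,000x more compute
```

Growing the model and the dataset by a factor of 100 each multiplies the compute requirement by roughly 10,000, which is why scaling is usually discussed in terms of compute as well as parameters and data.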

This article provides a brief overview of scaling in AI in recent years. The data comes from Epoch, an organization that analyzes trends in computing, data, and investment to understand where AI might be headed.5 Epoch maintains the most extensive dataset on AI models and regularly publishes key figures on how AI is growing and changing.
