To hear companies such as OpenAI, the maker of ChatGPT, tell it, artificial general intelligence, or AGI, is the ultimate goal of machine learning and AI research. But what is the measure of a generally intelligent machine? In 1970 computer scientist Marvin Minsky predicted that soon-to-be-developed machines would “read Shakespeare, grease a car, play office politics, tell a joke, have a fight.” Years later the “coffee test,” often attributed to Apple co-founder Steve Wozniak, proposed that AGI will be achieved when a machine can enter a stranger’s home and make a pot of coffee.

Few people agree on what AGI is to begin with, never mind how to achieve it. Experts in computer and cognitive science, and others in policy and ethics, often have their own distinct understanding of the concept (and different opinions about its implications or plausibility). Without a consensus it can be difficult to interpret announcements about AGI or claims about its risks and benefits. Meanwhile, though, the term is popping up with increasing frequency in press releases, interviews and computer science papers. Microsoft researchers declared last year that GPT-4 shows “sparks of AGI”; at the end of May OpenAI confirmed it is training its next-generation machine-learning model, which would boast the “next level of capabilities” on the “path to AGI.” And some prominent computer scientists have argued that, with text-generating large language models, AGI has already been achieved.

To know how to talk about AGI, test for AGI and manage the possibility of AGI, we’ll have to get a better grip on what it actually describes.