The rise in the capabilities of artificial intelligence (AI) systems has led some to suggest that these systems may soon be conscious. However, this view may underestimate the neurobiological mechanisms underlying human consciousness.
Modern AI systems are capable of many impressive behaviors. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. When we humans interact with ChatGPT, we consciously perceive the text the language model generates, just as you are consciously perceiving this text right now.
The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, operating on clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious.
In new research published in the journal Trends in Neurosciences, Jaan Aru, Matthew Larkum and Mac Shine take a neuroscientific angle on this question.