“Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them.” That’s a quote from Leopold Aschenbrenner, a San Francisco-based AI researcher in his mid-20s who was recently fired from OpenAI and who, according to his own website, “recently founded an investment firm focused on artificial general intelligence.” Aschenbrenner, a former economics researcher at Oxford University’s Global Priorities Institute, believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why. I spent last weekend reading the essay, “Situational Awareness: The Decade Ahead,” and I now understand a lot better what is going on, if not in AI, then at least in San Francisco, sanctuary of tech visionaries.
Let me first give you the gist of his argument. Aschenbrenner says that current AI systems are scaling up incredibly quickly. The most relevant drivers of AI performance at the moment are the growth of computing clusters and improvements in algorithms. Neither of these factors is anywhere near saturation. That’s why, he says, performance will continue to improve exponentially for at least several more years, and that is enough for AI to exceed human intelligence on pretty much all tasks by 2027. We would then have artificial general intelligence (AGI), at least according to Aschenbrenner.
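To see why compounding exponential trends produce such dramatic forecasts, here is a back-of-the-envelope sketch in Python. The growth rates are illustrative assumptions loosely modeled on the essay’s framing of roughly half an order of magnitude (OOM) per year each from compute buildout and algorithmic progress; they are not measured figures.

```python
# Back-of-the-envelope compounding of "effective compute", measured in
# orders of magnitude (OOMs). The per-year rates are illustrative
# assumptions in the spirit of the essay, not measured data.

COMPUTE_OOM_PER_YEAR = 0.5  # assumed: bigger training clusters
ALGO_OOM_PER_YEAR = 0.5     # assumed: more efficient algorithms

def effective_compute_multiplier(years: float) -> float:
    """Total multiplier on effective training compute after `years`."""
    total_oom = years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)
    return 10 ** total_oom

for years in (1, 2, 3, 4):
    print(f"{years} year(s): ~{effective_compute_multiplier(years):,.0f}x")
# At these assumed rates, four years compound to ~10,000x (4 OOMs), which
# is the kind of jump behind the "AGI by 2027" extrapolation.
```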
So far, I agree with Aschenbrenner. I think he’s right that it won’t be long now until AI outsmarts humans. I believe this not so much because I think AI is smart but because we are not. The human brain is not a good thinking machine (I speak from personal experience): it’s slow and makes constant mistakes. Just speeding up human thought and avoiding faulty conclusions will have a dramatic impact on the world.