OpenAI has raised tens of billions of dollars to develop AI technologies that are changing the world.
But there's one glaring problem: it's still struggling to understand how its tech actually works.
During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped when asked how his company's large language models (LLMs) really function under the hood.
"We certainly have not solved interpretability," he said, as quoted by the Observer, essentially conceding that the company has yet to figure out how to trace its AI models' often bizarre and inaccurate outputs back to the decisions that produced them.
Good luck with that.