Quantinuum researchers are laying the groundwork for a new generation of artificial intelligence (AI) systems designed to be both interpretable and accountable, challenging the opaque nature of today's "black box" AI technologies, according to a company blog post.
The team, led by Dr. Stephen Clark, Head of AI at Quantinuum, published a paper on the pre-print server arXiv outlining a shift toward AI frameworks that users can understand and trust, addressing long-standing concerns over AI's decision-making processes and their implications.
At the heart of the AI mystery is the question of how machines learn and make decisions, a process that has largely remained elusive due to the complexity of artificial neural networks, the computer models loosely inspired by the human brain. The opacity of these networks has given rise to what is known as the "interpretability" problem in AI, raising alarms over the risks AI poses when its workings and rationales remain unfathomable.
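The contrast at the core of the interpretability problem is easy to see in a toy example (this sketch is purely illustrative and is not drawn from the Quantinuum paper): a small neural network and an inherently interpretable model can reach similar accuracy on the same task, yet only one of them can explain itself.

```python
# Illustrative sketch (not Quantinuum's method): contrasting an opaque
# neural network with an inherently interpretable model on the same data.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# "Black box": the learned weights carry no human-readable rationale.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(mlp.coefs_[0].shape)  # (4, 16): a matrix of numbers, not an explanation

# Interpretable by construction: every prediction traces to explicit rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))
```

The decision tree prints a readable chain of if-then rules for each prediction, while the network's parameters give no comparable account of why it chose one answer over another; interpretable AI research aims to close exactly that gap.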