As artificial intelligence (AI) becomes more sophisticated, it also becomes more opaque. Machine-learning algorithms can grind through massive amounts of data, generating predictions and making decisions without being able to explain to humans what they're doing. In matters of consequence, from hiring decisions to criminal sentencing, should we require justifications? A commentary published today in Science Robotics discusses regulatory efforts to make AI more transparent, explainable, and accountable. Science spoke with the article's lead author, Sandra Wachter, a researcher in data ethics at the University of Oxford in the United Kingdom and the Alan Turing Institute. This interview has been edited for brevity and clarity.