One question has relentlessly followed ChatGPT in its trajectory to superstar status in the field of artificial intelligence: Has it met the Turing test of generating output indistinguishable from human response?
Two researchers at the University of California at San Diego say it comes close, but not quite.
ChatGPT may be smart, quick and impressive, and it does a convincing job of exhibiting apparent intelligence. It sounds humanlike in conversation with people, can display humor, emulate the phraseology of teenagers, and pass law school exams.
But on occasion it serves up entirely false information: it hallucinates, and it does not reflect on its own output.