In the early days of generative artificial intelligence, laptops would take hours to chug through cumbersome code as AI models slowly learned to write, spell and ultimately output strange and hilarious Halloween costumes, pickup lines or recipes. Optics researcher Janelle Shane was intrigued enough by one such list of recipes—which called for ingredients such as shredded bourbon and chopped water—to put her own laptop on the task. Since 2016 she has blogged about these neural networks’ rapid advancement from endearingly bumbling to surprisingly coherent—and, sometimes, jarringly wrong.

Shane’s 2019 book You Look Like a Thing and I Love You broke down how AI works and what we can (and can’t) expect from it, and recent posts on her blog AI Weirdness have explored the bizarre output of image-generating algorithms, ChatGPT’s attempts at ASCII art and self-criticism, and AI’s other rough edges. Scientific American spoke with Shane about why a spotless giraffe stumps AI, where these models absolutely should not be used and whether a chatbot’s accuracy can ever be fully trusted.