Machine-learning models are quickly becoming common tools in scientific research. These artificial intelligence systems are helping bioengineers discover new potential antibiotics, veterinarians interpret animals’ facial expressions, papyrologists read words on ancient scrolls, mathematicians solve baffling problems and climatologists predict sea-ice movements. Some scientists are even probing large language models’ potential as proxies or replacements for human participants in psychology and behavioral research. In one recent example, computer scientists ran ChatGPT through the conditions of the Milgram shock experiment (the famous study on obedience in which people gave what they believed were increasingly painful electric shocks to an unseen person when told to do so by an authority figure) and other well-known psychology studies. The artificial intelligence model responded much as humans did: 75 percent of simulated participants administered shocks of 300 volts and above.
But relying on these machine-learning algorithms also carries risks. Some of those risks are commonly acknowledged, such as generative AI’s tendency to spit out occasional “hallucinations” (factual inaccuracies or nonsense). Artificial intelligence tools can also replicate and even amplify human biases about characteristics such as race and gender. And the AI boom, which has given rise to complex, trillion-parameter models, requires water- and energy-hungry data centers that likely have high environmental costs.