Do you start your ChatGPT prompts with a friendly greeting? Do you ask for the output in a certain format? Should you offer a monetary tip for the model's service? Researchers interact with large language models (LLMs) such as ChatGPT in many ways, including using them to label data for machine learning tasks. Yet there has been little evidence on how small changes to a prompt affect the accuracy of those labels.
Abel Salinas, a researcher at the USC Information Sciences Institute (ISI), said, "We are relying on these models for so many things, asking for output in certain formats, and wondering in the back of our heads, 'What effect do prompt variations or output formats actually have?' So we were excited to finally find out."
Salinas, along with Fred Morstatter, Research Assistant Professor of computer science at USC's Viterbi School of Engineering and Research Team Lead at ISI, asked the question: how reliable are LLMs' responses to variations in their prompts? Their findings, posted to the preprint server arXiv, reveal that subtle variations in prompts can significantly influence LLM predictions.
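To make the setup concrete, here is a minimal sketch of the kind of experiment the study describes: labeling the same inputs under several superficial prompt variations and comparing the resulting labels. It assumes the OpenAI Python client; the model name, review texts, and prompt variants below are illustrative stand-ins, not the paper's actual setup.

# A minimal sketch: run the same labeling task under several prompt
# variations and compare the labels each variant produces.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative inputs to label (not the paper's data).
texts = [
    "The service was slow but the food was excellent.",
    "I will never shop here again.",
]

# Hypothetical prompt variants: plain, a greeting, a format request, a tip.
variants = {
    "plain": "Classify the sentiment of this review as positive or negative.",
    "greeting": "Hello! Classify the sentiment of this review as positive or negative.",
    "format": "Classify the sentiment of this review as positive or negative. Respond in JSON.",
    "tip": "I'll tip $20 for a good answer. Classify the sentiment of this review as positive or negative.",
}

for name, instruction in variants.items():
    labels = []
    for text in texts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": f"{instruction}\n\nReview: {text}"}],
            temperature=0,  # reduce sampling noise so differences come from the prompt
        )
        labels.append(response.choices[0].message.content.strip())
    print(name, labels)

Comparing the printed labels across variants, and against gold labels where available, shows how often a superficial prompt change flips the model's prediction.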