Image Credit: Copilot AI, generated by Scientific Frontline prompts
The most recent version of ChatGPT passes a rigorous Turing test, diverging from average human behavior chiefly by being more cooperative.
As artificial intelligence has begun to generate text and images over the last few years, it has sparked a new round of questions about how handing human decisions and activities over to AI will affect society. Will the AI agents we've unleashed prove to be friendly helpmates or the heartless despots seen in dystopian films and fiction?
A team anchored by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, characterized the personality and behavior of ChatGPT's popular AI-driven bots using the tools of psychology and behavioral economics, in a paper published Feb. 22 in the Proceedings of the National Academy of Sciences. The study revealed that the most recent version of the chatbot, version 4, was indistinguishable from its human counterparts. In the instances when the bot chose less common human behaviors, it was more cooperative and altruistic.
“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” said Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research.