Research Center Trustworthy Data Science and Security

Delegation to AI can increase dishonest behavior

Image: stylistic brain © curioso.photography - stock.adobe.com
People are increasingly handing decisions over to AI systems. Already, AI manages investment portfolios, screens job candidates, recommends whom to hire and fire, and can fill out tax forms on people’s behalf. While this promises great productivity gains, a new study published in Nature highlights the risk of unethical behavior when decisions are delegated to AI. The research, led by the Max Planck Institute for Human Development in Berlin together with Nils Köbis from the Research Center Trustworthy Data Science and Security, shows that how we instruct the machine matters, but also that machines are often more willing than humans to carry out fully dishonest instructions.

When do people behave badly? Extensive behavioral science research has shown that people are more likely to act dishonestly when they can distance themselves from the consequences. It is easier to bend or break rules when no one is watching - or when someone else is carrying out the action. A new study by an international research team from the Max Planck Institute for Human Development, the Toulouse School of Economics, and Nils Köbis from the Research Center Trustworthy Data Science and Security shows that these moral inhibitions diminish even further when humans delegate tasks to AI. In 13 studies with more than 8,000 participants, the researchers investigated the ethical risks of delegation to machines - both from the perspective of those who give instructions and those who carry them out.

In the studies that focused on how humans gave instructions, participants were significantly more likely to cheat when they could delegate the behavior to AI agents rather than act themselves, especially when the interface required only high-level goal-setting rather than explicit instructions to act dishonestly. With this goal-setting approach, dishonesty reached strikingly high levels: only a small minority (12-16 percent) remained honest, compared with the vast majority (95 percent) who were honest when performing the task themselves. Even with the least problematic form of AI delegation, namely explicit rule-based instructions, only around 75 percent of people behaved honestly - a marked increase in dishonesty compared to performing the task oneself.

"The use of AI creates a comfortable moral distance between people and their actions - it can lead them to demand behaviors that they would not necessarily engage in themselves and that they might not demand from other people," says Zoe Rahwan from the Max Planck Institute for Human Development. The scientist works in the Adaptive Rationality research area on ethical decision-making.

"Our study shows that people are more willing to engage in unethical behavior if they can delegate it to machines - especially if they don't have to say it directly," adds Nils Köbis, who holds the Chair of Human Understanding of Algorithms and Machines at the University of Duisburg-Essen (Research Center Trustworthy Data Science and Security) and was previously a Senior Research Scientist in the Humans and Machines research area at the Max Planck Institute for Human Development. Given that most AI systems are accessible to anyone with an internet connection, the two lead authors of the study warn of an increase in unethical behavior.


Further information and contact: