Humans often cooperate, but ample research has shown that they’re conditionally cooperative; that is, they are far more likely to cooperate with those whom they consider “good.”
In large societies, however, people don’t always know the reputations of the people with whom they interact. That’s where reputation monitoring systems—such as the star ratings for eBay sellers or the scores assigned by credit bureaus—come into play, helping guide people’s decisions about whether to help or interact with another person.
In a new paper in the journal Nature Communications, a team from Penn uses mathematical modeling to study how public institutions of reputation monitoring can foster cooperation and also encourage participants to adhere to the institution’s assessments instead of relying on their own subjective judgments of each other’s reputations.
“We show how to construct institutions of public monitoring that foster cooperation, regardless of the social norm of moral judgement,” says Joshua Plotkin, a professor in the Department of Biology in Penn’s School of Arts & Sciences who coauthored the paper with postdoctoral fellows Arunas Radvilavicius and Taylor Kessinger. “And then adherence to the public institution will naturally spread.”
The work explores the concept known as indirect reciprocity. Unlike direct reciprocity, in which two people may take turns helping one another, indirect reciprocity depends on a shared moral system: a person who helps others earns a good reputation, which makes third parties more likely to help that person in turn.
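The dynamic above can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters, not the model from the paper: a public monitor assigns every agent a single, shared binary reputation, donors cooperate only with recipients the monitor labels “good,” and the monitor updates reputations using the “stern judging” norm (one of several norms studied in this literature), with occasional execution errors.

```python
import random

def stern_judging(donor_cooperated, recipient_good):
    """Public monitor's rule (stern judging, chosen here for illustration):
    a donor earns a good reputation by helping a good recipient or by
    refusing to help a bad one; any other action earns a bad reputation."""
    return donor_cooperated == recipient_good

def simulate(n_agents=50, rounds=5000, error=0.05, seed=1):
    """Toy indirect-reciprocity simulation with a public monitor.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    good = [True] * n_agents  # public reputations, everyone starts "good"
    cooperations = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        # Conditional cooperation: intend to help only reputed-good recipients,
        # but with probability `error` the intended action is flipped.
        intend = good[recipient]
        act = intend if rng.random() > error else not intend
        cooperations += act
        # The public monitor broadcasts the donor's updated reputation,
        # so all agents share the same view of who is "good."
        good[donor] = stern_judging(act, good[recipient])
    return cooperations / rounds

if __name__ == "__main__":
    print(f"cooperation rate: {simulate():.2f}")
```

Because reputations are public and shared, agents never disagree about who deserves help, and cooperation remains high despite errors; replacing the single monitor with private, independent judgments is exactly the setting where such agreement breaks down.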