Small rewards make people see the truth in politically unfavorable information

Figuring out why so many people are willing to share misinformation online is a major focus for behavioral scientists. It’s easy to think that partisanship drives it all – people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have shown that many people don’t seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting lots of likes on social media. Given that, it’s unclear what would motivate users to stop sharing things that a little scrutiny would show to be untrue.

So a team of researchers tried the obvious: paying people to be accurate. The work shows that small payments, and even minimal rewards, boost the accuracy of people’s ratings of stories. Almost all of this effect comes from people correctly recognizing stories that don’t favor their political stance as factually accurate. While the payments boosted conservatives’ accuracy the most, they lagged so far behind liberals to begin with that a significant gap remained.

Money for accuracy

The basic outline of the new experiments is pretty simple: take a bunch of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social networking site such as Facebook. The headlines were rated based on their accuracy (i.e., whether they were true or misinformation) and on whether they would be more favorable to liberals or conservatives.

Consistent with previous experiments, participants were more likely to rate headlines that agreed with their political leanings as true. As a result, most of the misinformation rated as true was rated that way because it was consistent with the rater’s political leanings. While this was true on both sides of the political spectrum, conservatives were considerably more likely to rate misinformation as true – an effect seen so often that the researchers cite seven different papers that have previously shown it.

On its own, this sort of replication is useful but not very interesting. The interesting stuff emerged when the researchers started to vary the process. The simplest variation was one where they paid participants a dollar for each story they correctly identified as true.

In news that will surprise no one, the money made people better at identifying when stories weren’t true. In raw numbers, participants got an average of 10.4 of the 16 accuracy ratings correct in the control condition, but over 11 of 16 when payments were involved. The same effect occurred when, instead of a payment, participants were simply told that the researchers would give them an accuracy score once the experiment was complete.

What was most striking about this experiment was that almost all of the improvement came when people were asked to rate the accuracy of statements favoring their political opponents. In other words, the reward made people better at recognizing the truth in statements that, for political reasons, they would prefer to believe were not true.

A smaller gap, but still a gap

The opposite was true when the experimental condition was shifted so that people were asked to identify the stories their political allies would like. Here, accuracy dropped. This suggests that participants’ mindset plays a large role: motivating them to focus on politics caused them to focus less on accuracy. Notably, this effect was about as large as that of the financial reward, just in the opposite direction.

The researchers also ran a condition where participants weren’t shown the source of the headline, so they couldn’t tell whether it came from a partisan-friendly outlet. This made no significant difference to the results.

As noted above, conservatives were generally worse at this than liberals, with the average conservative getting 9.3 of 16 ratings correct and the typical liberal getting 10.9. Both groups saw their accuracy improve when incentives were present, but the effect was larger for conservatives, whose average rose to 10.1 out of 16. While that’s a substantial improvement over their performance without an incentive, it still falls short of what liberals manage when no incentive is on offer at all.
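To put those averages in perspective, here’s a quick back-of-the-envelope calculation using the figures reported above (a rough illustration, not an analysis from the paper; the variable names are ours):

```python
# Back-of-the-envelope: how much of the liberal-conservative accuracy gap
# does the incentive close? Figures are the averages reported in the study
# (correct ratings out of 16 headlines).
liberal_baseline = 10.9           # liberals, no incentive
conservative_baseline = 9.3       # conservatives, no incentive
conservative_incentivized = 10.1  # conservatives, with incentive

gap = liberal_baseline - conservative_baseline                   # 1.6 ratings
improvement = conservative_incentivized - conservative_baseline  # 0.8 ratings

print(f"Partisan gap without incentives: {gap:.1f} of 16 ratings")
print(f"Conservatives' gain from the incentive: {improvement:.1f}")
print(f"Fraction of the gap closed: {improvement / gap:.0%}")  # roughly 50%
```

By this rough measure, the incentive closes only about half of the partisan accuracy gap.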

So while it seems that some of conservatives’ tendency to share misinformation comes from a lack of motivation to get things right, that lack of motivation explains only part of the result.

The research team suggests that while a payment system would probably be impossible to scale, the fact that a simple accuracy score had roughly the same impact might point to a way for social networks to limit the spread of misinformation among their users. But that seems a bit naive.

Fact checks were originally promoted as a way to curb misinformation. But because of the tendencies confirmed by these results, fact-checkers ended up rating more of the material shared by conservatives as misinformation and were ultimately branded as partisan. Similarly, efforts to curb the spread of misinformation on social networks have seen the heads of those networks accused of censoring conservatives in congressional hearings. So even if it works in experiments like these, any attempt to deploy a similar system in the real world would likely be very unpopular in some quarters.

Nature Human Behaviour, 2023. DOI: 10.1038/s41562-023-01540-w (About DOIs).
