About that new mega-study on the effectiveness of honesty oaths

Andreas Ortmann
3 min read · Mar 9, 2024


You might have heard about that new megastudy preprint, currently (to judge from the formatting) under review at Nature or one of its offspring journals.

Here is what the abstract says:

Dishonest behaviors such as tax evasion impose significant societal costs. Ex-ante honesty oaths — commitments to honesty before action — have been proposed as useful interventions to counteract dishonest behavior, but the heterogeneity in findings across operationalizations calls their effectiveness into question. We tested 21 honesty oaths (including a baseline oath) — proposed, evaluated, and selected by 44 expert researchers — and a no-oath condition in a megastudy in which 21,506 UK and US participants played an incentivized tax evasion game. Of the 21 interventions, 10 significantly improved tax compliance by 4.5 to 8.5 percentage points, with the most successful nearly halving tax evasion. Limited evidence for moderators was found. Experts and laypeople failed to predict the most effective interventions, but experts’ predictions were more accurate. In conclusion, honesty oaths can be effective in curbing dishonesty but their effectiveness varies depending on content. These findings can help design impactful interventions to curb dishonesty.

These are quite remarkable results, not least given the recent failed attempts to replicate well-known dishonesty studies, prominently the one by a couple of authors also involved in this study (Ariely, Mazar), the ongoing kerfuffle over the curious data irregularities in that original study, and truly large-scale studies like this one (which, curiously, is not referenced in the present write-up).

The details of the megastudy are sneakily presented, and they are sketchy. We learn in the abstract (and, in slightly different wording, on p. 7 of the introduction) that more than 20,000 participants in the UK and the US “played an incentivized tax evasion game”. Wow. So many participants, and incentivization to boot! What’s not to like?!

Unfortunately, it is not until well after the Conclusion on p. 30, in the Methods section starting on that page, that we learn some highly relevant details. The UK and US participants were recruited from Prolific, an online labor market similar to Amazon’s MTurk. On p. 35 we learn that participants received a base payment of £0.60 (less than US$0.80, less than AU$1.20) and could earn a bonus payment of up to £1.20, with average earnings of £0.49. Wow. We never learn how long the experiment took, but it seems safe to say that these were rather low-powered incentives, if you can call them that. Perhaps not surprisingly, almost 5 percent of the participants failed comprehension or attention checks, and about 3 percent reported more than they actually earned (p. 34). (Parenthetically, I note that the experiment seems to have been poorly piloted; see p. 38 for deviations from the preregistration.)

Of course, the authors take the key issue to be the different degrees of (dis)honesty across the 22 treatments, but surely questions need to be asked about the external validity of this “incentivized tax evasion game” and hence about the effectiveness of the various honesty oaths. It is easy to be honest when the stakes are low. Does this transfer to situations where the stakes are actually substantial and where participants are in a repeated-game situation? I doubt it, and I trust that tax authorities do not rely on this kind of poorly incentivized study. It seems that the truly large-scale study I mentioned earlier, with about 30 times as many “participants” (actual taxpayers) and 120 times as many data points (actual tax declarations), provides some important caveats.

Consider making your opinion known by applauding, commenting, or following me here or on Facebook.
