Andreas Ortmann
Nov 13, 2023

Reflections on the Ariely-Gino saga, the M-CaP, and the swamp that is the Poets&Quants ecology of (wannabe) sellebrities …

Stop the press: instead of an executive summary, an updated version can be found here.

On 6 November the much anticipated (and repeatedly postponed) launch of the Many-Co-authors project (M-CaP) finally happened. It triggered another flurry of social-media activity on X as well as on various anonymous sites. Francesca Gino also had her say, complaining that she was being singled out and set up to be thrown under the bus.

… auditing only my papers actively ignores a deeper reflection for the field. Why is it that the focus of these efforts is solely on me?

Her point that the M-CaP site will deflect from deeper reflection for the “field” is well taken; in fact, some of the contributors have taken their cue from her (see here and here) and have been mostly evasive.

The Many-Co-authors project was initiated by some of Gino’s more frequent co-authors (Max Bazerman, Julia Minson, Don Moore, Juliana Schroeder, Maurice Schweitzer) and Uri Simonsohn (a one-off Gino co-author and also one of the Data Colada trio that followed up on Zoé Ziani’s whistle-blowing on a Gino et al. 2014 paper). Notably, Ariely, Brooks, Cunningham, Galinsky, Kouchaki, Norton, and Staats (all of whom have more than half a dozen papers with Gino) were not among the organizers of the project. Neither was Gino, who apparently was invited to contribute but declined.

Gino’s co-authors auditing themselves is a questionable strategy, as many have noted; still, the M-CaP site (mnemonically easy to remember as Mad-CaP) seems a useful starting point for cleaning up what appears to be a deeply corrupted ecology, at the center of which are Ariely and Gino and similar (wannabe) sellebrities.

Within a day of M-CaP going live, Stephanie Lee reported:

So far, underlying data is available for relatively few of the papers — often because Gino’s collaborators have indicated that they do not have it. Out of the 120 papers that were listed as of Monday and relied on original data, around 20 contained links where accompanying data sets could be downloaded.

And Quentin Andre (drawing on a graph provided by M-CaP) tweeted:

On social media, scores of gems were shared, such as this one:

Following the emerging concerns for data falsification in other papers identified by DataColada, we reran all the analyses for Studies 1 and 2. Our results correspond to those reported in the paper. https://manycoauthors.org/gino/127

Note that the data were not made available for others to reproduce and scrutinize the results, even though such availability has become the norm at every respectable journal.

Similarly for paper 111:

Following the emerging recent concerns for data falsification in other papers by DataColada, we reran all the analyses and were able to replicate the findings reported in the paper. https://manycoauthors.org/gino/111

The M-CaP corresponding co-author wrote:

In 2018, a replication study was published in Journal of Personality and Social Psychology, with another team of researchers who collected data using our manipulation on different samples.

While “another team of researchers” was involved in the data collection of the replication study, one of the co-authors of the replication study was the M-CaP corresponding author herself. That is not bad per se, although it is hardly an arm’s-length replication (which admittedly has its own problems; see this excellent piece).

Still, it would be desirable for the authors to make the data publicly available.

Another gem:

I will not be publicly posting the data or replication packet because of concerns that evaluations will not be well-adjudicated in the current public sphere. https://manycoauthors.org/gino/121

To that author’s credit, he reversed himself within 24 hours, after this statement drew considerable reaction on social media. Kudos.

More gems:

As I remember it, the first author (Zhong) and I were provided with fully written Results sections for Experiments 1 and 3, emailed to us by the third author (Gino), but were never provided with any of the data used to generate these write-ups. … https://manycoauthors.org/gino/128

As I remember it, the second author (Bohns) and I were provided with fully written Results sections for all three experiments, emailed to us by the first author (Gino). Neither first nor second author had access to the raw data that the first author used to generate these write-ups. … https://manycoauthors.org/gino/128

Again, no data are available for others to reproduce and analyze the results.

Similarly for paper number 129.

That is a shame, because it is the public availability of data that allows appropriate scrutiny, here of yet another paper in which Gino was involved:

For the details see Nick Brown’s analysis here.

Overall, it is quite amazing to see how

  • few data are being shared, even now (and, apparently, back then not even among co-authors);
  • the division of labor within the teams plays out (e.g., see paper 53);
  • often Gino indeed seems to have been the sole provider of the data (especially in the early years); this is eerily reminiscent of Stapel’s practices a few years earlier.

To the extent that the M-CaP has shone a light on all this, it has fulfilled an important purpose. It has also delineated, to some extent, the swamp that needs to be drained. But the next step needs to be for more data to be posted that currently are not, even though they seem to be available. And, no, it is not bullying to ask for these data, given the reputational hit that behavioral science (the “field”) and adjacent areas have taken and will continue to take until this swamp has been drained once and for all.

It will take a while for all the gems to be dug out, but it seems not too early to conclude that this whole ecology is knee-deep in the hoopla. How deep, we won’t know until all authors in this space have posted their data, or at least made them accessible in principle.

And, no, it is simply not correct to argue:

We have done some simple analyses on the information provided and, just as one silver lining, a significant minority of the papers (about 40%) contain no studies handled by Gino. These papers can be treated just like any other paper despite having Gino as a co-author on them.

This is not a silver lining; it’s an ill-advised attempt at deflection; Gino got that right.

At this point the whole Ariely-Gino network is under suspicion, as it should be, because its members, as we now have reason to believe, allowed this whole scheme to go on for more than a decade.

I could not agree more with this tweet, posted in response to this tweet by one of the drivers behind the M-CaP (JS):

It is of course not just data availability that is at stake. It is about:

=> the design and implementation of studies, i.e., appropriate experimental practices;

=> the pre-registration of hypotheses and analysis plans;

=> the hands-on involvement of the experimenter.

In other words, outright fraud is but the proverbial tip of the iceberg. Shady questionable research practices are as much cause for worry. Not that this is any news; see this entry (and the commentary here) on Gelman’s Statistical Modeling, Causal Inference, and Social Science in 2016: Clarke’s Law: Any sufficiently crappy research is indistinguishable from fraud. And the sequel to it: Clarke’s Law: Any sufficiently crappy research is indistinguishable from fraud (pizza gate edition).

I submit that the poor replicability of studies in the Ariely-Gino et al. ecology, and for that matter in marketing, is a consequence of the prevailing experimental and statistical practices as well as of a lack of proper theorizing.

It’s an interesting question why and how “this goofy psychology research” has become so popular, and Gelman and various commentators have given answers to that question that are worth a read. Says Gelman:

The answer is that there has been a sort of transubstantiation, whereby the success (such as it was) of scientific studies had the effect of elevating the researchers involved into a higher state, granting them authority so that their evidence-free speculations were given the status of scientific findings.

We have been learning that the success (such as it was) of scientific studies seems in too many cases to have been undeserved and damaging to the science enterprise as such. And, unfortunately, we are far from understanding how many folks in this ecology have undeserved reputations. (Recall Hardisty’s call on what needs to be done now.) But … surely it is time to reflect on silly enablers like Poets&Quants and their pathetic 50Thinkers ranking of “influential business professors”.

If viability and visibility (and a jet-set life, fat speaking fees, interviews with big outlets, and well-paying consulting gigs) become the KPIs du jour, no one should be surprised that sound science gets hijacked. It is simply not good enough to clamor for rigor and relevance if the reality is starkly different.

To state the self-evident once again: we need a complete audit of all those in the Ariely-Gino network and of those who have become sellebrities. It would also be progress to agree on the number of retractions that should bar someone from becoming any kind of gatekeeper (e.g., editor at a journal): two? Three?

Back to Gino. She complained that she was singled out. Well, yes. But … all her accusatory statements (about defamation and discrimination and procedural failures) and her lawsuit have set her up for this treatment. It will be interesting to see whether Harvard’s and Data Colada’s motions to dismiss her suit will succeed. My bet is they will. (Nothing like a testable hypothesis, no?)

It is certainly rich for her, as the one who filed a lawsuit to instill fear in good-faith data sleuths, to comment:

Unfortunately, though, the MCAP is raising a lot of fear. Some collaborators told me they fear excessive scrutiny into their work, motivated by animus and directed to certain groups of people (women more so than men). They fear becoming targets of false accusations. They fear the behavioral science field lost its way by having scholars accusing others instead of engaging in dialogue. I don’t think these fears will make behavioral science better.
