April 27, 2013


The Mind of a Con Man (YUDHIJIT BHATTACHARJEE, 4/28/13, NY Times Magazine)

Stapel's fraud may shine a spotlight on dishonesty in science, but scientific fraud is hardly new. The rogues' gallery of academic liars and cheats features scientific celebrities who have enjoyed similar prominence. The once-celebrated South Korean stem-cell researcher Hwang Woo Suk stunned scientists in his field a few years ago after it was discovered that almost all of the work for which he was known was fraudulent. The prominent Harvard evolutionary biologist Marc Hauser resigned in 2011 during an investigation by the Office of Research Integrity at the Department of Health and Human Services that would end up determining that some of his papers contained fabricated data.

Every year, the Office of Research Integrity uncovers numerous instances of bad behavior by scientists, ranging from lying on grant applications to using fake images in publications. A blog called Retraction Watch publishes a steady stream of posts about papers being retracted by journals because of allegations or evidence of misconduct.

Each case of research fraud that's uncovered triggers a similar response from scientists: first disbelief, then anger, then a tendency to dismiss the perpetrator as one rotten egg in an otherwise honest enterprise. But the scientific misconduct that has come to light in recent years suggests at the very least that the number of bad actors in science isn't as insignificant as many would like to believe. And considered from a more cynical point of view, figures like Hwang and Hauser are not outliers so much as one end of a continuum of dishonest behaviors that extends from the cherry-picking of data to fit a chosen hypothesis -- which many researchers admit is commonplace -- to outright fabrication. Still, the nature and scale of Stapel's fraud set him apart from most other cheating academics. "The extent to which I did it, the longevity of it, makes it extreme," he told me. "Because it is not one paper or 10 but many more."

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. "It was a quest for aesthetics, for beauty -- instead of the truth," he said. [...]

Stapel stayed in Amsterdam for three years after his Ph.D., writing papers that he says got little attention. Nonetheless, his peers viewed him as having made a solid beginning as a researcher, and he won an award from the European Association of Experimental Social Psychology. In 2000, he became a professor at the University of Groningen.

While there, Stapel began testing the idea that priming could affect people without their being aware of it. He devised several experiments in which subjects sat in front of a computer screen on which a word or an image was flashed for one-tenth of a second -- making it difficult for the participants to register the images in their conscious minds. The subjects were then tested on a task to determine if the priming had an effect.

In one experiment conducted with undergraduates recruited from his class, Stapel asked subjects to rate their individual attractiveness after they were flashed an image of either an attractive female face or a very unattractive one. The hypothesis was that subjects exposed to the attractive image would -- through an automatic comparison -- rate themselves as less attractive than subjects exposed to the other image.

The experiment -- and others like it -- didn't give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. "I said -- you know what, I am going to create the data set," he told me.

Sitting at his kitchen table in Groningen, he began typing numbers into his laptop that would give him the outcome he wanted. He knew that the effect he was looking for had to be small in order to be believable; even the most successful psychology experiments rarely yield large effects. The math had to be done in reverse: the individual attractiveness scores that subjects gave themselves on a 0-7 scale needed to be such that Stapel would get a small but significant difference between the average scores of the two conditions he was comparing. He made up individual scores like 4, 5, 3, 3 for subjects who were shown the attractive face. "I tried to make it random, which of course was very hard to do," Stapel told me.

Doing the analysis, Stapel at first ended up getting a bigger difference between the two conditions than was ideal. He went back and tweaked the numbers again. It took a few hours of trial and error, spread out over a few days, to get the data just right.
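The reverse-engineering described above can be sketched in a few lines of Python. The ratings below are invented purely for illustration (they are not Stapel's actual fabricated numbers), and a Welch two-sample t-test stands in for whatever analysis the published paper actually used: the fabricator picks individual 0-7 scores by hand, computes the statistic, and keeps tweaking until the gap between group means is small yet still crosses the significance threshold.

```python
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic, computed from scratch (no SciPy)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample (n-1) variance
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / standard_error

# Invented 0-7 self-ratings, 20 subjects per condition.
# Hypothesis: people flashed an attractive face rate themselves LOWER.
attractive = [3, 4, 5, 4, 3, 5, 4, 5, 3, 4, 5, 4, 3, 5, 4, 5, 3, 4, 4, 5]  # mean 4.1
control    = [5, 4, 4, 6, 5, 4, 5, 4, 4, 5, 4, 6, 5, 4, 4, 5, 4, 5, 4, 5]  # mean 4.6

diff = statistics.mean(control) - statistics.mean(attractive)
t = welch_t(control, attractive)
print(f"mean difference = {diff:.2f}, t = {t:.2f}")
```

With 20 scores per condition the Welch-Satterthwaite degrees of freedom come out around 37, where the two-tailed .05 cutoff is roughly t = 2.03; the invented data above give t of about 2.1 on a half-point mean difference, which is exactly the kind of modest-but-significant gap the passage describes, and why landing on it took trial and error.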

He said he felt both terrible and relieved. The results were published in The Journal of Personality and Social Psychology in 2004. "I realized -- hey, we can do this," he told me.

Stapel's career took off. 

Posted at April 27, 2013 7:08 AM
