Eugenie Reich’s take on the Schön affair

Last month’s issue of Physics World contains an article by Eugenie Reich on the Schön affair. It is an edited excerpt of the book on the same topic that she has been working hard on for the last few years. I know that because she was already at it when I met her in Vienna in 2005.

Here is a quote that I guess sums up what the author has concluded about the celebrated self-correction properties of science:

The self-correcting process happened, but it was haphazard and disorganized, with a lot of self-doubt along the way. In comparison with the archetypal picture of scientists as an army of self-correctors marching in an organized way towards the truth, this was more of a guerrilla war.

I think Reich’s book should be compulsory reading for all physicists – particularly those involved in condensed matter research. After all, the unmasking of Hendrik Schön relied, according to the author, on the inspired actions of a few individuals who initially had to go against the mainstream. Our obligation as researchers is not only not to commit fraud ourselves, but also to be vigilant about what our colleagues are doing – our collaborators and those whose work we cite in our own papers.

One interesting aspect of the Schön affair is that it happened, as Eugenie mentions, at a time when employees at Bell Labs were under a lot of pressure. This brings to mind the additional responsibility of managers of research organisations to create the conditions under which honest, intelligent research thrives and ‘bullshitting’ opportunists have a hard time. This is particularly relevant to organisations outside academia, such as national or industrial laboratories, where managers wield far more power than in universities. It seems that just the opposite was happening at Bell Labs at the time of Hendrik Schön’s deceptions.

Greg Kochanski wrote a very interesting article on the way the performance review process worked at Bell Labs. He emphasized how the positive effects of this management instrument could turn into their very opposite in times of difficulty – like those being experienced while Schön was working there. Coincidentally (or not?), Kochanski has a blog post on a closely related topic this month. He ventures that

There is a good argument to be made that it is the extreme level of competition in science that drives a lot of fraud and bad behaviour.  And it drives a lot of the self-delusion, too.  It’s much easier to, somehow, never get around to making those potentially embarrassing checks of your results if you are in a hurry and under pressure.

Somehow this does not seem entirely improbable, but if so it is not devoid of irony. It would suggest that scientists lacking job security as well as moral fiber would be deliberately sloppy in their research in order to hang on to scientific careers whose meaning is completely lost once their publications can no longer be trusted. I guess it is possible: some people don’t know when to give up. On the other hand, Hendrik Schön seems to have been aiming much higher: he wanted not only to survive, but to obtain the highest accolades… I guess there is only so much one can understand about the minds of scientists who engage in such behaviours.

5 Comments

  1. This isn’t quite the right interpretation:
    “It would suggest that scientists lacking job security as well as moral fiber would be deliberately sloppy in their research in order to hang on to scientific careers whose meaning is completely lost once their publications can no longer be trusted.”

    It’s probably not deliberate sloppiness. It’s a case of lack of time. All scientists I’ve ever met have more things to do than they can realistically accomplish. Reviews, research, teaching, grant proposals, administration, whatever. It would be more a question of putting off those potentially uncomfortable extra tests, or never quite managing to sit down and think about potential problems.

    It’s (unsurprisingly) more of a problem in fields like linguistics and psychology, where experiments are messy and theories less clearly defined. In any experiment involving humans, there are *always* loose ends that you could check. The experimenters I respect check most of the possible loose ends, cross their fingers, and acknowledge potential problems. However, I’ve met some people who just don’t seem to think about potential complications much.

So, I hope there aren’t many people out there who know they are producing junk research, but continue for the paycheck. I’ve never heard anyone admit it, and I’ve never known such a person well enough to decide on the basis of observation. But I’m sure that there are people producing junk who don’t quite realize it.

1. Point taken: you were not suggesting deliberate sloppiness, but rather a genuine inability to cover all the bases due to lack of time.

But then again, as researchers we are usually involved in as many projects as we have chosen to get into. So in some sense taking on too many projects is already a kind of sloppiness, which I think results from outside (funding) pressures. (Though of course there is also the pure thrill of doing more stuff!)

In any case you are right to point out that there is a difference between, I guess, three categories: those who, though they do not cover all the bases, explicitly acknowledge what has been left undone in their papers, talks, discussions, etc.; those who have perhaps not even realized there was a gap in their argument, presumably due to lack of time; and those who feign the latter while in fact being conscious of the problem. Like you, I have never met anyone who definitely falls into the third category, though unfortunately I have met a number of people who fall right into the second one.

2. By the way, may I add that there is also plenty of room in condensed matter physics to leave stones unturned. For example, ideally one would always check that experimental results are not sample-dependent. But for some materials, growing good-quality samples can take months, or even longer, of trial and error, so it is often not practicable to wait for new samples before publishing the results. (I guess dealing with samples is in some sense like dealing with people…) In general this is fine as long as the samples are clearly identified – so the results can be compared with those obtained on other samples when they eventually turn up.

  2. I agree with you that the general pressure level probably wasn’t an issue for Schön himself. But it probably made his collaborators and managers reluctant to rock the boat.
