My attention was drawn to this news item. It stirs memories from nearly as long ago as the 1961 experiment itself, of my time studying social psychology in 1973. Milgram’s experiment was one of those most often cited on my Cambridge course, but not for the reasons given most prominence in the item.
The suggestion that the research showed a human propensity for evil was never the emphasis back then. The whole stress was laid on the issue of obedience to authority. In retrospect, authoritarianism and its evils were a major theme of social psychology at that time, and it’s instructive to ask why that was. One doesn’t have to look far. Lecturers in psychology in 1973 were products of universities in the sixties – the time of student unrest, anti-establishment protest, Bob Dylan and Timothy Leary – and of course paedophilia as an example of sexual liberation, but we’ll pass that one by.
One of my lecturers (on linguistics) actually divided the world into “Pre-Woodstock” and “Post-Woodstock”. My supervisor on Piaget had been an agitator who led the Cambridge student riots in 1967 (though by my time he had become a Christian – that’s another story). That subordination of science to personal and social values isn’t my main theme today, but bears on it.
The new research cited in the news piece re-examined the original Milgram transcripts and found that a major part of the reason behind subjects’ willingness to (apparently) electrocute victims, unmentioned by Milgram, was his selling them the idea that they had contributed to a greater good, namely the cause of science. I guess this is a useful addition to the original conclusions – that we are more likely to commit dreadful acts not just because an authority tells us to, but because that authority figure tells us we’re contributing to the greater good.
But both the old and the new conclusions seem to me to miss the one factor that unifies them (if you don’t count the question of “capacity for evil” itself in a general sense). And that is what gave Milgram credible authority, and also what made subjects believe they had contributed to some unspecified “greater good”. The answer is, of course, science. Milgram would have been much less persuasive in his commands to turn up the voltage if he’d posed as a gap year classics student winging the experiment whilst the professor was on holiday. And it’s doubtful if they would have felt so good about themselves if he’d told them they’d furthered the cause of Islam, or enabled him to make a step up the career ladder. The subjects clearly already believed (a) that a science professor is a worthy authority figure and (b) that “scientific progress” is a great good in itself.
One can’t draw too many conclusions about science itself from that – the kind of people who volunteer for science research are self-selected believers in the worth of science, just as the kind of people who go to fight in Syria are self-selected believers in Islam, even if radicalisation is a later occurrence. But I wonder if Milgram’s research would have been so successful now, given climate change scepticism, suspicions about scientific fraud scandals, constantly changing health advice and a general disillusion with the whole idea of scientific progress. In 1961 science and scientists carried intrinsically more public authority than they do now, for better or worse.
I’m not sure what lessons there are to draw from this, but there surely are some. It’s not that ideas of authority and the greater good should be automatically suspect. It’s true that the Nazi programme, which ultimately triggered Milgram’s work, was both massively authoritarian in its portrayal of the Führer, and managed a persuasive propaganda campaign for a future thousand-year Utopia (without Jews and other undesirables – a largely social Darwinist programme).
But in free and peaceful societies, sober and rational people generally retain an idea that governments have a valid authority, which may sometimes include asking them to waive the usual prohibitions on killing for the sake of the commonwealth. It’s all about context.
I suppose what characterises both Nazism and Milgram’s experiments is the blanket assumption that something is good, be that the state, science, or of course religion. Perhaps it might conceivably be justified to “lie for Jesus” or “lie for science” or “lie for the state” under some circumstances, but it’s never a sufficient reason simply that they are Jesus, science or the state. A clear moral goal must be in place.
The first proper science book I acquired was Practical Biology at Home (as a school music prize, in 1965 – I never liked to do what was expected of me, which was to choose a music book). I remember being impressed by what its author said about killing animals for dissection:
I feel [readers] will agree that the dignity of science demands unquestionably that the killing must be painless or instantaneous or both.
Only in retrospect do I see that the “dignity of science” demands no such thing: the ethical dimension is (quite rightly) grafted on to science from societal norms or other moral sources. Vivisection was integral to biology from Francis Bacon to the present day, the German medical profession was readily (and generally) persuaded to abandon Hippocratic ethics in the 1930s for the sake of eugenics and racial supremacism, and (of course) Milgram’s own experiments, though not actually committing torture, encouraged subjects to believe they were torturing others – the justification being only that very same dignity of science, vaguely conceived.
Two things follow: firstly that the idea that scientific progress is self-justifying is always dangerous and wrong. That might not be so if it were true that science is objective and explores all that is out there, but as my Cambridge social psychology experience (and modern philosophy of science) shows, science is a humanly targeted and socially conditioned enterprise. Evil scientists will uncover evil things, and society has a duty to understand (and oversee) the goals. Milgram’s subjects should have seen through the white coat and asked, “Exactly how has my torture of these people furthered the greater good?”
And secondly that morality and ethics are prior to, and cannot be derived from, science. It’s not that scientifically derived ethics are bound to be bad, but that they’re bound to be unscientific, and actually derived from values outside science itself. So it’s quite correct to say that Darwin’s science was not responsible for the racial and medical totalitarianism of Social Darwinism. The ethical perversions it embodied were already present in European and American society. What “evolutionary ethics” did, however, was to cover simple evil with the mantle of scientific objectivity. “I believe Jews and blacks are subhuman” became “Nature shows Jews and blacks are subhuman,” just as today, “I don’t think of fetuses as human” becomes “Science has shown fetuses are not human.”
It was, remember, scientists like Galton, Pearson, Weldon and Haeckel who baptised their moral prejudices in this way, and it was the blanket authority of “science” that was persuasive, or at least confirmatory, to those like Kaiser Wilhelm or Adolf Hitler who implemented them on a grand scale.
Likewise, wars for World Democracy can be fought pretty much independently of the moral values on which democracy depends, and Crusades on the authority of religion without the underpinnings of religion being much in evidence. That, I suppose, confirms the thing that Milgram never set out to prove, though my news article suggests he did: humans have a great propensity for evil, and need to guard against whatever authorities may persuade them to indulge in it.
Or as it says in Ex. 23.2, “Do not follow the crowd in doing wrong.”