In an earlier post, I suggested that one way moral psychologists treat war is as “just another context” within which our regular moral processes and preferences unfold. This treatment is rarely explicit; it just shows up as a random war-based scenario among a bunch of other scenarios, used to test a particular theory about moral reasoning, emotions, behaviours, and whatnot.
But this is not the only way that moral psychologists think about war (i.e. by not really thinking about it). Another way, which is at least as common, tackles the moral problem of war head-on: if we generally think killing is wrong, what’s going on with war? One popular answer is that people are able to morally disengage from immoral behaviour. Moral disengagement can take various forms, but one way of doing it – one you’ve probably heard about – is to dehumanize the enemy. They are vermin, scum, unfeeling zombies, so you don’t need to feel bad for killing them, you don’t need to feel bad that others in your group are killing them, and you don’t expect to be condemned, nor to condemn anyone else, for killing them.
Paul Bloom recently wrote about dehumanization for The New Yorker, but started with the provocative subtitle: “Perpetrators of violence, we’re told, dehumanize their victims. The truth is worse.”
The truth is, the “truth” he’s referring to is a third way for moral psychologists to think about war. It’s a new-ish way of thinking about it (as far as I can tell), and for the most complete account you should read Virtuous Violence, by Tage Rai and Alan Fiske. (I should mention, their argument is not just about war; it’s about violence in general – just as moral disengagement is about violence and other immoral behaviour in general.)
To summarize – Fiske and Rai say that moral disengagement theory is wrong. It’s not the case that your moral compass will always say “do no harm”, and that killers (whether in war or at other times) need to find a way to disengage from that anchor. Sometimes your moral compass says “killing is the morally obligatory thing to do”, and you are fully on board. Or as Bloom puts it, much more neatly:
… morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.
Fans of moral disengagement theory might point out that one way of morally disengaging (as theorized by Bandura, though studied less than dehumanization, for example) is in fact to find a moral justification for your action, which kind of brings Fiske and Rai’s idea back into the disengagement fold.
I find it useful, though, to disengage from the disengagement framework. To instead just look squarely at the context of war, and at morality in war. What counts as virtue and vice in a soldier? What is seen as right and wrong on the battlefield?
The first approach would answer, “the same things as in peace”.
The second approach would answer, “the same things as in peace, but then you find a way to disengage from it to feel better.”
The third approach would answer, “let’s find out.”
Okay, so I’m oversimplifying slightly. They would probably all say “let’s find out”; they are scientists, after all. But given the current state of moral psychology research on war, I think the third approach is in the best position, as far as starting assumptions and generative research frameworks go, to actually find out. You can quote me on that, although I expect myself to change my mind at least once in the next 5 years.