Changing Ethics by Changing Brains

In his very enjoyable PBS series and the accompanying book, The Brain: The Story of You, neuroscientist David Eagleman writes about the famous “trolley dilemma.” Here is the scenario: A runaway trolley is barreling down the tracks towards a group of four workers. You see that they will all be killed unless you pull a lever diverting the trolley onto a side track where it will kill only one worker. What do you do? In this case almost everyone will pull the lever. You sacrifice one life to save four. Now imagine a slightly different scenario: The same runaway trolley is heading for the group of four workers, and the only way you can save them is by physically pushing a man into the path of the trolley. Do you do it? In this case hardly anyone will agree to push the man.
What is going on here? The utilitarian calculation is the same in each case—sacrifice one life to save four. Yet the decision feels entirely different to most people. Brain imaging shows that in the first scenario, logical thinking and calculating regions of the brain are engaged, but not those involved with emotion. “To the brain, the first scenario is just a math problem,” says Eagleman. In the second scenario, networks involved with emotion get involved in the decision-making process. In the first scenario, we are emotionally detached from the decision to kill. In the second, we feel that the act would be murder. We feel abhorrence at such an act of violence, and this feeling clashes with our problem-solving faculties to create a sense of stress and conflict.
So, are utilitarians cold, heartless calculators, emotionless Mr. Spock types intoning “The good of the many outweighs the good of the few”? Are deontologists, with their emphasis on rights, justice, and respect for persons more empathetic and emotional? Actually, humans, both individually and collectively, can move rapidly and often unreflectively from a utilitarian to a deontological pole and back again. We can easily evoke intuitions going in either direction.
For the past couple of years my coauthor and I have been working on a book on the nuclear tests in the Marshall Islands from 1946 to 1958. These tests resulted in the permanent displacement of hundreds of native Marshallese from their ancestral homes and the irradiation of hundreds of others, particularly in the Castle Bravo event of March 1, 1954, a 15-megaton blast that dropped fallout over an area of 7,000 square miles, including inhabited atolls. Today, atmospheric tests of thermonuclear weapons would be considered irresponsibly, perhaps even insanely, risky, even in “remote” areas. At the time, these tests were widely approved and vigorously defended by American officials, with wide support from the American public. Even the alarming events of Castle Bravo, which included the infamous irradiation of the Japanese fishermen aboard the Lucky Dragon, did not deter or delay American atmospheric testing, which continued until 1963.
What is going on when a policy seems obviously reasonable, responsible, and justifiable at one time and later seems unaccountably risky and irresponsible? Has some sort of moral paradigm shift occurred? Are we now perhaps far more sensitive to the rights of indigenous peoples than we were sixty years ago? I hope so, but recall that far more nuclear tests were carried out in Nevada than in the Pacific, and the tests inside the U.S. irradiated not hundreds, but hundreds of thousands of American citizens. Not only was there little questioning among the American public at the time, the tests were a popular spectacle. From any empty parking lot in Las Vegas you could watch the mushroom clouds sprouting in the desert about sixty miles away. There was even a “Miss Atomic Bomb” beauty contest (I am not making this up). Some locales, such as St. George, Utah, occasionally got very significant doses of radiation, which the Atomic Energy Commission just shrugged off. The John Wayne/Susan Hayward cinematic turkey The Conqueror was filmed in a heavily contaminated Utah valley. Ninety-one of the 220 people involved in the filming developed some form of cancer.
Were 1950s people nuts? No, I think that what was going on was something that relates to the trolley problem. When people are stressed, and survival appears to be at stake, they calculate what is needed to survive. Risks are assessed, and not much time is spent in emotional reflection. Empathy is largely switched off. You do what you have to do—as you perceive it at the time. Later, when the urgent sense of danger has subsided, we permit ourselves more emotional involvement with the victims of our utilitarian decisions. We might even be appalled at how we could seemingly have been so callous or reckless. How could we have thrown those innocent Marshallese (or Utahns) under the trolley? When we have to sacrifice people to survive, we feel that we are pulling the lever. When, later, we are not under survival pressure and are more reflective, it feels like we pushed them onto the tracks.
The mid-1950s was a scary time. Both the U.S. and the Soviet Union were developing thermonuclear weapons, weapons of virtually unlimited destructiveness. Before the H-bomb, policy makers could still pretend that nuclear weapons were just more bang for the buck and did not fundamentally change the nature of warfare. With thermonuclear weapons, it became clear that their use would be genocidal, and so would be the retaliation. Frightened people calculate their odds and take the best ones they can get. At the time, a balance of terror seemed to be the best chance, and if you had H-bombs, they had to be tested. So risks to a few hundred natives, or even a few hundred thousand Americans, seemed no big deal.
I tell my introductory ethics classes that the final test of an ethical theory is whether we can live with it. However rigorously logical, an ethical theory is doomed if we just cannot stomach it, i.e., if its implications clash with our basic, pre-theoretical intuitions about right and wrong. This is why Kant, whose theory is as rigorous and logical as any, floundered so haplessly when confronted with the consequence that his absolute proscription on lying could deliver an innocent person to a murderer. If the only way you can save your father from the insane ax-murderer is to lie, you will lie and feel good about it. Categorical imperatives be damned.
But what can we live with? Will not that vary according to historical circumstance, and, at the individual level, according to neurological states? As I say, the historical record shows that the same society may be under radically different moral regimes at one time or another—to such an extent that the views of those under the opposite regime will seem incomprehensible and perverse. Worse, such dichotomies exist within a given society at a particular time, and proponents of opposing views will be so at odds that each will seem morally obtuse to the other. “Debate” descends into shrill accusation and counter-accusation and ad hominem abuse. Sound familiar?
For instance, if immigrants threaten you, if you regard them as criminals, rapists, and drug dealers, you will easily be persuaded that harsh, even draconian measures are needed to deal with the perceived threat. On the other hand, if you do not perceive such an alleged threat, you will regard such measures as appallingly unjust. The former are ready to pull the lever on immigrants; the latter are not willing to push them onto the tracks. Note that I am not falsely equating these two views of immigrants. I regard the anti-immigrant rhetoric of Donald Trump as blatant demagoguery, a vicious and shameless appeal to ignorance and bigotry. I am merely pointing out that our ethical intuitions at any given time will depend crucially on our brain states as determined by our beliefs about the existence or non-existence of threats. Whether such belief is rationally justifiable is a different question.
The lesson, it seems to me, is that if we want to change hearts and minds, what we really have to change are brains, and to change brains we have to know how they really work. Neuroscience is therefore directly relevant to ethics and politics. Another insight offered by Eagleman is the extent to which the brain is wired for empathy. Brain imaging shows that when we feel empathy, the network called the pain matrix of our brains becomes active, much as when we ourselves feel pain. We do not literally feel the physical pain, but we feel the emotional response to pain. To that extent, then, you do feel another’s pain. The problem is that there are other brain processes that all too readily allow empathy to be shut off when some group is perceived as an outgroup, the “other,” and is thereby literally dehumanized.
How do we prevent the dehumanization, the reduction to the “other,” and the subsequent disabling of the empathy function? How, in short, do we change the brains of our fellow citizens so that they do not relegate some to an “other” status and switch off their empathy for those persons? But in speaking of changing brain states, aren’t we dehumanizing as well? Are we not objectifying and manipulating, speaking in terms of changing the state of an organ rather than in terms of addressing human beings? We are indeed! How to escape this conundrum? By realizing that brains are not just any old organs, like lungs or livers. They are us.
With neuroscience therefore must come neuroethics. I would sum it up like this: Do unto another brain as you would have done to your own brain. I am quite happy to have my brain changed in some ways. Give me reasons to convince me. Tell me stories that make me see the humanity of others and to feel my connection with them. Present me with works of art that enlarge my vision or enhance my depth of feeling. Bond with me through shared experience. I like to have my brain changed in these ways, and I therefore feel comfortable about attempting to change others’ brains in these ways.