Atheist neuroscientist Sam Harris's book The Moral Landscape: How Science Can Determine Human Values (2010) argues for a consequentialist and realist moral theory where the good is whatever promotes human well-being. Harris says that insofar as there are objective facts about what makes humans thrive, there are objective facts about how we ought to treat other people. In any one situation, there may be multiple relevant types of well-being to be considered, it may be difficult to gather data and assess risk, and there may be dissenting opinions, but this does not mean the moral question has no correct answer. Moral facts may exist even if we are untalented at perceiving them and tend to disagree about them.
There are facts about morality, he says, because there are facts about well-being. "If we were to discover a new tribe in the Amazon tomorrow, there is not a scientist alive who would assume a priori that these people must enjoy optimal physical health and material prosperity," Harris writes. Scientists would want to study the tribe's health and prosperity before passing judgment. On the other hand, he suspects that
"news that these jolly people enjoy sacrificing their firstborn children to imaginary gods would prompt many (even most) anthropologists to say that this tribe was in possession of an alternate moral code every bit as valid and impervious to refutation as our own...The disparity between how we think about physical health and mental/societal health reveals a bizarre double standard: one that is predicated on our not knowing - or, rather, on our pretending not to know - anything at all about human well-being."
We do indeed know some things about individual and social well-being, Harris says, such as that killing children doesn't contribute to it. "But this notion of 'ought' is an artificial and needlessly confusing way to think about moral choice," he qualifies. "For instance, to say that we ought to treat children with kindness seems identical to saying that everyone will tend to be better off if we do."
He bravely acknowledges some of the problems with consequentialism. For example, he quotes philosopher Patricia Churchland who has pointed out that "no one has the slightest idea how to compare the mild headache of five million against the broken legs of two, or the needs of one's own two children against the needs of a hundred unrelated brain-damaged children in Serbia." He also mentions the problem of calculating well-being as raised by philosopher Derek Parfit: If total well-being is our aim, it must be better to fill the Earth with hundreds of billions of people who have the barest glimmer of joy than to make the current seven billion people very happy. If, on the other hand, we are to focus on the average well-being per capita, we should euthanize our unhappiest people and we should prefer large numbers of mostly miserable people over one exceedingly miserable person. We would not normally intuit these to be moral goals, so perhaps something is amiss with these approaches to consequentialism.
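The arithmetic behind Parfit's worry can be made concrete with a toy calculation. A minimal sketch, assuming uniform well-being scores within each group; the populations and scores below are hypothetical numbers chosen only to make the comparison visible, and do not appear in Harris's book:

```python
# Toy illustration of Parfit's objections to two consequentialist accounting rules.
# All numbers are hypothetical, chosen only to make the arithmetic visible.

def total_wellbeing(population, per_person_score):
    """Sum of well-being across a uniform population."""
    return population * per_person_score

# Maximizing the TOTAL favors a vast, barely-happy population...
happy_few = total_wellbeing(7_000_000_000, 90)      # 7 billion very happy people
glimmer_many = total_wellbeing(900_000_000_000, 1)  # 900 billion with a bare glimmer of joy
assert glimmer_many > happy_few  # the "repugnant" world wins on total well-being

# ...while maximizing the AVERAGE rewards removing the unhappiest members.
scores = [90, 85, 80, 5]                      # one very unhappy person drags the mean down
before = sum(scores) / len(scores)            # 65.0
after = sum(scores[:-1]) / len(scores[:-1])   # 85.0 after "euthanizing" the unhappiest
assert after > before  # average utilitarianism endorses the removal

print(f"total: {glimmer_many} vs {happy_few}; average: {before:.1f} -> {after:.1f}")
```

Under the total rule, the enormous low-well-being world dominates; under the average rule, culling the least happy raises the score. Both are exactly the counterintuitive results the review describes.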
Harris also acknowledges John Rawls's criticism that fair play and optimum outcomes sometimes seem at odds with each other. What if enslaving a few people would make the rest of humanity very happy? In a consequentialist moral theory, why can't we treat people as means to the ends of others? Harris doesn't try to tackle too much in his response to these objections - which is probably the right thing to do within the scope of a book that is already quite ambitious - but it is important to realize that these questions are left open. This kind of radical uncertainty doesn't lend support to his idea that moral questions have right and wrong answers. It isn't just that the calculation is too hard to crunch; it's that it looks suspiciously like it can't be run at all.
Harris explores the evolution of morality and its roots in cooperation and language, referring to the work of William Hamilton on kin selection, Robert Trivers on reciprocal altruism, and Geoffrey Miller on sexual selection. He talks at some length about how the brain's medial prefrontal cortex (MPFC) is associated with both mathematical and ethical beliefs; people with damage to that area are more likely to remove emotion from moral analyses. He correlates a healthy amount of fear with moral understanding.
He points out that, as Adam Smith noted, we tend not to be swayed by anonymous and distant suffering, even if we know it involves large numbers of people. He mentions the work of psychologist Paul Slovic, who has pointed out that this leads to "genocide neglect." Harris concludes: "Clearly, one of the great tasks of civilization is to create cultural mechanisms that protect us from the moment-to-moment failures of our ethical intuitions." Another gap in our moral reasoning is that we suffer from poor intuition on risk analysis. We tend to gravitate toward the power to "save" some and shy away from the complementary choice of "losing" the rest, even when these amount to the same thing and the only difference is in the linguistic expression of the glass being half full or half empty. Harris explains: "Another way of stating this is that people tend to overvalue certainty: finding the certainty of saving life inordinately attractive and the certainty of losing life inordinately painful."
He doesn't believe in free will and thinks we talk up the idea of it more than we actually experience any sensation of freely willing. First of all, from a neurological point of view, unconscious brain activity associated with an action has been documented to precede our awareness of having chosen to do anything. It's as if the car starts before we turn the key. The sense of having "chosen" may be an illusion. Secondly, although we have the freedom to emphasize certain facts, we cannot, given a body of available evidence, freely choose to believe something that contradicts the evidence. He concludes that scientific and ethical judgments have something in common; beliefs about facts and beliefs about values are not very different from a neurological perspective.
The book's shortcoming lies in his attempt to straddle the fence on the question of how much we should listen to our moral intuition. For example, he says that the fact that his moral theory fits our commonsense definition is a reason in favor of accepting his theory: "While moral realism and consequentialism have both come under pressure in philosophical circles, they have the virtue of corresponding to many of our intuitions about how the world works." This does not quite dovetail with his list, mentioned above, of all the ways in which our moral intuitions tend to let us down due to rational and emotional flaws in our brains. If we're so fallible about the details of calculating consequentialist outcomes, why should we assume that consequentialism itself is a correct theory? Who cares what seems right to a species that is so often wrong?
A specific application of this problem occurs when he takes anthropologist Scott Atran to task for claiming that the religious motives articulated by violent Muslim jihadis may not be what really causes them to become violent; they may be radicalized by a lack of social integration, for example. Harris seems outraged as he insists that "given the clarity with which they articulate their core beliefs, there is no mystery whatsoever as to why certain people behave as they do." This is an odd comment, coming from him; isn't one of the main arguments of his book that people frequently are mistaken about their own motives? As mentioned above, Harris did say that we should create "cultural mechanisms" that would, in essence, trick us into behaving better. There seems to be a trace of unexamined bias here. Why should well-meaning atheists benefit from paternalistic cultural guidance designed to draw out their authentic inner goodness, while militant Muslims are assumed to have no psychological depth beyond what they announce publicly and are, in that respect, unreformable?
A related criticism is that he's unclear about the specific roles of reason and emotion in our moral judgments. Mentioning the work of philosopher Jonathan Haidt, who has argued that moral decisions are usually made on closed-minded gut instinct rather than on open-minded careful reasoning, Harris insists that just because we're not always rational doesn't mean we shouldn't set rationality as a goal. But wasn't his mention of the MPFC's role in injecting emotion into moral decisions meant to imply that emotion is an inseparable part of moral decision-making? "Of course," he says, "it is now well known that our feeling of reasoning objectively is often illusory. This does not mean, however, that we cannot learn to reason more effectively, pay greater attention to evidence, and grow more mindful of the ever-present possibility of error." One wonders whether he would be willing to apply this charge of an "illusory feeling of reasoning" to his own "intuition about how the world works" as mentioned above.
Skepticism of any kind is a slippery slope to radical skepticism. Perhaps we should be radical skeptics about many of our moral intuitions, but Harris doesn't let us know exactly when and why he jumps off the skeptic's bandwagon. Alternatively, rather than a problem of skepticism, it could be a problem of subjectivity that confronts all scientists, and perhaps especially those scientists who study the human mind. How can a person use his or her own subjectivity to study other subjective beings and call it an objective process?
A similar conflation of reason and emotion, and of the metacognition that chooses between them in a given circumstance, also plagued Jonah Lehrer's recent book on the neuroscience of decision-making. In How We Decide, Lehrer claims that reason and emotion are interrelated phenomena, and he prods this blurry area without defining or successfully redefining these terms. Lehrer's book is not listed in Harris's prodigious bibliography. One wonders if Harris might have avoided making the same error had he studied Lehrer's presentation.
Overall, this is a provocative book worth reading, especially for those with prior exposure to these popular topics in philosophy and neuroscience. Harris sticks to the narrow path and, for the most part, avoids wading into the treacherous waters of bitter diatribes against religion. He effectively demolishes the claim, defended by most religions, that morality requires a metaphysical basis, not by making sport of religious traditions and ancient texts, but simply by offering a substantive, nuanced, functional, science-based alternative.