There's a good deal in the blogosphere currently on the mutual relevance (or lack thereof) between scientific and ethical deliberation. Some of this pertains to Sam Harris' talk at the TED conference, in which he offered a short precis of his upcoming book on how to ground morality in empiricism. This talk drew a lot of comment, especially from Sean Carroll at Discover Blogs; Harris responded to Carroll and then Carroll re-responded. I limit myself to links just to these two, but there are a lot of posts out there--you'll find them if you follow your nose--on either side of the question of whether we can surmount the is/ought distinction. Harris thinks this is a shibboleth we'd do well to have done with; Carroll thinks that it ain't going anywhere anytime soon. And that pretty much lays out the terms of the debate.
Well, yes, it's a little subtler than that, but I can't help but feel we're dealing not with arguments alone, but with powerful motives for arguing. The sides have not been chosen on the basis of merits; the merits have been picked out and presented on the basis of a choice.
I have some opinions on the issues Harris discusses, and I'll go on record that I find him far more engaging on this than in his anti-religion jeremiads; but for now I want to focus on a more specific instance: not whether "science can answer moral questions," but what science says about how we answer them already.
According to research recently published in the Proceedings of the National Academy of Sciences, a strong burst of magnetism to the right part of the brain can make us regress, at least temporarily, a number of Kohlbergian stages.
O.K., that's really my editorializing gloss on it. "Regress" is a loaded term, and the abstracts and reviews of the research I have read do not actually refer to Kohlberg. The magnetism is not really essential--it just happens to be the method the researchers used to put the brain's right temporoparietal junction, or TPJ (where the parietal and temporal lobes meet), out of order for a short while. They targeted this spot because they hypothesize that the TPJ is an important region of the brain for understanding other people's motivations. But the experiment did demonstrate that without a normally functioning TPJ, the subjects tended to make moral evaluations in a decidedly younger style.
Given a basic narrative premise--say, a woman stirring something into her friend's coffee--which could then go in more than one direction ([a], the woman thinks it's sugar, but winds up poisoning her friend; or [b], she thinks it's poison, but winds up doing nothing more than sweetening the coffee with sugar), subjects evaluated the ethics of the situation. The test subjects, ranging in age from 18 to 30, were asked to rate how excusable they considered the woman's behavior, from 1 ("not at all") to 7 ("completely"). Under ordinary circumstances, most adults tend to view scenario [a] as an accident with no moral ramifications, and [b] as a grave situation in which the woman is blameworthy despite nothing unfortunate having actually happened. However, following transcranial magnetic stimulation (TMS) disrupting activity in the right TPJ, test subjects tended to evaluate the stories somewhat differently. They still considered unsuccessful murder more serious than accidental killing; but "subjects were significantly more forgiving of attempted murder when their right temporoparietal junctions were knocked out by TMS than when they were functioning normally." That is, they seemed to think that the results of the scenario mattered more; whether or not anyone got hurt loomed larger in their considerations, regardless of the motives of the actors.
Liane Young, of M.I.T., one of the paper's authors, notes that the TMS-affected subjects were exhibiting a style of moral evaluation more often seen in three- or four-year-olds. As the National Public Radio story on the research mentioned,
Studies show that at this age, children will usually say a child who breaks five teacups accidentally is naughtier than a child who breaks one teacup on purpose, [Young] says. That's probably because their brains are still developing the ability to understand the intentions of other people.

In other words, young children tend to consider consequences, which they can understand, more than the motives which they are not yet equipped to grasp. I thought of this recently when commenting on Love of All Wisdom, where Amod has a post up about consequentialism, in connection with telling oneself lies. Can one, he asks, make a moral case for holding a false belief on the grounds that believing it offers a pragmatic advantage? This question arises, for instance, in conjunction with the issue of depressive realism which I mentioned before: how should we evaluate the will-to-accuracy that science (for instance) exemplifies, if one of the fruits of that accuracy is the conclusion that accuracy is best served by pessimism, whereas optimism serves one's likelihood to live and live well?
Back, however, to Young's research. Seeking comment, NPR went on to ask Joshua Greene of Harvard University, who offered his own take. NPR reported:
"Moral judgment is just a brain process," [Greene] says. "That's precisely why it's possible for these researchers to influence it using electromagnetic pulses on the surface of the brain."

The new study is really part of a much larger effort by scientists to explain how the brain creates moral judgments, Greene says. The scientists are trying to take concepts such as morality, which philosophers once attributed to the human soul, and "break it down in mechanical terms."

If something as complex as morality has a mechanical explanation, Greene says, it will be hard to argue that people have, or need, a soul.

This last remark is the sort of thing for which the phrase "non sequitur" was invented. With all due caveats about getting one's science from journalists, I can't help but reflect that "a mechanical explanation" of morality is pretty far from anything even remotely suggested by the study in question. What it strongly suggests, of course, is that the normal function of this region of the brain is part of the usual way human beings discern and evaluate other people's motivations. This is very similar to Greene's own research, which is very suggestive about which physical systems in the brain are involved in moral evaluation, and perhaps about why some moral questions are more difficult to resolve than others. In the case, for instance, of the well-known "crying baby scenario," in which you are offered the hypothetical choice of either smothering the eponymous baby, or failing to and attracting the attention of a murderous death squad upon a whole roomful of hiding refugees, Greene sees both

an emotional impulse to think it's wrong to smother the baby, as well as a utilitarian impulse to weigh the number of deaths with each possible outcome. Moreover, different parts of the brain are at work in the emotional and utilitarian case.

Assuming that this could be demonstrated in some watertight way, what exactly would have been demonstrated? Well, that when we make moral evaluations, we use our brains, and not always the same part of our brains. Indeed, no amount of scientific casuistry could ever come within spitting distance of telling you whether to smother the baby. It can only say what is happening while you consider the question.
Greene seems to sense that this denouement is a trifle bathetic. But then, it is not really the how of the brain's functioning that interests neurophilosophers like himself, he says in this paper; it's the fact of it itself:
What we really want, I think, is to see the mind's clockwork, "as clear and complete as those see-through exhibitions at auto shows." ...the promise of useful applications is not what fascinates us. Our fascination is existential. We are hooked on the idea of understanding ourselves in transparently mechanical terms. But a strange feature of this impulse to see the mind's clockwork is that, so far as this impulse is concerned, the clockwork's details are almost irrelevant. We don't care how it works, exactly. We just want to see it in action.

But then again, it isn't just the fact itself; it's also a certain sense of what that fact means:
Officially, we scientists already know (or think we know) that dualism is false and that we are simply complex biological machines. But insofar as we know this, we know this in a thin, intellectual way. We haven't seen the absence of the soul. Rather, we have inferred its absence, based on the available evidence and our background assumptions about what makes one scientific theory better than another. But to truly, deeply believe that we are machines, we must see the clockwork in action. We've all heard that the soul is dead. Now we want to see the body.

This is an admirably candid declaration. It is worth bearing in mind that it is precisely a programme that is being described here, and not a set of conclusions. No amount of research could ever demonstrate the absolute reduction of persons to being "simply complex biological machines," and Greene does not here aspire to demonstrating it. It is not a proper object of attempted demonstration; it is a motive.
I think it's a motive that Sam Harris shares, and it bears underscoring that it's not got much to do with evidence per se.
I say nothing here about the rightness or misguidedness of this motive. What I am fairly sure of is that (1) my brain is doing a lot of work when I evaluate it, and (2) if my right temporoparietal junction were knocked out by a magnetic field, a drug, or a tire iron, this would have no bearing on how I ought to evaluate it--that is, on whether the metaphysical picture it depends upon is true.
Let me make this observation. The problem of the baby to-be-or-not-to-be smothered may well be an insolvable problem. When we have mapped exactly the human processes that try to solve this problem we may find them equivalent or equivocal. That doesn't mean there is no value in the science that brought you to that conclusion. We run across such problems in science and math all the time. Harris' point is that morality is the ONLY time we then turn around and say the whole agenda has been a failure. But what may be interesting is asking why one chooses to accept the rules of math but not a proposed rational set of rules for morality as Harris proposes. Perhaps we will find that some reject the notion because they have deeply committed themselves to an intellectual position while others are deeply committed to a religious position. If we then find that unanswerable questions do not invalidate the rules of science but in similar situations rules of morality are rejected, might we not assume this rejection is spurious to the issue at hand and rooted elsewhere?
The agenda to clarify what thought processes are going into what decisions and judgements promises to free us from confusion about what we are really arguing about. It would be nice to know if this argument is being made because of the merits of the topic or for some prejudice held. It is my suspicion that the special relativism offered to morality (on the one hand as a liberal dogma and on the other hand as a number of religious dogmas--relative here because they only relate to their own sects) only exists because a rational moral system would lead us to conclusions unacceptable by the belief systems we have.
Personally I don't know what that has to do with souls or any other belief.
d~~
Yes, the do-you-smother-the-baby question, and other ethical dilemmas, could be moral analogues of Gödel sentences. I think Harris' perspective might be able to countenance this move. But he'd have to let go of his too-rigid distinction between "no-answers-in-practice" and "-in-principle," at least as I am reading him (he may nuance this more in his forthcoming book), because there really are undecidables in post-Gödel logic.
Greene believes that this has a great deal to do with "souls" because he thinks that moral insight has historically been the privileged domain of the [idea of] the soul; once we can show all the pulleys and gears whirring in any given moment of moral deliberation, whatever residual soul-concept there is will have finally evaporated into superfluity. This evaporation is clearly a motive for him in and of itself. Harris I am sure would like to see this phase transition as well; but I don't assume that this in itself is what motivates his contention that science can legitimately opine on moral questions.