Friday, April 25, 2014

The Case For (and Against) Life After Death (5): Does Morality Require the Cosmic Justice of Heaven and Hell?

Does human morality necessarily presuppose the reality of Heaven and Hell?  That's the argument of Dinesh D'Souza, and that's the common assumption of those who believe in Heaven and Hell.  This is also implicit in the Kantian dualistic assumption that we live in "two worlds," as manifested in the is/ought dichotomy.

According to D'Souza, "the presupposition of an afterlife and the realization of the ideal of cosmic justice makes sense of our moral nature better than any competing hypothesis" (168).  He explains:
"Unlike material objects and all other living creatures, we humans inhabit two domains: the way things are, and the way things ought to be.  We are moral animals who recognize that just as there are natural laws that govern every object in the universe, there are also moral laws that govern the behavior of one special object in the universe, namely us.  While the universe is externally moved by 'facts,' we are internally moved also by 'values.'  Yet these values defy natural and scientific explanation because physical laws, as discovered by science, concern only the way things are and not the way they ought to be.  Moreover, the essence of morality is to curtail and contradict the powerful engine of human self-interest, giving morality an undeniable anti-evolutionary thrust.  So how do we explain the existence of moral values that stand athwart our animal nature?  The presupposition of cosmic justice, achieved not in this life but in another life beyond the grave, is by far the best and in some respects the only explanation.  The presupposition fully explains why humans continue to espouse goodness and justice even when the world is evil and unjust." (166-67)
There are at least four dubious claims here that I deny.  First, the claim that an ought cannot be derived from an is.  Second, that morality must contradict human self-interest.  Third, that evolutionary science cannot explain morality.  And, finally, that morality is impossible without the cosmic justice of heavenly rewards and hellish punishments in the afterlife.

HOW WE MOVE FROM IS TO OUGHT
In everything we do, we move from "is" to "ought" through some hypothetical imperative in which "ought" means a hypothetical relationship between desires and ends.  For example, "If you desire to be healthy, then you ought to eat nutritious food."  Or, "If you desire safe air travel, you ought to seek out airplanes that are engineered for flying without crashing."  Or, "If you desire the love of friends, you ought to cultivate personal relationships based on mutual respect and affection and shared interests."

Such hypothetical imperatives are based on two kinds of objective facts.  First, human desires are objective facts.  We can empirically discover--through common experience or through scientific investigation--that human beings generally desire self-preservation, health, and friendship.  Second, the causal connection between behavior and result is an objective fact about the world.  We can empirically discover that through eating good food, flying on safe airplanes, and cultivating close personal relationships, we can achieve the ends that we desire.  For studying these objective facts, the natural sciences of medicine, engineering, and psychology can be instructive.  It is false, therefore, for D'Souza to say that science cannot tell us anything about the way things ought to be.

D'Souza might respond by saying that even if science can tell us about the ought of a hypothetical imperative, it cannot tell us about the ought of a moral imperative, which must be categorical rather than hypothetical.  But this would ignore the fact that if a categorical imperative is to have any motivating truth, it must become a hypothetical imperative.  So when Kant or some other moral philosopher tells us that we ought to do something, we can always ask, Why?  And ultimately the only final answer to that question of motivation is that obeying this ought is what we most desire to do if we are rational and sufficiently informed.

Even Kant implicitly concedes this.  In his Groundwork of the Metaphysics of Morals, he says that everyone desires to obey his categorical imperatives, because everyone--"even the most hardened scoundrel"--desires the "greater inner worth of his own person" [einen grosseren inneren Wert seiner Person] that comes only from obeying the moral law and thus becoming a "better person" (Ak 4.454).  In this way, Kant's categorical imperatives are reduced to a hypothetical imperative:  If you desire to be a better person with a sense of self-worth, then you ought to obey my categorical imperatives.  This, then, rests on two kinds of empirical claims--that human beings most desire personal self-worth and that obeying Kant's categorical imperatives will achieve that desired end.

On this point and others here, I am indebted to Richard Carrier, "Moral Facts Naturally Exist (and Science Could Find Them)," in John W. Loftus, ed., The End of Christianity (Prometheus Books, 2011), 333-64.

Similarly, D'Souza's categorical imperatives are reducible to a hypothetical imperative.  If you desire to attain the eternal rewards of Heaven and to avoid the eternal punishments of Hell, then you ought to obey D'Souza's categorical imperatives.  And, again, it becomes an empirical question as to whether this is what all human beings naturally desire and as to whether obeying those imperatives really will achieve those desired ends.

Some of my previous posts on hypothetical imperatives and the fallacy of the is/ought dichotomy can be found here, here, here, here, and here.

THE EVOLUTIONARY MORALITY OF SELF-INTEREST
D'Souza argues that evolutionary science cannot explain morality because morality requires selfless behavior that contradicts self-interest, and any evolutionary explanation of morality would have to reduce morality to self-interest.  Even when evolutionary psychologists try to explain the evolution of altruism, they assume that altruism is a form of extended or long-term selfishness.  For example, Robert Trivers' account of "reciprocal altruism" assumes that animals will be generous with others if they eventually get something beneficial in return.  But this is not really altruism if we understand altruism as utterly selfless benevolence.

But this ignores the fact that every choice to do one thing rather than another is a choice for one course of action as more desirable than another.  And thus all intentional action is motivated by self-interest or self-love.  We do what we do because we think it's better for us to do it.  Of course, we can often be mistaken about this.  What we most desire is not what we happen to desire at any moment, but what we would desire if we were rational and sufficiently informed.  All of this is open to scientific study because it's all a matter of empirical fact.  We can empirically study what human beings desire through psychological research into their proximate and ultimate motivations.  We can also empirically study the causal connections between behavior and outcomes.

These empirical studies--both through common-sense experience and through natural science--show that human self-interest does not dictate selfishness, because, as the naturally social animals that we are, we can see that the social virtues are necessary for our happiness.

D'Souza implicitly contradicts his claim that morality cannot be based on self-interest, because he assumes that a morality of cosmic justice in the afterlife appeals to our self-interest:  he assumes that our pursuit of happiness requires attaining the rewards of Heaven and avoiding the punishments of Hell.

Some posts on the problem of self-interest and altruism can be found here and here.

THE COSMIC JUSTICE OF HEAVEN AND HELL
If I am right that the good is the desirable, and that the natural good for human beings is determined by the natural desires of their evolved human nature, then the good is relative to the nature of the human species.  If we had evolved to be a radically different kind of species, with different natural desires, then our morality would have been different.  Thus, our morality is anthropological rather than cosmological.  There is no cosmic right or wrong beyond the right or wrong that we know as human beings.

D'Souza rejects this because he thinks that morality must necessarily be cosmic--there must be some cosmic right and wrong that is true independently of our human nature.  What's the source of that cosmic morality?  Apparently, it's God's command.  We know what is good only insofar as we know what God has commanded us to do, with the threat of eternal punishment in Hell for our disobedience.

It seems, therefore, that D'Souza embraces a divine command theory of morality--that the good is whatever God commands, and the bad is whatever he prohibits.  But D'Souza never explicitly defends this position or answers the objections to such a position.  How do we know that what God commands is always good if we don't already know what the good is for us?

And is eternal punishment in Hell really good?  How do people earn such punishment?  If we are all sinful, as Christianity teaches, do we all go to Hell?  Or do we avoid Hell by becoming Christians?  It's clear that many Christians have done some evil deeds.  And it's clear that some non-Christians have done some good deeds.  Does this influence their fate in the afterlife?  D'Souza is never very clear about this.  And he never responds to those Christians today who doubt the reality of eternal punishment in Hell, because they think a merciful God should allow all to enter Heaven.

D'Souza generally speaks favorably about all those people who believe in an afterlife.  But sometimes he suggests that some of those who have such a belief will be punished in Hell.  He indicates, for example, that "the Islamic suicide bombers are not being attended by beautiful virgins in paradise but rather by big hairy guys with tattoos" (187).  How does he know that?  Does he believe that some people who sincerely believe they are obeying God's commands are mistaken, and God will punish their mistake with eternal punishment?  Is D'Souza convinced that the intentional killing of innocent people must always be wrong?  If so, how does he explain those many places in the Bible where God commands killing innocent people?

Such questions point to the implausibility of any divine command theory of morality.

Some of my posts on the divine command theory can be found here, here, and here.

Tuesday, April 22, 2014

The Case For (and Against) Life After Death (4): Neuroscience, Consciousness, and Free Will

According to D'Souza, neuroscience shows that life after death is both possible and reasonable.
"Neuroscience reveals that the mind cannot be reduced to the brain, and that reductive materialism is a dead end.  The whole realm of subjective experience lies outside its domain, and outside the domain of objective science altogether.  Two features of the mind--specifically consciousness and free will--define the human soul.  These features seem to operate outside the laws of nature and therefore are not subject to the laws governing the mortality of the body.  The body dies, but the soul lives on." (Life After Death, 220)
There are three arguments here, which D'Souza elaborates in his book. The first is that while neuroscience can explain the objective physical reality of the brain (the structure and functioning of the brain's neural circuitry), it cannot explain the subjective mental reality of the mind (thoughts, emotions, decisions, and so on).  My brain states can be objectively observed by others.  But my mental states cannot, because they are private experiences that only I have.  Other people cannot have the direct access to my mental states that I have.  My brain states are located in space.  But my mental states are not.  My mental states can intentionally refer to things external to them--they are "about" something.  But my brain states don't refer to anything outside themselves.  I can be infallibly sure about my mental states: my thoughts might be mistaken, but I know they are my thoughts.  But I cannot be infallible about my brain states.  In all of these ways, brain states and mental states differ. Thus they cannot be identical, and mind cannot be reduced to brain.  Any attempt to reduce the mind to the brain is implausible, because it denies the self-evident subjective reality of mental experience. 

It is true that neuroscience can show that brain states and mental states are correlated, but that does not show that brain states cause mental states.  It could be that the brain receives or channels the mind, analogous to a radio receiving signals that are translated into sounds.

If this is so, then it's at least possible that mind could live on after the death of the brain.

The second argument is that consciousness has no physical or scientific explanation, because the subjective experience of self-awareness cannot be objectively observed.  Consciousness is an irreducible element of reality like matter and energy.  Even a scientific materialist like Steven Pinker must admit that "there is no scientific explanation of consciousness."

The third argument is that free will is another mysterious feature of mental experience that cannot be explained through natural laws of physical causality.  We all know that we have the power to make mental decisions and then execute those decisions through our brains and bodies.  Neuroscientists recognize this as neuroplasticity--that is, the mind can change the brain.  For example, people suffering from obsessive compulsive disorder (OCD) can be taught through cognitive therapy how to refocus their minds away from the compulsion and redirect their thoughts towards more desirable behavior.  When they succeed in doing this, neuroscientists can see that these people are changing the neurocircuitry of their brains.  This free will seems to be free from the causal determinism of the physical brain and body, and therefore it seems reasonable that the mental capacity for free will could live on after the death of the brain and body.

D'Souza rightly notes that some of the founders of modern neuroscience--such as Charles Sherrington, Wilder Penfield, and John Eccles--were dualists who were open to the possibility of the mind existing as a spiritual reality separated from the body.

I agree that mental experience, self-consciousness, and free will are all mysterious in their correlation with the brain and body, because exactly how the brain acts on the mind, or the mind on the brain, remains mysterious.

We can explain the emergent evolution of the mind or soul by saying that with the increasing size and complexity of the primate brain--and particularly of the higher parts in the prefrontal cortex that support conscious deliberation and choice--the human mind emerged once the brain passed over a critical threshold.  This still leaves us with a mystery, because we do not yet know exactly how the brain creates a mind that can then act on the brain itself.

The radical dualism of Kant and D'Souza--the claim that mind belongs to a transcendent world of spirit beyond the natural world of bodies--does not resolve this mystery.  Rather, it tries to overcome one mystery with an even deeper mystery--the mystery of how a transcendent world interacts with our natural world.  D'Souza even admits that if we adopt his dualism, "we still won't be able to fully understand how minds interact with bodies" (127).

And even if we recognize the mystery in this interaction of minds and bodies, it does not follow logically from this mystery that we must believe that minds without any interaction with living bodies can live forever in Heaven or Hell.

Monday, April 21, 2014

The Case For (and Against) Life After Death (3): Kantian Dualism

As I have often argued on this blog, there is a fundamental opposition between Darwinian naturalism and Kantian dualism. 

When Darwin turns to the moral sense in The Descent of Man (in chapter 4), he indicates "that of all the differences between man and the lower animals, the moral sense or conscience is by far the most important" (Penguin edition, 120).  He then quotes from Immanuel Kant: "Duty!  Wondrous thought, that workest neither by fond insinuation, flattery, nor by any threat, but merely by holding up thy naked law in the soul, and so extorting for thyself always reverence, if not always obedience; before whom all appetites are dumb, however secretly they rebel; whence thy original?"  Darwin then writes: "This great question has been discussed by many writers of consummate ability; and my sole excuse for touching on it, is the impossibility of here passing it over; and because, as far as I know, no one has approached it exclusively from the side of natural history."

Darwin's quotation of Kant is from his Critique of Practical Reason (AA, p. 86).  Immediately after the quoted sentence, Kant says that moral duty shows us "man as belonging to two worlds"--a phenomenal world of natural causes and a noumenal world of human freedom.  Apparently, Darwin does not accept this Kantian dualism, because he proposes to explain moral duty "exclusively from the side of natural history," and thus he implicitly rejects Kant's claim that human morality belongs to a transcendental world beyond the natural world.

In her review of Darwin's Descent, Frances Cobbe complained that Darwin's denial of Kantian dualism and of the cosmic transcendence of human morality would promote moral nihilism.  She also worried that this would deny life after death.  If we were to carefully study people who are dying, she argued (in "The Peak in Darien"), we could see that some of them give us a fleeting glimpse of the transition from this world to the next world.

Like Cobbe, Dinesh D'Souza adopts Kantian dualism as the ground for his defense of life after death.  He is explicit about this both in chapter 9 of Life After Death: The Evidence and chapter 15 of What's So Great About Christianity (Regnery, 2007). 

D'Souza uses Kantian dualism to refute empirical realism.  "Empirical realism is based on a premise that many people would consider obvious: there is a real world out there, and we come to know it objectively through our senses and through scientific testing and observation.  This is sometimes called the correspondence theory of truth, because it presumes a correspondence between the real world and our sensory and intellectual apprehension of that world" (Life, 148).  He needs to refute this empirical realism so that he can argue that the empirical world--the world that we know by natural experience and reasoning--is not the only world, because there is a supernatural or transcendental world that is beyond the limits of reason.

He begins by asking what he takes to be the fundamental question for modern Western philosophy: "How do we know that the representations of reality that we have in our minds correspond to reality itself?" (Life, 149).  His Kantian answer is that we don't know this.  We have no way to prove that our subjective mental experiences correspond to the objective material world.

The fallacy of empirical realism, D'Souza contends, is the failure to see that there is a distinction between experience and reality, because the world as we experience it does not always correspond to the world as it really is.  George Berkeley was right: "The only things we perceive are our perceptions."  Our apprehension of the world depends upon our perceptual apparatus--our five senses and the cognitive system of our brains--which filters our experience.  Whatever cannot be captured by this perceptual and cognitive system cannot be known to us. 

So, for example, we know that dogs, bats, and bees have perceptual capacities beyond ours, and thus we cannot perceive what they perceive.  All animals are limited in what they can perceive by their sensory and cognitive apparatus.

From this, Kant inferred that we live in two worlds--the world as it appears to us (the phenomenon) and the world as it really is in itself (the noumenon).  Our reason is limited in that it knows the phenomenal world but not the noumenal world.  Kant argued for this limit on reason as a way to create room for faith, and this is what D'Souza finds so attractive: Kantian dualism supports religious faith in a transcendent reality that is beyond empirical realism.  If "human reason can never grasp reality itself" (Christianity, 173), as D'Souza says, then human reason cannot judge religious belief in the reality of a transcendent, supernatural world.  "We learn from Kant that within the domain of experience, human reason is sovereign, but it is in no way unreasonable to believe things on faith that simply cannot be adjudicated by reason" (D'Souza, "What Atheists Kant Refute").

In response to Daniel Dennett's claim that many people have refuted Kant, D'Souza answered: "In fact, there are no refutations" (Christianity, 174).  So Kant can't be refuted?  Is it irrefutable that "human reason can never grasp reality itself," that we "see things in a limited and distorted way," and that our "minds have a built-in disposition toward illusion"?

On the contrary, far from being irrefutable, Kant refutes himself.

Consider the following remark by D'Souza: "There are things in themselves--what Kant called the noumenon--and of them we can know nothing.  What we can know is our experience of those things, what Kant called the phenomenon" (Christianity, 171).  How do Kant and D'Souza know this?  If "we can know nothing" of things in themselves, then how do Kant and D'Souza know that there are things in themselves, and that these things in themselves are different from our experience of those things?  If "human reason can never grasp reality itself," then how can the human reason of Kant and D'Souza grasp the reality of the distinction between the noumenal and phenomenal worlds?  Isn't their argument self-refuting?

Kant and D'Souza are sophistical in assuming that by refuting a naïve realism they have refuted empirical realism.  It is naïve to believe that what we know by experience and reason always corresponds exactly and fully to reality.  Of course, our experience and reasoning are fallible in their grasp of reality.  But from that it does not follow that we can never have any grasp of reality in itself.  We can correct the mistakes of our experience and reasoning to strive for an approximate correspondence to reality.  So, for example, we can discover the limits of our sensory apparatus, and we can see that other animals have sensory capacities that we do not have.  We can use our cognitive capacities to infer how the world looks to dogs, bats, and bees.  We can also infer the existence of subatomic particles that are not directly accessible to our senses.  This is what science does.

Moreover, we can see that having evolved for life on earth, we are naturally adapted in our sensory and cognitive capacities for gathering information about our world and responding to it in adaptive ways.  If our mental models of the world had no correspondence to that world, and if we were unable to correct those models to make them correspond at least approximately to that world, we could not have survived and reproduced.

Through our experience and reasoning, and with the assistance of science, we need to probe ever deeper into the inexhaustible depths of the natural world, so that as we reach new levels of reality, we see new mysteries that raise new questions.  There is no need to assume, as Kant and D'Souza insist, that this wonderful world of nature is an illusion that hides the real world that can only be reached by denying reason and experience.  We were not thrown into this natural world from some other world far away.  This natural world is our home because we are naturally adapted to live in it and investigate its wonders.

Sunday, April 20, 2014

The Case For (and Against) Life After Death: Near Death Experiences

Do near-death experiences "provide strong support for life after death," as Dinesh D'Souza claims (64)?

That they do is the theme of "Heaven Is For Real," a new movie just released over the Easter season, based upon a book of the same title by Todd Burpo and Lynn Vincent.  The book is a best-seller, and the movie is attracting a lot of attention, particularly from Christians and others who see it as confirmation for their belief in Heaven.  The movie is worth seeing even if you're a skeptic.  The little kid who plays Colton Burpo is amazing.

But why isn't this movie entitled "Heaven and Hell Are For Real"?  Why don't the people who have had near-death experiences report going to Hell?  Is the message here that Hell is not for real?

Todd Burpo is a minister at the Crossroads Wesleyan Church in Imperial, Nebraska.  Some years ago, his 4-year-old son--Colton--underwent an emergency appendectomy in which he almost died.  Some months afterward, he casually reported that during the operation he had visited Heaven.  Over time, he gradually offered more details.  He sat in Jesus's lap.  Jesus wears a white robe.  Jesus has a multicolored horse.  Colton met John the Baptist and the Holy Spirit, and they are nice people.  There are lots of angels with wings.  All of the human beings there are young, so those who died in old age revert to around age 30.

Pastor Burpo struggled over whether he should believe this.  But when Colton described meeting Pastor Burpo's grandfather, he was convinced.  His wife Sonja resisted, but even she was convinced when Colton reported seeing in Heaven the sister that had died in his mother's tummy, although Colton had never been told by his parents about this miscarriage.

Burpo's book was published over 7 years after Colton began telling his stories, and apparently it took many years for all the stories to come out.

Although it is not reported in the movie, the book relates that Colton saw the future battle of Armageddon in which Jesus and the good people will defeat Satan and the bad people in a bloody conflict.  Colton saw his father helping to kill the bad people with either a sword or a bow and arrow.

So does this prove life after death in Heaven, or was this a hallucination induced by medical trauma?

If Colton had heard nothing about his mother's miscarriage, but discovered his sister in Heaven, that would be impressive.  And yet, is it possible that he overheard his parents speaking about the miscarriage, or otherwise figured this out?  In the movie, we see that Sonja has kept some baby clothes that she bought before the miscarriage.  Is it possible that Colton understood that she was grieving for a lost child?  We are left wondering.

We also wonder about the coherence of Colton's story.  He reports that human beings in Heaven are all young adults, even those who died in old age.  But he also reports that the miscarried foetus of his sister now lives in Heaven as a young child--so she has grown up, but only to the age of a young child, and apparently she will not grow older.

Can't we explain Colton's description of Heaven as the imaginative construction of a child's mind that has been shaped by growing up in the Christian household of a Wesleyan minister?  If he had been growing up as a Hindu child in India, wouldn't he have told a different story?

We also notice that Todd and Sonja had become so poor that they could not pay their debts.  And so we wonder whether the prospect of writing a best-selling book might have motivated them--even if subconsciously--to embellish the story to make it engaging for readers and thus profitable.

These are the kinds of questions that come up in considering such reports of near-death journeys to the afterlife.

Raymond Moody, a young medical student, coined the phrase "near-death experience" (NDE) in his book Life After Life, which was first published in 1975, and which has sold millions of copies around the world.  He told many stories of people who were resuscitated from death and then reported that they had left their bodies.  They flew upward to the ceilings of their hospital rooms and looked down at their own bodies being worked on by doctors and nurses.  These patients often described moving  through a dark tunnel towards a brilliant light, and then passing over a threshold into a transcendent realm of peace and bliss that seemed heavenly.  Many of them reported seeing God or Jesus or other divine and angelic figures.

Moody and others fascinated by this apparent evidence of a transcendent world of life after death founded the International Association for Near-Death Studies, which publishes a peer-reviewed journal for scientific research in this area--the Journal of Near-Death Studies.   One of the best surveys of this research is The Handbook of Near-Death Experiences: Thirty Years of Investigation, edited by Janice Miner Holden, Bruce Greyson, and Debbie James (Santa Barbara, CA: Praeger Publishers, 2009).

Most of the researchers in this area are committed to showing that near-death experiences are evidence for a world beyond this world, for a transcendent life after death, and thus confirm supernaturalism and refute scientific materialism, because this shows that the mind or soul lives on after the death of the brain and the body.  Some of the researchers doubt this, however, because they see near-death experiences as hallucinations of the brain under life-threatening stress.  The arguments of these skeptics are well stated in Keith Augustine's "Hallucinatory Near-Death Experiences" (2008), which is available online.

If we're looking for empirical evidence that NDEs show human minds operating totally independently of the body, and thus that minds can survive the death of the body, then we have to be interested in what Janice Miner Holden (in "Veridical Perception in Near-Death Experiences," in The Handbook, 185-211) calls "apparently nonphysical veridical NDE perception (AVP)":  "In AVP, NDErs report veridical perception that, considering the positions and/or conditions of their physical bodies during the near-death episodes, apparently could not have been the result of normal sensory processes or logical inference--nor, therefore, brain mediation--either before, during, or after these episodes.  Thus, AVP suggests the ability of consciousness to function independent of the physical body" (186).

One of the most commonly cited cases of AVP is the story of an NDEr named Maria, which is related this way by D'Souza:
"Another remarkable case involved a Seattle woman who reported a near death experience following a heart attack.  She told social worker Kimberly Clark that she had separated from her body and not only risen to the ceiling but floated outside the hospital altogether.  Clark did not believe her, but a small detail the woman mentioned caught her attention.  The woman said that she had been distracted by the presence of a shoe on the third floor ledge at the north end of the emergency room building.  it was a tennis shoe with a worn patch and a lace stuck under the heel.  The woman asked Clark to go find the shoe.  Clark found this ridiculous because she knew that the woman had been brought into the emergency room at night, when she could not possibly see what was outside the building, let alone on a third-floor ledge.  Somewhat reluctantly, Clark agreed to check, and it was only after trying several different rooms, looking out several windows, and finally climbing out onto the ledge that she was able to find and retrieve the shoe" (63-64).

Many of the leading NDE researchers have relied on this case as demonstrative evidence for how a mind can float separated from a body.  For example, Kenneth Ring and Madelaine Lawrence have said: "Assuming the authenticity of the account, which we have no reason to doubt, the facts of the case seem incontestable.  Maria's inexplicable detection of the inexplicable shoe is a strange and strangely beguiling sighting of the sort that has the power to arrest the skeptic's argument in mid-sentence, if only by virtue of its indisputable improbability" ("Further Evidence for Veridical Perception During Near-Death Experiences," Journal of Near-Death Studies, 11 [Summer 1993]: 223).

But then, in 1996, Hayden Ebbern, Sean Mulligan, and Barry Beyerstein reported that Clark's story of Maria was inaccurate.  Their article ("Maria's Near-Death Experience: Waiting for the Other Shoe to Drop," The Skeptical Inquirer,  20 [July/August 1996]: 27-33) is available online.

Maria's NDE occurred in 1977 at Seattle's Harborview Medical Center.  It was reported by Clark in 1984.  In 1994, Ebbern and Mulligan visited the hospital to survey the site and interview Clark.  They discovered that Maria had disappeared.  To test the story of the shoe, they placed a running shoe at the place indicated by Clark.  When they went outside the hospital, they could easily see the shoe.  They also discovered that the shoe was easily seen from inside the hospital room.  Since the shoe was easily visible both outside and inside the hospital, Maria could have seen the shoe, or she could have overheard people talking about this strange shoe on the ledge.  Clearly, Clark had embellished the story to make it look like an astonishing confirmation of AVP.

When D'Souza tells the story of Maria, he's completely silent about this debunking of Clark's report.  If you look at the video of D'Souza's debate with Dan Barker, you'll see that Barker points this out, and D'Souza has nothing to say in response.

We might wonder whether researchers have found better evidence for AVP that stands up to scrutiny.  In her survey of the research on AVP, Holden indicates that the most conclusive proof for AVP could come from field studies in hospitals, where researchers could plant visual targets in hospital rooms so that no one could see what's on the target unless they were floating around the ceiling.  So if an NDEr could report seeing what's on the target, that would show that a disembodied soul can see without any activity of the brain to support vision. 

Holden reports that there have been only five studies that satisfy the difficult conditions for such research.  "The bottom line of findings from these five studies," she concludes, "is quite disappointing: No researcher has succeeded in capturing even one case of AVP" (209).

She quotes a remark from Kenneth Ring about how discouraging this is:
"There is so much anecdotal evidence that suggests [experiencers] can, at least sometime, perceive veridcally during their NDEs . . . but isn't it true that in all this time, there hasn't been a single case of a veridical perception reported by an NDEr under controlled conditions?  I mean, thirty years later, it's still a null class (as far as I know).  Yes, excuses, excuses--I know.  But, really, wouldn't you have suspected more than a few such cases at least by now?" (210)

D'Souza is silent about this failure of the most committed researchers to find demonstrative evidence that near-death experiences show how human minds can perceive reality accurately without any support from the body or the brain.

Saturday, April 19, 2014

The Case For (and Against) Life After Death

I have written a series of posts on the evolution of Heaven and Hell (in April and May of 2010) and on the various forms of immortality (in October and November of 2013).  Although I have been generally skeptical about life after death, I recognize that there are good arguments for believing in such a possibility. 

The best statement of those arguments that I have seen is Dinesh D'Souza's Life After Death: The Evidence (Regnery Publishing, 2009).  What is most interesting for me is that D'Souza claims to rely primarily on purely rational scientific and philosophic thinking that does not depend on religious faith.  This is the first of a series of posts on D'Souza's arguments. 

In my responses to D'Souza, I have been influenced by Victor Stenger's book chapter "Life After Death: Examining the Evidence," in The End of Christianity, edited by John W. Loftus (Prometheus Books, 2011), 305-32, which is available online.  There is a series of YouTube videos of a debate between D'Souza and Dan Barker, which lays out some of the issues.

According to Stephen Cave, there are four possible ways that we might achieve immortality.  The Staying Alive Narrative says that we could become immortal if we could find a way to postpone death indefinitely.  The Resurrection Narrative says that even if death is unavoidable, we might be brought back to life.  The Soul Narrative says that even if our bodies must die, our souls can live forever because they are immaterial and thus not subject to bodily decay, and our souls are the most essential part of us.  Finally, the Legacy Narrative says that we can live on after death through those who live after us--either because they remember us or because they carry our genes.

D'Souza says nothing about the Staying Alive Narrative and quickly dismisses the Legacy Narrative (3-4).  So he's left with the Soul Narrative and the Resurrection Narrative.  He recognizes that the Soul Narrative came from Plato and was adopted by Christian theologians like Augustine, while the Resurrection Narrative was introduced into Christianity by Paul in the New Testament.  D'Souza never clarifies the relationship between these two forms of immortality. 

According to the orthodox Christian tradition defended by someone like Thomas Aquinas, our souls are separated from our bodies at death, but then they must be reunited with our bodies at the Second Coming of Christ and the Last Judgment, when the saved will go to Heaven for eternal reward, and the damned will go to Hell for eternal punishment.

Furthermore, according to Aquinas, this resurrected body must be a real living body. And since all living bodies are ageing bodies, the resurrected bodies must have a specific age. Since Jesus rose again at about age 30, that age must be the perfect age for the body, and so, Aquinas reasons, when human beings are resurrected, they will all have bodies of the same age--30 years old. Those who died as children will be moved up to age 30, and those who died in old age will be moved back to age 30 (ST, suppl., q. 81, a. 1).

But then we must wonder, when people wish for immortality, is this what they're wishing for--to be frozen eternally at one moment in time?  Shouldn't we say that this kind of immortality would be death?

This points to the problem of personal identity.  If my wish for immortality is a wish that I as a unique individual with a unique personal identity should live forever, then I want to be sure that whatever lives forever is really me and not just a copy of me.  As the embodied person that I am, my experience of myself combines my ageing body and my self-conscious mind as inseparable.  So it's not clear to me that an immaterial soul would really be me.  It's also not clear to me that a resurrected body would really be me if that body was frozen at the age of 30. 

D'Souza never faces up to this problem.  For example, he speaks about the Buddhist conception of immortality in which we must realize that "our individual souls are identical with the oneness of ultimate reality."  He explains: "Part of our enlightenment is to recognize that the very concept of 'I' is illusory; in reality, there is no man behind the curtain.  The term nirvana literally means 'blowing out,' and in case you're wondering, you are the one who must be blown out, like a candle" (51).  Is this what people wish for when they wish for immortality--to have their personal identity blown out?  If one's identity is blown out, isn't that death?

There's a similar problem with bodily  resurrection.  D'Souza indicates that our resurrected bodies will have to be very different from the earthly bodies that we have now, because our resurrected bodies will have to be eternally imperishable and ageless.  But if my resurrected body is so different from the body that I have known in my life, will this be my body?  Or will it be only a copy of my body?  Here, again, it seems that my personal identity as the real embodied mind that I am has been blown out.

As I have indicated in my blog post on Wallace Stevens's poem "Sunday Morning," we should consider the possibility that living forever is not desirable, because living timelessly and changelessly would not be really living, and that in living the lives that we have, "death is the mother of beauty."

Immortality only sounds good until you really think about it.

Monday, April 14, 2014

Does Pinker Show the Bias of a Pro-Western Imperialist, Capitalist, Elitist, and Anti-Communist Ideology?

Edward S. Herman and David Peterson have written one of the most elaborate critiques of Steven Pinker's Better Angels of Our Nature.  It's available online as an ebook--Reality Denial: Steven Pinker's Apologetics for Western-Imperial Violence (2012, 144 pages).  A short excerpt from their book has been published as a book review in the International Socialist Review (November-December, 2012).

As Rousseauean leftists, Herman and Peterson believe that our nomadic hunter-gatherer ancestors in the state of nature lived happily as peaceful egalitarians, but that this happy life was lost with the establishment of a sedentary life based on farming that eventually allowed for the sociopolitical complexity of bureaucratic states that brought all of the evils of modern life: "class structures, divisions of labor and social status, concentrations of wealth and poverty, and hierarchies of power and subordination, including religious and military power structures--all of the sins still very much with us in the modern world" (72).  They must consequently scorn Pinker as a classical liberal ideologue who wants to see a progressive history of declining violence and increasing liberty that began with the transition out of a Hobbesian state of nature among hunter-gatherers and that has culminated in the modern liberal peace.

As one manifestation of Pinker's ideological bias, Herman and Peterson point out that Pinker refuses to recognize that Western capitalist states wage imperial wars of conquest.  They quote Pinker as saying that not only do "democracies avoid disputes with each other," but that they "tend to stay out of disputes across the board," which is called the "Democratic Peace" (Pinker, 283).  They remark: "This will surely come as a surprise to the many victims of U.S. assassinations, sanctions, subversions, bombings and invasions since 1945.  For Pinker, no attack on a lesser power by one or more of the great democracies counts as a real war or confutes the 'Democratic Peace,' no matter how many people die" (Herman and Peterson, 9).

They also quote Pinker as saying: "Among respectable countries, conquest is no longer a thinkable option.  A politician in a democracy today who suggested conquering another country would be met not with counterarguments but with puzzlement, embarrassment, or laughter" (Pinker, 260).  They respond: "This is an extremely silly assertion.  Presumably, when George Bush and Tony Blair sent U.S. and British forces to attack Iraq in 2003, ousted its government, and replaced it with one operating under laws drafted by the Coalition Provisional Authority, this did not count as 'conquest,' as these leaders never stated that they launched the war to 'conquer' Iraq" (Herman and Peterson, 9).

Herman and Peterson don't indicate to their readers that Pinker's comments about the "Democratic Peace" are part of a summary of the research of Bruce Russett and John Oneal (Triangulating Peace: Democracy, Interdependence, and International Organizations [Norton, 2001]), who used the statistical technique of multiple logistic regression to analyze more than 2,300 militarized interstate disputes between 1816 and 2001, and who concluded "not that democracies never go to war . . ., but that they go to war less often than nondemocracies, all else being equal" (Pinker, 281).  Herman and Peterson don't point out any mistakes in the research of Russett and Oneal.   Indicating that democracies sometimes do fight wars does not refute the claim that they tend to go to war less often.

Pinker's comment that "conquest is no longer a thinkable option" comes in the context of his summary of Mark Zacher's research ("The Territorial Integrity Norm: International Boundaries and the Use of Force," International Organization 55 [2001]: 215-50).  Zacher has shown that since World War Two, there has been an international norm favoring the freezing of national borders.  As compared with previous centuries, the percentage of territorial wars that resulted in a redistribution of territory has dropped dramatically.  The recent international protest against Russia's acquisition of the Crimea is an illustration of this new international norm.  Herman and Peterson don't point out any mistakes in Zacher's research.  Instead, they cite the example of the invasion of Iraq by U.S. and British forces in 2003 as a conquest of that country.  But since there has been no change in the national borders of Iraq, it's not clear how this refutes Zacher's work.

Pinker argues that since 1945 there has been a "Long Peace"--the longest period in modern history in which the Great Powers have not fought a war with one another.  Herman and Peterson seem to agree with this, at least partially: "the First and especially the Second World War had taught them that with their advancing and life-threatening means of self-destruction, they could not go on playing their favorite game of mutual slaughter any longer.  But this didn't prevent them from carrying out numerous and deadly wars against the Third World, which filled-in the great-power war-gap nicely."  And, furthermore, the Long Peace is "increasingly threatened by a Western elite-instigated global class war and a permanent-war system" (92).  Herman and Peterson claim that Pinker ignores the "increasing structural violence of a global class war," in which capitalist nations have created a global economic system that allows them to exploit the poor nations (11, 62, 75).

According to Herman and Peterson, Pinker ignores the "structural violence" inherent in global capitalism because of his pro-capitalist and anti-communist bias.  An example of this is what he says about Mao Zedong's responsibility for the Great Famine during China's Great Leap Forward (1958-1961).  They quote Pinker as saying that "Mao masterminded . . . famine that killed between 20 million and 30 million people" (Pinker, 331).  For Pinker this shows the evil in communist ideology, because Mao's communism was responsible for the second worst atrocity in human history (second only to World War Two).  But Herman and Peterson insist that while Mao made a few mistakes, his communist policies were generally successful in improving the lives of the masses, and that life in China has become much worse under the influence of the capitalist reforms in China that began in 1979.

They write:
"China's death rate increased after 1979, with the surge of capitalist reforms and the associated sharp reduction in public medical services.  A recent review of China's past and current demographic trends showed that its rate of death was higher in 2010 than in 1982, and that the greatest declines in mortality occurred well prior to the reforms, with a national decline occurring even during the decade that included the famine (1953-1964)."
"So Pinker misrepresents the truths at a number of levels in dealing with the Chinese starvation episode.  He avoids the need to reconcile allegedly deliberate starvation deaths with a prior and continuous Chinese state policy of helping the masses by simply not discussing the subject.  He ignores the evidence that policy failure and ignorance rather than murderous intent was the source of those deaths.  He fails to mention the rise in mortality rates under the post-Mao new capitalist order." (60)

The reference here to a "recent review" of Chinese demographic trends is to an article by Xizhe Peng ("China's Demographic History and Future Challenges," Science 333 (29 July 2011): 581-87).  Herman and Peterson do not note Peng's warning that "there are widespread concerns in the scientific community regarding the quality of some of these population data" (581).  They are also silent about his statement that "the period 1959-1961 witnessed an exceptional demographic fluctuation mainly attributable to the great famine, with more than 20 million excess deaths" (581).

It is true, as they say, that Peng reports a slight increase in the death rate (per 1,000) from 6.6 in 1982 to 7.1 in 2010.  But what they don't say is that Peng reports that the death rate after 1979 was much less than in 1953 (14.0) or 1964 (11.6).  Furthermore, they are silent about Peng's reporting that life expectancy has been increasing and illiteracy has been declining since the capitalist reforms began in 1979.

Herman and Peterson quote from Jean Dreze and Amartya Sen (Hunger and Public Action [Oxford, 1989]) in explaining the Great Famine.  But they are silent about the judgment of Dreze and Sen that after 1979 "there is little doubt that the Chinese economy has surged ahead in response to market incentives, and the agricultural sector has really had--at long last--a proper 'leap forward'" (215).

Herman and Peterson are also silent about the growing evidence in recent years as to the brutality of the Great Famine and Mao's responsibility for it.  Based upon archival material in China that has only recently been opened to study, Frank Dikotter (in Mao's Great Famine: The History of China's Most Devastating Catastrophe, 1958-1962 [Walker Publishing, 2010]) concludes that at least 45 million people died unnecessarily between 1958 and 1962, and that "the widespread view that these deaths were the unintended consequence of half-baked and poorly executed economic programs" is wrong.  He explains:
"As the fresh evidence presented in this book demonstrates, coercion, terror and systematic violence were the foundation of the Great Leap Forward.  Thanks to the often meticulous reports compiled by the party itself, we can infer that between 1958 and 1962 by a rough approximation 6 to 8 per cent of the victims were tortured to death or summarily killed--amounting to at least 2.5 million people.  Other victims were deliberately deprived of food and starved to death. . . . People were killed selectively because they were rich, because they dragged their feet, because they spoke out or simply because they were not liked" (xi).
Furthermore, Dikotter observes: "We know that Mao was the key architect of the Great Leap Forward, and thus bears the main responsibility for the catastrophe that followed.  He had to work hard to push through his vision, bargaining, cajoling, goading, occasionally tormenting or persecuting colleagues" (xiii).   He also concludes that "the catastrophe unleashed at the time stands as a reminder of how profoundly misplaced is the idea of state planning as an antidote to chaos" (xii).

This is a critical issue for Pinker's argument because his claim is that it's classical liberal thought that promotes declining violence, and that most of the atrocious violence of the 20th century was due to the illiberal regimes led by three individuals--Stalin, Hitler, and Mao.  Matthew White has calculated that the total death toll from communism in the 20th century is around 70 million, which would make the communist movement responsible for the greatest atrocity in human history (The Great Big Book of Horrible Things, Norton, 2012, pp. 453-57).

To make their case against Pinker, Herman and Peterson would have to demonstrate that this is not true.

Sunday, April 13, 2014

Pinker's List: A Distorted Record of Prehistoric War?

In his Nobel Peace Prize Acceptance Speech in 2009, President Barack Obama had to justify the awarding of the Nobel Peace Prize to a Commander in Chief who was leading his country in two major wars.  He argued that war is so deeply rooted in human nature and the human condition that it can never be completely abolished.  He declared: "War, in one form or another, appeared with the first man."  And yet, explaining how we can and should strive for peace, he quoted from President John Kennedy: "Let us focus on a more practical, more attainable peace, based not on a sudden revolution in human nature but on a gradual evolution in human institutions."  He then repeated that last phrase--"a gradual evolution of human institutions"--as the theme for his speech.  Without trying to change human nature, we can promote peace through institutional evolution--through culturally evolved norms of just war, human rights, global commerce, and international sanctions for punishing unjustified violence.  Obama thus summarized the argument of Steven Pinker that while war and violence express the "inner demons of our nature," we can move towards a life of peaceful coexistence as long as our cultural environment strengthens the "better angels of our nature."

Some of the critics of Pinker's argument think this is deeply mistaken because of its false claim that war has roots in human nature.  For example, in his book chapter--"Pinker's List: Exaggerating Prehistoric War Mortality"--R. Brian Ferguson challenges Pinker's evidence for prehistoric war that would support Obama's claim that "war, in one form or another, appeared with the first man."  (Ferguson's chapter appears in War, Peace, and Human Nature: The Convergence of Evolutionary and Cultural Views, edited by Douglas Fry [Oxford University Press, 2013].  A copy is available online.)

Ferguson concentrates his attention on Pinker's Figure 2-2 (page 49), which presents a list of societies showing the percentage of deaths in warfare in nonstate and state societies, classified into four groups: prehistoric archaeological sites, hunter-gatherers, hunter-horticulturalists and other tribal groups, and states.  The bar graphs show that the percentage of deaths in war is much higher for the first three groups than it is for states (ranging from ancient Mexico before 1500 CE to modern states from the 17th century to the present).

Ferguson claims that if one looks at the original sources for this data cited by Pinker, one discovers that Pinker's visual graph distorts the data to make it appear more supportive of his argument than it really is.  First, one should notice that among the 21 groups of prehistoric gravesites, the oldest archaeological site (Gobero, Niger, 14,000-6,200 BCE) has no war deaths at all.  And a couple of the prehistoric sites (Sarai Nahar Rai, India, 2140-850 BCE, and Nubia, 12,000-10,000 BCE) have only one violent death each.  If three skeletons are found at a site, and one of them shows evidence of violent death, then Pinker presents this as a bar graph showing 33% of deaths in war, which is much higher than that for modern states.  Surely, Ferguson suggests, one violent death at one gravesite hardly shows extensive warfare, but Pinker does not explain this to his reader.  Moreover, Ferguson notes, one set of 30 sites is from British Columbia, 3,500 BCE to 1674 CE.  Although he concedes this evidence for warfare, Ferguson indicates that these Indians along the Pacific Northwest Coast were "complex" hunter-gatherers--that is, hunter-gatherers who had settled into large villages with some hierarchical social structures, which was not characteristic of the nomadic hunter-gatherers who were our original ancestors.
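To make Ferguson's point about sample size concrete, here is a minimal sketch, in Python, of the arithmetic involved.  It is my own illustration, not Ferguson's or Pinker's; the site labels and skeleton counts below are hypothetical, apart from the one-violent-death-among-three-skeletons example mentioned above.

# Minimal sketch (my own, not Ferguson's or Pinker's): how a single violent
# death yields very different "percentage of deaths in warfare" figures
# depending on how many skeletons a site happens to preserve.
# The site labels and counts are hypothetical.

sites = [
    ("hypothetical small site", 3, 1),     # 1 violent death among 3 skeletons
    ("hypothetical larger site", 100, 1),  # the same single death among 100
    ("hypothetical peaceful site", 40, 0), # no violent deaths at all
]

for label, total, violent in sites:
    pct = 100.0 * violent / total
    print("{}: {}/{} violent deaths = {:.0f}%".format(label, violent, total, pct))

# Output: 33% for the three-skeleton site, 1% for the hundred-skeleton site,
# and 0% for the site with no violent deaths--which is why a bar showing 33%
# "deaths in warfare" may rest on nothing more than one skeleton.

The point of the sketch is simply that a percentage computed from a handful of skeletons is far more sensitive to a single violent death than a percentage computed from a large modern population, which is what makes the bar-graph comparison misleading if the sample sizes are not disclosed.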

Pinker presents bar graphs showing a range of 5% to 60% deaths in warfare for 8 hunter-gatherer societies.  But Ferguson points out that Pinker does not tell his reader that for two of these societies (the Ache of Paraguay and the Hiwi of Venezuela-Colombia), all of the war deaths were indigenous people killed by frontiersmen.

Pinker's bar graphs for 10 societies of hunter-horticulturalists and other tribal groups show a range of 15% to 60% deaths in warfare.  The 60% rate of death in war is the highest rate ever recorded by anthropologists, and it's for the Waorani of Eastern Ecuador.  When I was travelling through the Ecuadorian rainforest last summer, I heard about the Waorani and their reputation for violence.  One of my Quichua guides identified them as auca--"savages."

Ferguson concedes that the archaeological and anthropological evidence shows intense warfare among many complex hunter-gatherers and horticulturalists, but he argues that nomadic hunter-gatherers would not have shown this.  When one sees evidence of one or a few violent deaths among a group of nomadic hunter-gatherers, this should be identified as homicide not war.

Like Douglas Fry, Ferguson agrees that there has been lethal violence among nomadic hunter-gatherers, but this was personal violence rather than war.

In defense of Pinker, one could argue for Richard Wrangham's distinction between "simple" and "complex" war.  Like chimpanzees, nomadic hunter-gatherers do not fight pitched battles under the formal command of military leaders, because such "complex" warfare arises only in agrarian societies with military and political hierarchies.  Nomadic hunter-gatherers will kill members of outside groups only when the killers can surprise their outnumbered victims and then retreat after killing only a few individuals.  This raiding and feuding will not result in large numbers of battle deaths, and thus the archaeological record will not show any evidence of large numbers of violent deaths among nomadic hunter-gatherers.  Moreover, Pinker and Wrangham would predict that violent raiding and feuding among hunter-gatherers is infrequent, with long periods of peace, although the rate of killing is still comparable to that of American cities today.

Ferguson concludes: "We are not hard-wired for war.  We learn it" (126).

He does not indicate that Pinker and Wrangham actually agree with him about this.  They agree that war is not a biological necessity, although there are biological propensities to violence that can be triggered by the social environment.  They agree with Ferguson that the establishment of agrarian societies with bureaucratic states created "complex" warfare as a purely cultural invention.  And they agree that the cultural evolution of recent centuries can move us towards peace.

Pinker and Wrangham agree with Obama and Kennedy:  in the quest for peace, we need not a sudden revolution in human nature but a gradual evolution in human institutions.

Some of these points are developed in earlier posts here, here, here, here, and here.

ADDENDUM
Brian Ferguson has pointed out to me that I have made a mistake here in attributing to him the point about the Hiwi and the Ache, because Ferguson only deals with the archaeological data in Pinker's list.  Actually, the point about the Hiwi and Ache was made by Douglas Fry (17-18).

Friday, April 11, 2014

Does Steven Pinker Distort the Data for Declining Violence?

Steven Pinker's Better Angels of Our Nature has over 115 figures--an average of one for every 6 pages of text.  Many of these figures are visual presentations of data to support his argument for a historical trend towards declining violence from the Stone Age to the present.  These figures are based on data found in thousands of cited sources.  This is one of his most impressive rhetorical techniques for persuading his readers that his reasoning is based on a meticulous statistical analysis of data.

Most readers will not take the trouble to read the sources for each figure to see whether Pinker is being accurate in his presentation of the data.  But some of his critics have done this for some of the figures, and they accuse Pinker of manipulating the data to make it look more supportive of his argument than it really is.  Having looked into this myself, I think this is a fair criticism, although it's not fatal to his argument.  If Pinker had been totally honest about the gaps and uncertainties in the data, he could still have made a plausible argument for his conclusions.

Here I'll point to two examples: the table on page 195 that ranks the greatest atrocities in human history and Figure 2-2 on page 49 that shows the "percentage of deaths in warfare in nonstate and state societies."

Pinker identifies his table of the greatest atrocities as taken from Matthew White's list of "(Possibly) The Twenty (or so) Worst Things People Have Done to Each Other."  White identifies himself as an "atrocitologist" who for many years has maintained a website where he compiles records of the greatest atrocities in human history based on his estimates of violent deaths drawn from historical sources.  This work has been published as a book--The Great Big Book of Horrible Things: The Definitive Chronicle of History's 100 Worst Atrocities (Norton, 2012)--with a Foreword by Pinker.

In Pinker's table, he says that he's following White in ranking the 21 worst atrocities.  Number 1 is the Second World War with a death toll of 55 million.  Number 2 is Mao Zedong, who was responsible for a death toll of 40 million (mostly through a government-caused famine).  Number 3 is the Mongol Conquests of the 13th century with a death toll of 40 million.  Number 4 is the An Lushan Revolt in China in the 8th century with a death toll of 36 million.

This seems to confirm the common belief that the 20th century was the bloodiest in human history, especially when one notices that 5 of the top 21 atrocities were in the 20th century; and this would seem to refute Pinker's theory of a historical trend of declining violence.  In fact, White concludes his book by identifying the bloody events of the first half of the 20th century as the "Hemoclysm" (Greek for "blood flood"), which he sees as a series of interconnected events stretching from the First World War to the deaths of Hitler, Stalin, and Mao.  The collective death toll here would be 150 million, which, counted as a single event, would make it the Number 1 atrocity of human history.

If Pinker is to save his theory of declining violence, he must reinterpret White's account of the historical record of violence culminating in the Hemoclysm of the 20th century.  Pinker does this with three arguments.

His first argument is that we must adjust White's numbers to overcome the illusion that the 20th century was much bloodier than past centuries.  Pinker adjusts the absolute numbers of violent deaths, and he also asks us to look at the relative numbers, calculated as a proportion of the populations.  Once these adjustments are made, Pinker can conclude that "the worst atrocity of all time was the An Lushan Revolt and Civil War, an eight-year rebellion during China's Tang Dynasty that, according to censuses, resulted in the loss of two-thirds of the empire's population, a sixth of the world's population at the time" (194).  In an endnote to this sentence, Pinker writes: "An Lushan Revolt.  White notes that the figure is controversial.  Some historians attribute it to migration or the breakdown of the census; others treat it as credible, because subsistence farmers would have been highly vulnerable to a disruption of irrigation infrastructure" (707, n. 13).

A reader who notices this endnote might become curious about what White has said about these controversial calculations concerning the An Lushan Rebellion.  A reader who looks at White's book will notice that he revises the estimates of violent deaths--moving from 36 million to 26 million to a final estimate of 13 million.  With the lower estimate, the An Lushan Rebellion ranks Number 13 on the list of atrocities, not Number 4 as Pinker has it, because Pinker accepts the highest estimate of 36 million (White 88-93, 529).

Historians know that the Chinese census recorded a population of 52,880,488 in the year 754, and then after ten years of civil war, the census of 764 recorded a population of 16,900,000.  This would suggest that 36 million people died in the war, which would be two-thirds of the entire population of China.  Pinker accepts these numbers, which allows him to rank the An Lushan Revolt as Number 4 on the list of atrocities.

But White indicates that most historians doubt the accuracy of these numbers, because they suspect that the chaos created by the war had impeded the ability of the Chinese census takers to find every taxpayer.  He cites five historians who commented on the census numbers.  He reports that two of them express "major doubt" about the census numbers, one expresses "slight doubt," one expresses "apparent acceptance," and one expresses "acceptance."  But a reader who checks these sources will see that the doubt is even greater than is reported by White.  The historian whom White identifies as expressing "slight doubt"--Peter Stearns--actually says that the population census of 16,900,000 was "certainly too low," which surely shows "major doubt."  And the historian whom White identifies as expressing "acceptance"--Peter Turchin--actually says there is "a certain degree of controversy among the experts" about the numbers, which surely indicates "slight doubt."

Not only does Pinker depart from White in accepting the 36 million estimate of violent deaths, Pinker also insists that death tolls should be adjusted as a proportion of the populations, because this allows us to judge the relative risk of being killed at different points in history.  The 55 million deaths in World War Two are more than the 36 million in the An Lushan Revolt, but the world population in the middle of the 20th century was much larger than it was in the 8th century.  So if 36 million violent deaths were a sixth of the world's population in the 8th century, this would be the equivalent of 429 million violent deaths in the middle of the 20th century, which would raise the An Lushan Revolt to Number 1 on the list of atrocities; and World War Two would drop to Number 9 on the list.  White does not adjust the rankings in this way.
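Pinker's proportional adjustment is simple arithmetic, and a short Python sketch makes it concrete.  The world population figures below are rough assumptions for illustration, not numbers taken from Pinker's or White's tables, which is why the result comes out a little below Pinker's own figure of 429 million.

def adjusted_toll(deaths, world_pop_then, world_pop_reference):
    # Re-express a death toll as if the same fraction of humanity had died
    # when the world population equaled world_pop_reference.
    return deaths * (world_pop_reference / world_pop_then)

an_lushan_deaths = 36e6           # Pinker's accepted estimate
world_pop_8th_century = 6 * 36e6  # implied by "a sixth of the world's population"
world_pop_mid_20th = 2.5e9        # rough mid-20th-century figure (an assumption)

equivalent = adjusted_toll(an_lushan_deaths, world_pop_8th_century, world_pop_mid_20th)
print(f"{equivalent / 1e6:.0f} million")  # about 417 million with these assumptions

Whatever the exact population estimates, it is this scaling that moves the An Lushan Revolt to the top of the adjusted list and drops World War Two to Number 9.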

Pinker's second argument for why the Hemoclysm of the 20th century does not refute his theory of declining violence is that the causes of war can be so contingent that we can have something like World War Two erupt by chance without altering the otherwise declining trend of violence.  We can thus see World War Two as "an isolated peak in a declining sawtooth--the last gasp in a long slide of major war into historical obsolescence" (192).  If wars start and stop at random, then the accidents of history and the peculiarities of particular individuals can result in cataclysmic spasms of violence (200-222). 

In 1999, there was a lot of discussion about who should be considered the Most Important Person of the 20th Century.  White's answer was Gavrilo Princip.  And who was he?  He was the 19-year-old Serbian terrorist who assassinated Archduke Franz Ferdinand of Austria-Hungary.  This was a lucky accident for Princip.  If the archduke's driver had not made a wrong turn in Sarajevo, this would not have happened, and it's likely that World War One would not have happened, and this would not have set off the series of events leading to Lenin, Stalin, Mao, Hitler, World War Two, and the Cold War (White, 344-58; Pinker, 207-10, 262-63).

In the 80-year-long Hemoclysm sparked by Princip's bullets, three individuals--Stalin, Hitler, and Mao--were responsible for most of the violent deaths.  The communist regimes were responsible for 70 million deaths, which would justify ranking communism as the Number 1 atrocity--even greater than World War Two--except that it's hard to think of the whole communist movement as one event (White, 453-57).  Notice that most of the violence of the 20th century was caused by illiberal ideologies--Nazism and communism.

This supports Pinker's third argument for why the violence of the 20th century does not deny his theory of history.  The historical trend towards decreasing violence and increasing liberty depends on the spreading influence of classical liberal culture based on the principle that violence is never justified except in defense against violence.  That illiberal regimes have been the primary sources of violence in the 20th century confirms Pinker's argument. 

Because of the contingency of history, we can never be sure that illiberal leaders will not arise and cause great disasters.  Some day, we might see another Stalin, or Mao, or Pol Pot.  And that's why Pinker is clear in stating that there is no inevitability in the historical trend towards declining violence, because it could be reversed by illiberal turns (xxi, 361-77, 480).  But insofar as classical liberal ideas and norms spread around the world, they can increase the odds in favor of declining violence, which is what has happened since World War Two.

In my next post, I'll turn to Figure 2-2.

My first long series of posts on Pinker's Better Angels was written from October, 2011, to January, 2012.


Friday, April 04, 2014

Leo Strauss and Liberal Democracy: Grant Havers's Response

Grant Havers has sent me the following response to my blog post on "The Attack on Leo Strauss from the Paleoconservative Historicists":


While reading Professor Arnhart’s bracing review of my book on Strauss, I kept recalling the oft-quoted words of John Adams: “Facts are stubborn things.” The gist of Arnhart’s critique of my book seems to be that my historicist argument that Straussianism is essentially wrong to teach that there is a universal human desire for liberal democracy (regardless of faith, history, or culture) is unpersuasive because my thesis does not fit into the theory of evolution that Arnhart has popularly applied to the history and content of political philosophy. In brief, I am apparently wrong to dismiss this ideology of democratic universalism that Straussians usually teach because I set up a false dichotomy between nature and history. My insistence that bourgeois Protestant Christianity is a necessary precondition for successful constitutional self-government apparently flies in the face of liberal regimes that are not rooted in this faith tradition. Arnhart asks the rhetorical question: “Isn’t this historical evidence for the universal appeal of liberalism, suggesting that liberalism really does conform to a universal human nature?” Strauss’s teaching that liberal democracy is the best regime for all human beings must be correct, then, according to Arnhart, because both nature and history support it.

        I find it curious that Arnhart does not give much attention to the reasons why I make this historicist argument. Most of my book develops the argument that Strauss and his many students erroneously tried to locate the true origins of liberal democracy in Greek political philosophy, particularly Plato and Aristotle. This reading of ancient political thought is crucial to the Straussian assumption, which Arnhart shares, that human beings by nature seek the best political regime. In the process, they can argue that democracy is the universally best regime for humanity. Yet I show in my book that this central assumption is false because the ancient Greek concept of democracy never allowed for certain virtues that are crucial to successful self-government: these include Christian virtues such as charity (love thy neighbor, and even one’s enemy, as one loves God) as well as humility and mercy. The best evidence for the unnatural status of these virtues is that the greatest Greek philosophers did not even account for them, based on their own philosophies of nature. This fact was sometimes recognized by Strauss himself, who, as I show in the last chapter of my book, rigorously distinguished the moral teachings of “Athens,” or Greek political philosophy, from “Jerusalem,” biblical revelation. Until the Christian Era, which includes early modernity, the assumption that charity is a necessary precondition for a peaceful, stable, and humane government was absent in the works of political philosophers who followed Plato and Aristotle. The ancient Greek tolerance of slavery, infanticide, and natural hierarchy held no place for the ethic of caritas, a fact that was well-known to social contract theorists such as Hobbes, Spinoza, and Locke. (It is also central to Hegel’s philosophy of history.) Only in a specific historical period, as opposed to nature, then, do we find evidence of a regime that rises above nature to embody, however imperfectly, an ethic of charity.

        I recount all these facts because of Arnhart’s Darwinian-Straussian thesis that it is false to set up a dichotomy between nature and history. Apparently, Christian charity is just as natural as the desire for liberal democracy, according to my reviewer. To anyone who is familiar with the biblical tradition, the first assumption is shocking in its naiveté. If it is natural for human beings to love each other, why does our sinful nature so regularly conflict with this natural moral desire? And, why did the social contractarians cited above similarly insist that there is neither charity nor government in the “state of nature,” the natural order of humanity, if charity is so closely aligned with our instincts? (I must confess to a strong Protestant bias here about the fallenness of humanity as well as the sheer difficulty that human beings have in practicing charity on a consistent basis.)

        Arnhart, of course, will have none of this.  His first specific objection to my argument is that I too harshly limit the universal morality of Christianity to the particular foundation of liberal Protestantism, even though I also inconsistently claim that Christian charity has great influence beyond this modern foundation.  How can a morality be universal without being historically universal as well?  My answer to this objection is that Arnhart is confusing moral universalism with historical universalism.  It is one thing to claim that all human beings ought to be charitable.  It is quite another to assert that all traditions in history have practiced or even understood charity.  Arnhart confuses the “ought” with the “is” here because his adherence to the theory of evolution forces him into this theoretical cul-de-sac: if charity is not historically universal (that is, natural), then it cannot be naturally intelligible to all human beings.  Evolutionism, then, is inadequate in trying to explain how charity emerged naturally.  (Arnhart presumably does not agree with his fellow evolutionist Richard Dawkins that Christian charity is so unnatural that only “suckers” would practice this ethic.)  Arnhart can always fall back on his view that human nature and human history are equally influential, but that tactic is just question-begging.  Which is most influential?

Additionally, how does evolution, in Arnhart’s own words, explain how “some historical traditions show a better grasp of human nature than do other historical traditions”? How indeed would evolution explain the fact that Athens, one of the founding traditions of the West, lacked a concept of charity if the latter ethic is natural? Why did it take so long for this ethic to be applied to politics, culminating in the creation of modern self-government, if it is all so natural and universal? How can we avoid the fact that Jerusalem, not Athens, makes possible the historical rise of charity as it is applied to politics?

        Arnhart’s response to all this is that “many different religious and philosophical traditions have discovered the Golden Rule (charity) as a reasonable inference from natural human experience” and cites C. S. Lewis’s famous argument in defence of natural law as the “Tao” that all human beings understand by nature. This assertion, to say the least, requires evidence. Although all religions teach some concept of moral obligation, Christianity is unique in teaching that love and obligation extend to one’s enemy, a teaching that is consistent with the Christian emphasis on mercy and humility. Arnhart would have to show how the pagan texts of antiquity, including those of Plato and Aristotle, contain these virtues. (Arnhart’s fellow Straussian Harry Jaffa, in his Thomism and Aristotelianism [1952], brought out this distinction between Jerusalem and Athens with great insight.) In Plato’s famous dialogue on love, The Symposium, the reader will look in vain for any expression of love akin to charity. Confucianism, which at a superficial level teaches moral obligation towards other human beings, generally restricts this sense of duty to one’s family. (Christ’s famous condemnation of family-based love in Luke 14:26 would be shocking to a Confucian.) Since charity teaches the love of both God and humanity, any religion or philosophy that dualistically opposes one to the other (either love God or humanity) is incompatible with Christian morality.

        I am not, of course, claiming that all Christians in history have adhered to this demanding ethic with perfect consistency.  Arnhart is quite right to point out that abolitionists and slave-owners in the decades leading up to the American Civil War profoundly disagreed as to whether Scripture, including the Christian teaching on charity, opposed slavery or not.  How, then, asks Arnhart, can I appeal to the Bible for guidance or claim that the Golden Rule was the foundation of Abraham Lincoln’s opposition to slavery when Americans on both sides of the Mason-Dixon line claimed to be good Christians?  My answer, which I develop in detail in Lincoln and the Politics of Christian Love (2009), is that the 16th president never doubted that a true application of Christian charity was incompatible with slavery.  Since no slave-owner would ever choose to be a slave, he could not justifiably enslave another human being.  Yet Southern slave-owners sinfully and wilfully denied this moral truth even as they falsely projected onto the Bible a violently uncharitable rationale for slavery.  Lincoln knew all too well that human beings were naturally inclined to enslave each other and to reject the “self-evident” nature of human equality.  For this reason, he appealed to that most unnatural, yet humane, expression of morality: charity.

        Arnhart nevertheless claims to have the facts on his side when he confidently recounts the “historical trend towards the spread of liberalism” around the world since the Enlightenment.  (Ironically, this progressivist argument would not have sat well with Strauss, who absolutely opposed any appeal to the “march of progress” as a justification for a prudent politics.)  He goes on to claim that many of these “liberal regimes are clearly not rooted in the historical tradition of liberal Protestant culture” such as Japan, Malaysia, and South Korea.  These examples, however, do not exactly confirm his thesis since all three of these nations had some significant exposure to English or American ideals due to the influence of occupation, colonization, or war.  What is shocking in this discussion is Arnhart’s deafening silence on the failure of liberal democracy to take root in most nations of the Middle East.  If liberal democracy is so natural, then why has the Arab Spring become the Arab Winter?  Could the grim oscillation between theocracy and dictatorship have anything to do with a distinct religious and historical tradition?  And recent wars for democracy in Iraq, Afghanistan, and Libya have not exactly confirmed Arnhart’s optimistic view that all the peoples of the world are itching for constitutional government and the rule of law.  But then again, history is full of inconvenient truths that do not fit well into ideological boxes.

Thursday, April 03, 2014

Incest Avoidance and Incest Taboos as Two Aspects of Human Nature: Arthur Wolf

In October of 2006, I wrote a post entitled "So What's Wrong with Incest?"  Amazingly, that post has continued to receive two to three dozen pageviews every week for the past seven and a half years!

That might just confirm that the only thing more interesting than sex is tabooed sex.  But I hope that it also shows an interest in incest avoidance and incest taboos as human universals that show the complex evolutionary interaction of human nature and human culture.

My thinking about this has been shaped largely by Edward Westermarck's Darwinian explanation of incest and the incest taboo and by Arthur Wolf's defense of Westermarck's theory based on the research that he and others have conducted over the past 60 years.  As I have argued in various posts, I have come to see the Westermarck theory as a model of how Darwinian science can explain human morality and culture in a manner that confirms the moral and political philosophy of the Scottish Enlightenment, particularly as expressed in the work of David Hume and Adam Smith.

For me, this began in 1998 when I lectured at a conference in Helsinki, Finland, on Westermarck's moral philosophy and his theory of incest avoidance and incest taboos.  Westermarck was Finnish, and the Westermarckian tradition was being carried on by some sociologists and anthropologists at the University of Helsinki.

There were many prominent people at the conference (Frans de Waal, for example) and some young people who would later become prominent (like Debra Lieberman).  But, clearly, Arthur Wolf (a professor of anthropology at Stanford University) was the center of attention, because of his recently published book Sexual Attraction and Childhood Association: A Chinese Brief for Edward Westermarck (Stanford University Press, 1995).  I had first met Wolf when I attended some of his lectures for the Program in Human Biology at Stanford in 1988-1989.  Later, after the Helsinki conference, I contributed to another conference organized by Wolf that led to the publication of Inbreeding, Incest, and the Incest Taboo (Stanford University Press, 2004), which contains my most extensive statement on this subject--"The Incest Taboo as Darwinian Natural Right."

Now Stanford University Press has just published a new book by Wolf--Incest Avoidance and the Incest Taboos: Two Aspects of Human Nature.  When I read the manuscript for this book last year for the Press, I saw it as a brilliant statement of the Westermarck/Wolf position.  My blurb for this book comes from my report for the publisher: "Arthur Wolf has done it again.  Wolf's evolutionary explanation of incest avoidance and the incest taboo is one of the greatest achievements in the social sciences over the past half century."

As the title suggests, Wolf distinguishes the question about incest avoidance from the question about the incest taboos.  The first question is: Why is it that most people avoid sexual relations with their close kin?  The second question is: Why is it that most people disapprove of other people having sexual relations with their close kin, a disapproval expressed as an incest taboo?

Wolf identifies two opposed groups in the debates over these questions.  The "constitutionalists" ground their explanations in human nature.  The "conventionalists" ground their explanations in human culture.  Wolf identifies himself as a constitutionalist, because as the subtitle of his new book indicates, he sees incest avoidance and the incest taboos as "two aspects of human nature."  And yet he thinks the constitutionalists have largely evaded the second question, and his rectification of that mistake is his major advance in this new book.

Wolf explains: "Constitutionalists always begin with the first question and commonly ignore the second--they assume that people disapprove of anyone doing something they would dislike doing.  Conventionalists, in contrast, always begin with the second question and usually ignore the first--they assume that people avoid doing what custom disapproves of their doing" (1).

Edward O. Wilson is an example of a constitutionalist who has often used the Westermarck/Wolf Darwinian theory of incest avoidance as one of the best developed examples of a sociobiological explanation of human social behavior and morality.  But as Wolf sees it, Wilson assumes that in answering the first question (about incest avoidance), he has answered the second question (about incest taboos).  This fails to recognize, according to Wolf, that a Darwinian explanation of incest avoidance as a behavior that human beings share with other mammals and primates is insufficient to provide a Darwinian explanation of incest taboos as culturally constructed moral norms that are unique to human beings.

In the first part of his new book, Wolf restates the evidence and argumentation for the Westermarckian explanation of incest avoidance (5-66).  In the second part, he lays out his explanation of incest taboos (66-133).

Referring to John Searle's "social ontology," Wolf concedes to the conventionalists that they have a good point when they argue that through language and symbolism, human beings can create social institutions and institutional norms that have little or no physical reality: Searle's favorite examples are money, marriage, and markets (70-72).  To explain incest taboos, we must explain why human beings agree to create the prohibition of incest as a moral norm.  We must explain not just why most of us avoid incest, but why most of us agree to the social norm that we ought to avoid incest.

What Wolf says here about the uniqueness of human social intentionality is similar to what Michael Tomasello has said about "shared intentionality" as a capacity that sets human beings apart--even as young children--from other primates.  This has been the subject of a previous post.

To explain the uniqueness of human incest taboos as social constructions, Wolf argues that a taboo is something to be avoided because a society regards it as dangerous.  Society prohibits incest because it is perceived to be unnatural or abnormal sex that elicits fear and foreboding.  Thus, the incest taboo arises from two traits of human nature--the fear of events perceived as unnatural or abnormal and the desire to conform to one's group.  These create an incest taboo as a collective intentionality to avoid behavior perceived as dangerous to the community (77-118).

This taboo becomes a moral norm not because there is some transcendent normativity or deontic force inherent in such a social rule, but because, as Westermarck argued, retributive emotions become moral emotions when they show "generality, apparent disinterestedness, and a flavor of impartiality" (119).  Emotional disapproval constitutes a moral rule when the emotional disapproval is expressed in such a general way as to conform to what an "impartial spectator" would condemn--thus following Adam Smith's account of the moral sentiments.

Wolf recognizes, however, that Westermarck's Darwinian explanation of the incest taboo will be scorned by those who fear what looks like Darwin's assault on human dignity.  In the 18th century, Francis Hutcheson explained the incest taboo as expressing an "innate moral sense" instilled in human beings by God.  Bernard Mandeville responded by denying this and arguing that the incest taboo was merely a matter of custom with no natural basis.  In this debate, Mandeville's conventionalism seemed to be an assault on the moral dignity of human beings.  But then when Westermarck developed his Darwinian account of the incest taboo as a cultural expression of naturally evolved dispositions that did not need to be attributed to the Creator, his naturalist or constitutionalist explanation seemed to be degrading in denying human moral transcendence.  At this point, the culturalists or conventionalists seemed to be the defenders of human dignity, because they offered a kind of creation story in which human beings create themselves through culture as transcending mere animal nature (25-27, 30, 33).  This explains the moralistic vehemence of those who reject a Darwinian science of human nature as a repugnant form of reductionism and materialism.

Wolf concludes his new book with a helpful summary of the 12 steps in his reasoning (134-35):
"1. Inbreeding is dangerous, raising the excess death-plus-major-defect rate by 20 to 40 percent in the case of primary relatives.
"2. The selection pressure generated by the dangers of inbreeding has so shaped primate sexuality that early association inhibits sexual relations.
"3. Because children are normally reared by their parents and siblings with one another, nuclear family incest is rare except as child abuse.
"4. Being an exceptionally emotional species, human beings are startled by rare events and see them as predicting misfortune.
"5. Nuclear family incest is a rare event and is therefore startling.
"6. There is always a consensus condemning incest because most people interpret incest as threatening, and the few who do not accept the majority view because they want to belong.
"7. Human institutions are created when people agree that something exists and agree to assign it a purpose.
"8. The incest taboos are the creation of a consensus condemning incest and have the purpose of forestalling the dangers it threatens.
"9. The incest taboos have a moral quality because they are the products of a general reaction that does not appear to serve any selfish interest. 
"10. The nuclear family is universal and everywhere existed before the creation of larger kinship groups like the clan.
"11. When larger kingship groups like the clan were created, they were modeled on the nuclear family and therefore included incest taboos as constitutional features.
"12. The scope of the extended incest taboos varies because the composition of the groups modeled on the family varies.
"Taken together, these twelve statements account for most if not all of what is known about incest avoidance and the incest taboos.  They add up to a constitutionalist solution to the incest problem, because they are claims about human nature or rest on assumptions about human nature."
My post on incest in 2006 includes links to most of my subsequent posts on this.


ADDENDUM


Arthur Wolf, 1932-2015

Arthur Wolf died on May 2, 2015, at the age of 83.  I will always remember him from my year at Stanford (1988-1989), when I attended his lectures in the Program for Human Biology, and from our meetings in Finland in 1998, and later at a gathering of scholars on incest avoidance at Stanford that he organized.

As indicated in this post, I regard his defense of the Westermarck theory of the incest taboo as one of the classic research projects in the social sciences that will be studied far into the future.

I remember him for his generosity in encouraging some of my research on the implications of Westermarck's thinking for moral and political philosophy.  I am grateful for the opportunity to have known him.