Mental Illness, Medicine, Malingering and Morality: The Myth of Mental Illness vs The Myth of Free Will

Thomas Szasz, Psychiatry: The Science of Lies, New York: Syracuse University Press, 2008.

The notion that psychiatric conditions, including schizophrenia, ADHD, depression, alcoholism and gambling addiction, are all illnesses ‘just like any other disease’ (i.e. just like smallpox, malaria or the flu) is obvious nonsense. 

If indeed these conditions are to be called ‘diseases’, which, of course, depends on how we define ‘disease’, they are clearly diseases very much unlike the infections of pathogens with which we usually associate the word ‘disease’. 

For this reason, I had long meant to read the work of Thomas Szasz, a psychiatrist whose famous (or perhaps infamous) paper, The Myth of Mental Illness (Szasz 1960), and book of the same title, questioned the concept of mental illness and, in the process, rocked the very foundations of psychiatry when first published in the 1960s. I was, moreover, as the preceding two paragraphs would suggest, in principle open, even sympathetic, to what I understood to be its central thesis. 

Eventually, I got around to reading instead Psychiatry: The Science of Lies, a more recent, and hence, I not unreasonably imagined, more up-to-date, work of Szasz’s on the same topic.[1]

I found that Szasz does indeed marshal many powerful arguments against what is sometimes called the ‘disease model’ of mental health. 

Unfortunately, however, the paradigm with which he proposes to replace this model, namely a moralistic one based on the notion of ‘malingering’ and the concept of free will, is even more problematic, and less scientific, than the disease model that he proposes to do away with.  

Physiological Basis of Illness 

For Szasz, mental illness is simply a metaphor that has come to be taken altogether too literally. 

“Mental illness is a metaphorical disease; that, in other words, bodily illness stands in the same relation to mental illness as a defective television stands to an objectionable television programme. To be sure, the word ‘sick’ is often used metaphorically… but only when we call minds ‘sick’ do we systematically mistake metaphor for fact; and send a doctor to ‘cure’ the ‘illness’. It’s as if a television viewer were to send for a TV repairman because he disapproves of the programme he is watching” (Myth of Mental Illness: p11). 

But what is a disease? What we habitually refer to as diseases are actually quite diverse in aetiology. 

Perhaps the paradigmatic disease is an infection. Thus, modern medicine began with, and much of modern medicine is still based on, the so-called ‘Germ theory of disease’, which assumes that what we refer to as disease is caused by the effects of germs or ‘pathogens’ – i.e. microscopic parasites (e.g. bacteria, viruses), which inhabit and pass between human and animal hosts, causing the symptoms by which disease is diagnosed as part of their own life-cycle and evolutionary strategy.[2]

However, this model seemingly has little to offer psychiatry. 

Perhaps some mental illnesses are indeed caused by infections. 

Indeed, physicist-turned-anthropologist Gregory Cochran even controversially contends that homosexuality (which is no longer considered by psychiatrists to be a mental illness, despite its obviously biologically maladaptive effects – see below) may be caused by a virus. 

However, this is surely not true of the vast majority of what we term ‘mental illnesses’. 

However, not all physical diseases are caused by pathogens either. 

For example, developmental disorders and inherited conditions are also sometimes referred to as diseases, but these are caused by genes rather than germs. 

Likewise, cancer is sometimes referred to as a disease, and, while some cancers are indeed sometimes caused by an infection (for example, cervical cancer is usually caused by HPV, a sexually transmitted virus), many are not. 

What then do all these examples of ‘disease’ have in common and how, according to Szasz, do so-called mental illnesses differ from conventional, bodily ailments? 

For Szasz, the key distinguishing factor is an identified underlying physiological cause for, or at least correlate of, the symptoms observed. Thus, Szasz writes: 

“The traditional medical criterion for distinguishing the genuine from the facsimile – that is, real illness from malingering – was the presence of demonstrable change in bodily structure as revealed by means of clinical examination of the patient, laboratory tests on bodily fluids, or post-mortem study of the cadaver” (Myth of Mental Illness: p27). 

Thus, in all cases of what Szasz regards as ‘real’ disease, a real physiological correlate of some sort has been discovered, whether a microbe, a gene or a cancerous growth. 

In contrast, so-called mental illnesses were first identified, and named, purely on the basis of their symptomology, without any understanding of their underlying physiological cause. 

Of course, many diseases, including physical diseases, are, in practice, diagnosed by the symptoms they produce. A GP, for example, will typically diagnose flu without actually observing and identifying the flu virus itself inside the patient under a microscope. 

However, the existence of the virus, and its causal role in producing the symptoms observed, has indeed been demonstrated scientifically in other individuals afflicted with the same or similar symptoms. We therefore recognise the underlying cause of these symptoms (i.e. the virus) independently from the symptoms they produce. 

This is not true, however, for mental illnesses. The latter were named, identified and diagnosed long before there was any understanding of their underlying physiological basis. 

Rather than diseases, we might then more accurately call them syndromes, a word deriving from the Greek ‘σύνδρομον’, meaning ‘concurrence’, which is usually employed in medicine to refer simply to a cluster of signs and symptoms that seem to correlate together, whether or not the underlying cause is or is not understood.[3]

Causes and Correlates 

The main problem for Szasz’s position is that our understanding of the underlying physiological causes of psychiatric conditions – neurological, genetic and hormonal – has progressed enormously since he first authored The Myth of Mental Illness, the paper and the book, at the beginning of the 1960s. 

Yet reading ‘Psychiatry: The Science of Lies’, published in 2008, it seems that Szasz’s own position has advanced but little.[4]

Yet psychiatry, and psychology, have come a long way in the intervening half-century. 

Thus, in 1960, American psychiatry was still largely dominated by Freudian psychoanalysis, a pseudoscience roughly on a par with phrenology, of which Szasz is rightly dismissive.[5]

Of particular relevance to Szasz’s thesis, the study of the underlying physiological basis for psychiatric disorders has also progressed massively.  

Every month, in a wide array of scientific journals, studies are published identifying neurological, genetic, hormonal and other physiological correlates for psychiatric conditions. 

In contrast, Szasz, although he never spells this out, seems to subscribe to an implicit Cartesian dualism, whereby human emotions, psychological states and behaviour are a priori assumed, in principle, to be irreducible to mere physiological processes.[6]

Szasz claims in Psychiatry: The Science of Lies that, once an underlying neurological basis for a mental illness has been identified, it ceases to be classified as a mental illness, and is instead classed as a neurological disorder. His paradigmatic example of this is Alzheimer’s disease (p2).[7]

Yet, today, the neurological correlates of many mental illnesses are increasingly understood. 

Nevertheless, despite the progress that has been made in identifying physiological correlates for mental disorders, there remain at least two differences between these correlates (neurological, genetic, hormonal etc.) and the recognised causes of physical and neurological diseases. 

First, in the case of mental illnesses, the neurological, genetic, hormonal and other physiological correlates remain just that, i.e. mere correlates. 

Here, I am not merely reiterating the familiar caution that correlation does not imply causation, but also emphasizing that the correlations in question tend to be far from perfect, and do not form the basis for a diagnosis, even in principle. 

In other words, as a rule, few such identified correlates are present in every single person diagnosed with the condition in question. The correlation is established only at the aggregate statistical level. 

Moreover, those persons who present the symptoms of a mental illness but who do not share the physiological correlate that has been shown to be associated with this mental illness are not henceforth identified as not truly suffering from the mental illness in question. 

In other words, not only is diagnosis determined, as a matter of convenience and practicality, by reference to symptoms (as is also often true for many physical illnesses), but mental illnesses remain, in the last instance, defined by the symptoms they produce, not any underlying physiological cause. 

Any physiological correlates for the condition are ultimately incidental and have not caused physicians to alter their basic definition of the condition itself. 

Second, the identified correlates are, again as a general rule, multiple, complex and cumulative in their effects. In other words, there is not one single identified physiological correlate of a given mental illness, but rather multiple identified correlates, often each having small cumulative effects on the probability of a person presenting symptoms. 

This second point might be taken as vindicating Szasz’s position that mental illnesses are not really illnesses. 

Thus, recent research on the genetic correlates of mental illnesses, as recently summarized by Robert Plomin in his book Blueprint: How DNA Makes Us Who We Are, has found that the genetic variants that cause psychiatric disorders are the exact same genetic variants which, when present in lesser magnitude, also cause normal, non-pathological variation in personality and temperament. 

This suggests that, at least at the genetic level (and thus presumably at the phenotypic level too), what we call mental illness is just an extreme presentation of what is normal variation in personality and behaviour. 

In other words, so-called mental illness simply represents the extreme tail-end of the normal bell curve distribution in personality attributes. 
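To put a rough number on this ‘tail-end’ picture: if a personality trait is approximately normally distributed, and a diagnostic cutoff is drawn (purely hypothetically) at, say, two standard deviations above the mean, elementary statistics tells us what fraction of the population would thereby qualify as ‘disordered’. A minimal Python sketch, with the cutoff chosen only for illustration:

```python
import math

def tail_fraction(z: float) -> float:
    """Fraction of a standard normal distribution lying beyond z
    standard deviations above the mean (the upper tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A hypothetical diagnostic cutoff two standard deviations above the mean:
print(round(tail_fraction(2.0), 4))  # 0.0228, i.e. roughly 2.3% of the population
```

On this picture, where the cutoff falls is not dictated by the shape of the distribution itself; moving it from two standard deviations to one and a half would more than double the number of people ‘diagnosed’, without anything about the underlying trait having changed.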

This is most obviously true of the so-called personality disorders. Thus, a person extremely low in empathy, or the factor of personality referred to by psychometricians as agreeableness, might be diagnosed with anti-social personality disorder (or psychopathy). 

However, it is also true for other so-called mental disorders. For example, ADHD (attention deficit hyperactivity disorder) seems to be mere medical jargon for someone who is very impulsive, with a short attention span, and lacking self-discipline (i.e. low in the factor of personality that psychometricians call conscientiousness) – all traits which vary on a spectrum across the whole population. 

On the other hand, clinical depression, unlike personality, is a temporary condition from which most people recover. Nevertheless, it is so strongly predicted by the factor of personality known to psychometricians as neuroticism that psychologist Daniel Nettle writes: 

“Neuroticism is not just a risk factor for depression. It is so closely associated with it that it is hard to see them as completely distinct” (Personality: p114). 

Yet calling someone ‘ill’ because they are at the extreme of a given facet of personality or temperament is not very helpful. It is roughly equivalent to calling a basketballer ‘ill’ because he is exceptionally tall, a jockey ‘ill’ because he is exceptionally small, or Albert Einstein ‘ill’ because he was exceptionally intelligent. 

Mental illness or Malingering?

While Szasz has therefore correctly identified problems with the conventional disease model of mental health, the model which he proposes in its place is, in my view, even more problematic, and less scientific, than the disease model that he has rightly rejected as misleading. 

Most unhelpful is the central place given in his theory to the notion of malingering, i.e. the deliberate faking of symptoms by the patient. 

This analysis may be a useful way to understand the nineteenth century outbreak of so-called hysteria, to which Szasz devotes considerable attention, or indeed the modern diagnosis of Munchausen syndrome, which again involves complaining of imagined or exaggerated physical symptoms. 

It may also be a useful way to understand the recently developed diagnosis of chronic fatigue syndrome (CFS, formerly ME), which, like hysteria, involves the patient complaining of physical symptoms for which no physical cause has yet been identified. 

Interestingly from a psychological perspective, all three of these conditions are overwhelmingly diagnosed among women and girls rather than men and boys. 

However, malingering may also be a useful way to understand another psychiatric complaint, one primarily reported by men, albeit for obvious historical reasons – namely, so-called ‘shell shock’ (now classed as PTSD) among soldiers during World War One.[8]

Here, unlike with hysteria and CFS, the patient’s motive and rationale for faking the symptoms in question (if this is indeed what they were doing) is altogether more rational and comprehensible – namely, to avoid the horrors of the trenches (from which women were, of course, exempt). 

However, this model of ‘malingering’ is clearly much less readily applicable to sufferers of, say, schizophrenia. 

Here, far from malingering or faking illness, those afflicted will often vehemently protest that they are not ill and that there is nothing wrong with them. However, their delusions are often such that, by any ordinary criteria, they are undoubtedly, in the colloquial if not the strict medical sense, completely fucking bonkers. 

The model of malingering can, therefore, only be taken so far. 

Defining Mental Illness? 

The fundamental fallacy at the heart of psychiatry is, according to Szasz, the mistaking of moral problems for medical ones. Thus, he opines: 

“Psychiatrists cannot expect to solve moral problems by medical methods” (Myth of Mental Illness: p24). 

Szasz has a point. Despite employing the language of science, there is undoubtedly a moral dimension to defining what constitutes mental illness. 

Whether a given cluster of associated behaviours represents just a cluster of associated behaviours or a mental illness is not determined on the basis of objective scientific criteria. 

Rather, most American psychiatrists simply regard as a mental illness whatever the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association classifies as a mental disorder. 

This manual is treated as gospel by psychiatrists, yet there are no systematic or agreed criteria for inclusion within this supposedly authoritative work. 

Popular cliché has it that mental illnesses are caused by a ‘chemical imbalance’ in the brain.  

Certainly, if we are materialists, we must accept that it is the chemical composition of the brain that causes behaviour, pathological or otherwise. 

But on what criteria are we to say that a certain chemical composition of the brain is an ‘imbalance’ and another is ‘balanced’, one behaviour ‘pathological’ and one ‘normal’? 

The criterion on which we make this judgement is, as I see it, primarily a moral one.[9]

More specifically, mental illnesses are defined as such, at least in part, because the behavioral symptoms that they produce tend to cause suffering or distress either to the person defined as suffering from the illness, or to those around them. 

Thus, a person diagnosed with depression is themselves the victim of suffering or distress resulting from the condition; a person diagnosed with psychopathy, on the other hand, is likely to cause psychological distress to those around them with whom they come into contact. 

This is a moral, not a scientific, criterion, depending as it does on the notion of suffering or harm. 

Indeed, it is not only a moral question, but it is also one that has, in recent years, been heavily politicized. 

Thus, gay rights activists actively and aggressively campaigned for many years to have homosexuality withdrawn from the DSM and reclassified as non-pathological, and, in 1974, they were successful.[10]

This campaign may have had laudable motives, namely to reduce the stigma associated with homosexuality and prejudice against homosexuals. Yet it clearly had nothing to do with science and everything to do with politics and morality. 

Indeed, homosexuality satisfies many criteria for illness.[11]

First, it is, despite some ingenious and some not so ingenious attempts to show otherwise, obviously biologically maladaptive. 

Whereas the politically correct view is that homosexuality is an entirely natural, normal and non-pathological variation of human sexuality, from a Darwinian perspective this view is obviously untenable. 

Homosexual sex cannot produce offspring. Homosexuality therefore involves a maladaptive misdirection of mating effort, which would surely be strongly selected against by natural selection.[12]

Homosexuality is therefore best viewed as a malfunctioning of normal sexuality, just as cancer is a kind of malfunctioning of cell growth and division. In this sense, then, homosexuality is indeed best viewed as something akin to an illness. 

Second, homosexuality shows some degree of comorbidity with other forms of mental illness, such as depression.[13]

Finally, homosexuality is associated with other undesirable life-outcomes, such as reduced longevity and, at least for male homosexuals, a greater lifetime susceptibility to various STDs.[14]

Yet, just as homosexuals successfully campaigned for the removal of homosexuality from the DSM, so trans rights campaigners are currently embarking on a similar campaign in respect of gender dysphoria

The politically correct consensus today holds that an adult or child who claims to identify as the opposite ‘gender’ to their biological sex should be encouraged and supported in their ‘transition’, and provided with hormone therapy, hormone blockers and sex reassignment surgery, as requested. 

This is roughly equivalent to, when a mentally ill person believes he is Napoleon, not telling him that he is not Napoleon, but instead providing him with legions with which to invade Prussia. 

Moving beyond the sphere of sexuality, some self-styled ‘neurodiversity’ activists have sought to reclassify autism as a normal variation of mental functioning, a claim that may seem superficially plausible in respect of certain forms of so-called ‘high functioning autism’, but is clearly untenable in respect of ‘low functioning autism’. 

Yet, on the other hand, there is oddly no similar, high-profile campaign to reclassify, say, anti-social personality disorder (ASPD) or psychopathy as a normal, non-pathological variant of human psychology. 

Yet psychopathy may indeed be biologically adaptive at least under some conditions (Mealey 1995). 

Yet no one proposes treating ASPD as normal or natural variation in personality, even though it is likely just that. 

The reason that there is no campaign to remove psychopathy from the DSM is, of course, that, unlike homosexuals, transexuals and autistic people, psychopaths are hugely disproportionately likely to cause harm to innocent, non-consenting third-parties. 

This is indeed a good reason to treat psychopathy and anti-social personality disorder as a problem for society at large. However, this is a moral not a scientific reason for regarding it as problematic. 

To return to the question of disorders of sexuality, another useful point of comparison is provided by paedophilia

From a purely biological perspective, paedophilia is analogous to homosexuality. Both are biologically maladaptive because they involve sexual attraction to a partner with whom reproduction is, for biological reasons, impossible.[15]

Yet, unlike in the case of homosexuality, there has been no mainstream political push for paedophilia to be reclassified as non-pathological or removed from the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders.[16]

The reason for this is again, of course, obvious and entirely reasonable, yet it equally obviously has nothing to do with science and everything to do with morality – namely, whereas homosexual behaviour as between consenting adults is largely harmless, the same cannot be said for child sexual abuse.[17]

Perhaps an even better analogy would be between homosexuality and, say, necrophilia. 

Necrophilic sexual activity, like homosexual sexual activity, but quite unlike paedophilic sexual activity, represents something of a victimless crime. A corpse, by virtue of being dead, cannot suffer by virtue of being violated.[18]

Yet no one would argue that necrophilia is a healthy and natural variation on normal human sexuality. 

Of course, although numbers are hard to come by due to the attendant stigma, necrophilia is presumably much less common, and hence much less ‘normal’, than is homosexuality. However, if this is a legitimate reason for regarding homosexuality as more ‘normal’ than is necrophilia, then it is also a legitimate reason for regarding homosexuality itself as ‘abnormal’, because homosexuality is, of course, much less common than heterosexuality.

Necrophile rights is, therefore, the reductio ad absurdum of gay rights.[19]

Medicine or Morality? 

The encroachment of medicine upon morality continues apace, as part of what Szasz calls ‘the medicalization of everyday life’. Thus, there is seemingly no moral failing or character defect that is not capable of being redefined as a mental disorder. 

Selfish people are now ‘psychopaths’; people lacking in willpower and with short attention spans now have ‘ADHD’. 

But if these are simply variations of personality, does it make much sense to call them diseases? 

Yet the distinction between ‘mad’ and ‘bad’ also has practical application in the operation of the criminal justice system. 

The assumption is that mentally ill offenders should not be punished for their wrongdoing, but rather treated for their illness, because they are not responsible for their actions. 

But, if we accept a materialist conception of mind, then all behaviour must have a basis in the brain. On what basis, then, do we determine that one person is mentally ill while another is in control of his faculties?

As Robert Wright observes: 

“[Since] in both British and American courts, women have used premenstrual syndrome to partly insulate themselves from criminal responsibility… can a ‘high-testosterone’ defense of male murderers be far behind?… If defense lawyers get their way and we persist in removing biochemically mediated actions from the realm of free will, then within decades [as science progresses] the realm will be infinitesimal” (The Moral Animal: p352-3).[20]

Yet a man claiming that, say, high testosterone caused his criminal behaviour is unlikely to be let off on this account, because, if high testosterone does indeed cause crime, then we have good reason to lock up high testosterone men precisely because they are likely to commit crimes.[21]

Szasz wants to resurrect the concept of free will and hold everyone, even those with mental illnesses, responsible for their actions. 

My view is the opposite: No one has free will. All behaviour, normal or pathological, is determined by the physical composition of the brain, which is, in turn, determined by some combination of heredity and environment. 

Indeed, determinism is not so much a finding of science as its basic underlying assumption and premise.[22]

In short, science rests on the assumption that all events have causes and that, by understanding the causes, we can predict behaviour. If this were not true, then there would be no point in doing science, and science would not be able to make any successful predictions. 

In short, criminal punishment must be based on consequentialist utilitarian considerations such as deterrence, incapacitation and rehabilitation rather than such unscientific moralistic notions as free will, just deserts and blame.[23]

A Moral Component to All Medicine? 

Szasz is right, then, to claim that there is a moral dimension to psychiatric diagnoses. 

This is why psychopathy is still regarded as a mental disorder even though it is likely an adaptive behavioural strategy and life history in certain circumstances (Mealey 1995). 

It is also why homosexuality is no longer regarded as a mental illness, despite its obviously biologically maladaptive consequences, yet there is no similar campaign to remove paedophilia from the DSM. 

Yet what Szasz fails to recognise is that there is a moral element to the identification and diagnosis of physical illnesses too. 

Thus, physical illnesses, like psychiatric illnesses, are called illnesses, at least in part, because they cause pain, suffering and impairment in normal functioning to the person diagnosed as suffering from the illness. 

If, on the other hand, an infection did not produce any unpleasant symptoms, then the patient would surely never bother to seek medical treatment and thus the infection would probably never come to the attention of the medical profession in the first place. 

If it did come to their attention, would they still call it a disease? Would they expend time and resources attempting to ‘cure’ it? Hopefully not, as to do so would be a waste of time and resources. 

Extending this thought experiment, what if the infection in question not only caused no negative symptoms, but actually had positive effects on the person infected? 

What if the infection in question caused people to be fitter, smarter, happier, kinder and more successful at their jobs? 

Would doctors still call the infection a ‘disease’, and the microscopic organism underlying it a ‘germ’? 

Actually, this hypothetical thought experiment may not be entirely hypothetical. 

After all, there are indeed surely many microorganisms that infect humans which have few or negligible effects, positive or negative, and with which neither patients nor doctors are especially concerned. 

On the other hand, some infections may be positively beneficial to their hosts. 

Take, for example, gastrointestinal microbiota (also known as gut microbiota). 

These are microorganisms that inhabit our digestive tracts, and those of other organisms, and are thought to have a beneficial effect on the health and functioning of the host organism. They have even been marketed as ‘probiotics’ and ‘good bacteria’ in the advertising campaigns for certain yoghurt-like drinks. 

Another less obvious example is perhaps provided by mitochondrial DNA

In our ancient evolutionary history, this began as the DNA of a separate organism, a bacterium, that infected host cells, but ultimately formed a symbiotic and mutualistic relationship with them, and now plays a key role in the functioning of those organisms whose distant ancestors it first infected. 

In short, all medicine has a moral dimension.  

This is because medicine is an applied, not a pure, science. 

In other words, medicine aims not merely to understand disease in the abstract, but to treat it. 

We treat diseases to minimize human suffering, and the minimization of human suffering is ultimately a moral (or perhaps economic, since doctors are paid, and provide a service to their patients), rather than a purely scientific, endeavour. 


[1] Although this post is a review of Thomas Szasz’s Psychiatry: The Science of Lies, readers may note that many of the quotations from Szasz in the review are actually taken from his earlier, more famous book, The Myth of Mental Illness, published several decades previously. By way of explanation, while this essay is a review of Szasz’s Psychiatry: The Science of Lies, I listened to an audiobook version of this book, and do not have access to a print copy. It was therefore difficult to source quotations from this book. In contrast, I own a copy of The Myth of Mental Illness, but have yet to read it in full. I thought it more useful to read a more recent statement of Szasz’s views, so as to find out how he has dealt with recent findings in biological psychiatry and behavioural genetics. Unfortunately, as I discuss above, it seems that Szasz has reacted to recent findings in biological psychiatry and behavioural genetics hardly at all, and includes few if any references to such developments in his more recent book.

[2] Thus, proponents of Darwinian medicine contend that many infections produce symptoms such as coughing, sneezing and diarrhea precisely because these symptoms facilitate the spread of the disease through contact with the bodily fluids expelled, hence promoting the pathogens’ own Darwinian fitness or reproductive success.

[3] For example, the underlying physical cause of chronic fatigue syndrome (CFS) is not fully understood. On the other hand, the underlying cause of acquired immunodeficiency syndrome (AIDS) is now understood, namely HIV infection, but, presumably because it involves increased susceptibility to many different infections, it is still referred to as a syndrome rather than a disease in and of itself.

[4] Indeed, according to Szasz himself, in an autobiographical interlude in ‘Psychiatry: The Science of Lies’, he had arrived at his opinion regarding the scientific status of psychiatry even earlier, when first making the decision to train to become a psychiatrist. Indeed, he claims to have made the decision to study psychiatry and qualify as a psychiatrist precisely in order to attack the field from within, with the authority which this professional qualification would confer upon him. This, it hardly needs to be said, is a very odd reason for a career choice.

[5] Attacking modern psychiatry by a critique of Freud is a bit like attacking neuroscience by critiquing nineteenth century phrenology. It involves constructing a straw man version of modern psychiatry. I am reminded in particular of Arthur Jensen’s review of infamous charlatan Stephen Jay Gould’s discredited The Mismeasure of Man, which Jensen titled The debunking of scientific fossils and straw persons, where he described Gould’s method of trying to discredit the modern science of IQ testing and intelligence research by citing the errors of nineteenth-century phrenologists as roughly akin to “trying to condemn the modern automobile by merely pointing out the faults of the Model T”.

[6] In The Myth of Mental Illness, Szasz, writes: 

“There remains a wide circle of physicians and allied scientists whose basic position concerning the problem of mental illness is essentially that expressed in Carl Wernicke’s famous dictum: ‘Mental diseases are brain diseases’. Because, in one sense, this is true of such conditions as paresis and the psychoses associated with systemic intoxications, it is argued that it is also true for all other things called mental diseases. It follows that it is only a matter of time until the correct physicochemical, including genetic, bases, or ‘cause’, of these disorders will be discovered. It is conceivable, of course, that significant physicochemical disturbances will be found in some mental patients and in some conditions now labeled mental illnesses. But this does not mean that all so-called mental diseases have biological causes, for the simple reason that it has become customary to use the term mental illness to stigmatize, and thus control, those persons whose behavior offends society—or the psychiatrist making the diagnosis” (The Myth of Mental Illness: p103). 

Yet, if we accept a materialist conception of mind, then all behaviours, including those diagnostic of mental illness, must have a cause in the brain, though it is true that the same behaviours may result from quite different neuroanatomical causes.
It is certainly true that the concept of mental illness has been used to “stigmatize, and thus control, those persons whose behavior offends society”. So-called drapetomania provides an obvious example, albeit one that was never widely recognised by physicians, at least outside the American South. Another example would be the diagnosis of sluggish schizophrenia to institutionalize political dissidents in the Soviet Union. Likewise, psychopathy (aka sociopathy or anti-social personality disorder) may, as I argue later in this post, have been classified as a mental disorder primarily because the behaviour of people diagnosed with this condition does indeed “offend society” and arguably demand the “control”, and sometimes detention, of such people.
However, this does not mean that the behaviours complained of (e.g. political dissidence, or anti-social behaviours) will not have neural or other physiological correlates. On the contrary, they undoubtedly do, and psychologists have also investigated the neural and other physiological correlates of all behaviours, not just those labelled as ‘mental illnesses’.
However, Szasz does not quite go so far as to deny that behaviours have physical causes. On the contrary, in The Myth of Mental Illness, hedging his bets against future scientific advances, Szasz acknowledges:

“I do not contend that human relations, or mental events, take place in a neurophysiological vacuum. It is more than likely that if a person, say an Englishman, decides to study French, certain chemical (or other) changes will occur in his brain as he learns the language. Nevertheless, I think it would be a mistake to infer from this assumption that the most significant or useful statements about this learning process must be expressed in the language of physics. This, however, is exactly what the organicist claims” (The Myth of Mental Illness: p102- 3). 

Here, Szasz makes a good point – but only up to a point. Whether we are what Szasz calls ‘organicists’ or not, I’m sure we can all agree that, for most purposes, it is not useful to explain the decision to learn French in terms of neurophysiology. To do so would be an example of what philosopher Daniel Dennett, in Darwin’s Dangerous Idea, calls ‘greedy reductionism’, which he distinguished from ‘good reductionism’, which is central to science.
However, it is not clear that the same is true of what we call mental illnesses. Often it may indeed be useful to understand mental illnesses in terms of their underlying physiological causes, including for therapeutic reasons, since understanding the physiological basis for behaviour that we deem undesirable may provide a means of changing these behaviours by altering the physical composition of the brain. For example, if the neurotransmitter serotonin is involved in regulating mood, then manipulating levels of serotonin in the brain, or its reabsorption, may be a way of treating depression, anxiety and other mood disorders. Thus, SSRIs and SNRIs, which are thought to do just this, have indeed been found to be effective treatments for these conditions.
However, for other purposes, it may be useful to look at a different level of causation. For example, as I discuss in a later endnote, although it may be scientifically a nonsense, it may nevertheless be useful to cultivate a belief in free will among some psychiatric patients, since it may encourage them to overcome their problems rather than adopting the fatalistic view that they are ill and there is hence nothing they can do to improve their predicament. Szasz sometimes seems to be arguing for something along these lines.

[7] In The Myth of Mental Illness, as quoted in the preceding endnote, Szasz also gives as examples of behavioural conditions with well-established physiological causes “paresis and the psychoses associated with systemic intoxications” (The Myth of Mental Illness: p103).

[8] I hasten to emphasize in this context, lest I am misunderstood, that I am not saying that Szasz’s model of ‘malingering’ is indeed the appropriate way to understand conditions such as hysteria, Munchausen syndrome, chronic fatigue syndrome or shell shock – only that a reasonable case can be made to this effect. Personally, I do not regard myself as having sufficient expertise on the topic to be willing to venture an opinion either way.

[9] Of course, we could determine whether a certain composition and structure of the brain is ‘balanced’ or ‘imbalanced’ on non-moralistic, Darwinian criteria. In other words, if a certain composition/structure and the behaviour it produces is adaptive (i.e. contributes to the reproductive success or fitness of the organism) then we could call it ‘balanced’; if, on the other hand, it produces maladaptive behaviour we could call it ‘imbalanced’. However, this would produce a quite different inventory and classification of mental illnesses than that provided by the DSM of the APA and other similar publications, since, as we will see, homosexuality, being obviously biologically maladaptive, would presumably be classified as an ‘imbalance’ and hence a mental illness, whereas psychopathy, since it may well, under certain conditions, be adaptive, would be classed as non-pathological and hence ‘balanced’. This analysis, however, has little to do with mental illness as the concept is currently conceived.

[10] Oddly, Szasz himself is sometimes lauded by some politically correct types as being among the first psychiatrists to deny that homosexuality was a mental illness. Yet, since he also denied that schizophrenia was a mental illness, and indeed rejected the whole concept of ‘mental illness’ as it is currently conceived, this is not necessarily as ‘progressive’ and ‘enlightened’ a view as it is sometimes credited as having been.

[11] Here, a few caveats are in order. Describing homosexuality as a mental illness no more indicates hatred towards homosexuals than describing schizophrenia as a mental illness indicates hatred towards people suffering from schizophrenia, or describing cancer as an illness indicates hatred towards people afflicted with cancer. In fact, regarding a person as suffering from an illness is generally more likely to elicit sympathy for the person so described than it is hatred.
Of course, being diagnosed with a disease may involve some stigma. But this is not the same as hatred.
Moreover, as is clear from my conclusion, I am not, in fact, arguing that homosexuality should indeed be classified as a mental illness. Rather, I am simply pointing out that it is difficult to frame a useful definition of what constitutes a ‘mental disorder’ unless that definition includes moral criteria, which are necessarily extra-scientific. However, in the final section of this piece, I argue that there is indeed a moral component to all medicine, psychiatry included.
Of course, as I also discuss above, there are indeed some moral reasons for regarding homosexuality as undesirable, for example its association with reduced longevity, which is generally regarded as an undesirable outcome. However, whether homosexuality should indeed be classed as a ‘mental disorder’ strikes me as debatable and also dependent on the exact definition of ‘mental disorder’ adopted.

[12] If homosexuality is therefore maladaptive, this, of course, raises the question as to why homosexuality has not indeed been eliminated by natural selection. The first point to make here is that homosexuality is in fact quite rare. Although Kinsey famously originated the since-popularized claim that as many as 10% of the population are homosexual, reputable estimates using representative samples suggest less than 5% of the population identifies as exclusively or preferentially homosexual (though a larger proportion of people may have had homosexual experiences at some time, and the ‘closet factor’ makes it possible to argue that, even in an age of unprecedented tolerance and indeed celebration of homosexuality, and even in anonymous surveys, this may represent an underestimate due to underreporting).
Admittedly, there has recently been a massive increase in the numbers of teenage girls identifying as non-heterosexual, with numbers among this age group now slightly exceeding 10%. However, I suspect that this is as much a matter of fashion as of sexuality. Thus, it is notable that the largest increase has been for identification as ‘bisexual’, which provides a convenient cover by which teenage girls can identify with the so-called ‘LGBT+ community’ while still pursuing normal, healthy relationships with opposite-sex boys or men. The vast majority of these girls will, I suspect, grow up to have sexual and romantic relationships primarily with members of the opposite sex.
Yet even these low figures are perhaps higher than one might expect, given that homosexuality would be strongly selected against by evolution. (However, it is important to remember that, when homosexuals were persecuted and hence mostly remained in the ‘closet’, homosexuality would have been less selected against, precisely because so many gay men and women would have married members of the opposite sex and reproduced if only to evade accusations of homosexuality. With greater tolerance, however, they no longer have any need to do so. The liberation of homosexuals may therefore, paradoxically, lead to their gradual disappearance through selection.)
A second point to emphasize is that, contrary to popular perception, homosexuality is not especially heritable. Indeed, it is rather less heritable than other behavioural traits about which it is much less politically correct to speculate regarding the heritability (e.g. criminality, intelligence).
If homosexuality is primarily caused by environmental factors, not genetics, then it would be more difficult for natural selection to weed it out. However, given that exclusive or preferential homosexuality would be strongly selected against by natural selection, humans should have evolved to be resistant to developing exclusive or preferential homosexuality under all environmental conditions that were encountered during evolutionary history. It is possible, however, that environmental novelties, atypical of the environments in which our psychological adaptations evolved, are responsible for causing homosexuality.
For what it’s worth, my own favourite theory (although not necessarily the best supported theory) for the evolution of male homosexuality proposes that genes located on the X chromosome predispose a person to be sexually attracted to males. This attraction is adaptive for females, but maladaptive for males. However, since females have two X chromosomes and males only one and therefore any X chromosome genes will find themselves in females twice as often as they find themselves in males, any increase in fitness for females bearing these X chromosome genes only has to be half as great as the reproductive cost to males for the genes in question to be positively selected for.
This is sometimes called the ‘balancing selection theory of male homosexuality’. However, perhaps more descriptive and memorable is Satoshi Kanazawa’s coinage, ‘the horny sister hypothesis’.
This theory also has some support, in that there is some evidence the female relatives of male homosexuals have a greater number of offspring than average and also that gay men report having more gay uncles on their mother’s than their father’s side, consistent with an X chromosome-linked trait (Hamer et al 1993; Camperio-Ciani et al 2004). Some genes on the X chromosome have also been linked to homosexuality (Hamer et al 1993; Hamer 1999).
On the other hand, other studies find no support for the hypothesis. For example, Bailey et al (1999) found that rates of reported homosexuality were no higher among maternal than among paternal male relatives, as did McKnight & Malcolm (2000). At any rate, as explained by Wilson and Rahman in their excellent book Born Gay: The Psychobiology of Sexual Orientation:

“Increased rates of gay maternal relatives might also appear because of decreased rates of reproduction among gay men. A gay gene is unlikely to be inherited from a gay father because a gay man is unlikely to have children” (Risch et al 1993) (Born Gay: p51). 

[13] Gay rights activists assert that the only reason that homosexuality is associated with other forms of mental illness is because of the stigma to which homosexuals are subject on account of their sexuality. This has sometimes been termed the ‘social stress hypothesis’, ‘social stress model’ or ‘minority stress model’. There is indeed statistical support for the theory that social stigma is associated with higher rates of depression and other mental illnesses.
It is also notable that, while homosexuality is indeed consistently associated with higher levels of depression and suicide, conditions that can obviously be viewed as a direct response to social stigma, I am not aware of any evidence suggesting higher rates of, say, schizophrenia among homosexuals, which would less obviously, or at least less directly, result from social stress. However, I tend to agree with the conclusions of Mayer and McHugh, in their excellent review of the literature on this subject, that, while social stress may indeed explain some of the increased rate of mental illness among homosexuals, it is unlikely to account for the totality of it (Mayer & McHugh 2016).

[14] Yet, in describing the life outcomes associated with homosexuality as undesirable, I am, of course, making an extra-scientific value judgement. Of course, the value judgement in question – namely that dying earlier and being disproportionately likely to contract STDs is a bad thing – is not especially controversial. However, it still illustrates the extent to which, as I discuss later in this post, definitions of mental illnesses, and indeed physical illnesses, always include a moral dimension – i.e. diseases are defined, in part, by the fact that they cause suffering, either to the person afflicted, or, in the case of some mental illnesses, to the people in contact with them.

[15] Indeed, from a purely biological perspective, homosexuality is arguably even more biologically maladaptive than is paedophilia, since even very young children can, in some exceptional cases, become pregnant and even successfully birth offspring, yet same-sex partners are obviously completely incapable of producing offspring with one another.

[16] Indeed, far from there being any political pressure to remove paedophilia from the DSM of the APA, as occurred with homosexuality, there is instead increasing pressure to add hebephilia (i.e. attraction to pubescent and early-post-pubescent adolescents) to the DSM. If successful, this would probably lead to pressure to also add ‘ephebophilia’ (i.e. the biologically adaptive and normal male attraction to mid- to late-adolescents) to the DSM, and thereby effectively pathologize and medicalize, and further stigmatize, normal male sexuality.

[17] Of course, homosexual sex does have some dangers, such as STDs. However, the same is also true of heterosexual sex, although, for gay male sex, the risks are vastly elevated. Yet other perceived dangers result only from heterosexual sex (e.g. unwanted pregnancies, marriage). Meanwhile, the other negative life outcomes associated with homosexuality (e.g. elevated risk of depression and suicide) probably result from a homosexual orientation rather than from gay sex as such. Thus, a celibate gay man is, I suspect, just as likely, if not more likely, to suffer depression than is a highly promiscuous gay man.
Yet, while gay sex may be mostly harmless, the same cannot, of course, be said for child sexual abuse. It may indeed be true that the long-term psychological effects of child sexual abuse are exaggerated. This was, of course, the infamous conclusion of the Rind et al meta-analysis, which resulted in much moral panic in the late-1990s (Rind et al 1998). This is especially likely to be the case when the sexual activity in question is consensual and involves post-pubertal, sexually mature (but still legally underage) teenagers. However, in such cases the sexual activity in question should not really be defined as ‘child sexual abuse’ in the first place, since it neither involves immature children in the biological sense, nor is it necessarily abusive. Yet, it must be emphasized, even if child sexual abuse does not cause long-term psychological harm, it may still cause immediate harm, namely the distress experienced by the victim at the time of the abuse.

[18] Of course, one might argue that the relatives of the deceased may suffer as a result of the idea of their dead relatives’ bodies being violated. However, much the same is also true of homosexuality. So-called ‘homophobes’, for example, may dislike the idea of their adult homosexual sons having consensual homosexual sex. Indeed, they may even dislike the idea of unrelated adult strangers being allowed to have consensual homosexual sex. This was indeed presumably the reason why homosexuality was criminalized and prohibited in so many cultures across history in the first place, i.e. because other people were disgusted by the thought of it. However, we no longer regard this sort of puritanical disapproval of other people’s private lives as a sufficient reason to justify the criminalization of homosexual behaviour. Why then should it be a reason for criminalizing necrophilia?

[19] Other similar thought experiments involve the prohibitions on other sexual behaviours such as zoophilia and incest. In both these cases, however, the case is morally more complex, in the case of zoophilia on account of whether the animal participant suffers harm or has consented, and, in the case of incest, because of eugenic considerations, namely the higher rate of the expression of deleterious mutations among the offspring of incestuous unions.

[20] Indeed, the courts, in both Britain and America, have been all too willing to invent bogus pseudo-psychiatric diagnoses in order to excuse women, in particular, for culpability in their crimes, especially murder. For example, in Britain, the Infanticide Acts of 1922 and 1938 provide a defence against murder for women who kill their helpless new-born infants where “at the time of the act… the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child”. In terms of biology, physiology and psychology, this is, of course, a nonsense, and, of course, no equivalent defence is available for fathers, though, in practice, the treatment of mothers guilty of infanticide is more lenient still (Wilczynski and Morris 1993).
Similarly, in both Britain and America, women guilty of killing their husbands, often while the latter was asleep or otherwise similarly incapacitated, have been able to avoid a murder conviction by claiming to have been suffering from so-called ‘battered women syndrome’. There is, of course, no equivalent defence for men, despite the consistent finding that men are somewhat more likely to be the victims of violence from their female intimate partners than women are from their male intimate partners (Fiebert 2014). This may partly explain why men who kill their wives receive, on average, sentences three times as long as women who kill their husbands (Langan & Dawson 1995).

[21] Of course, another possibility might be some form of hormone therapy to reduce the offender’s testosterone. Actually, it must be acknowledged that whether testosterone is indeed correlated with criminal or violent behaviour is the subject of some dispute. Thus, Allan Mazur, a leading researcher in this area, argues that testosterone is not associated with aggression or violence as such, but rather only with dominance behaviours, which can also be manifested in non-violent ways. For example, a high-powered business tycoon is likely to be high in social dominance behaviours, but relatively unlikely to commit violent crimes. On the other hand, a prisoner, being of low status, may be able to exercise dominance only through violence. I am therefore giving the example of high testosterone only as a simplified thought experiment.

[22] Of course, one finding of science, namely quantum indeterminism, complicates this assumption. Ironically, while determinism is the underlying premise of all scientific enquiry, nevertheless one finding of such enquiry is that, at the most fundamental level, determinism does not hold.

[23] Nevertheless, I am persuaded that there may be some value in the concept of free will, after all. Although it is a nonsense, it may, like some forms of religious belief, nevertheless be a useful nonsense, at least in some circumstances.
Thus, if a person is told that there is no free will, and that their behaviours are inevitable, this may encourage a certain fatalism and the belief that people cannot change their behaviours for the better. In fact, this is a fallacy. Actually, determinism does not suggest that people cannot change their behaviours. It merely concludes that whether people do indeed change their behaviours is itself determined. However, this philosophical distinction may be beyond many people’s understanding.
Furthermore, if people are led to believe that they cannot alter their own behaviour, then this may become something of a self-fulfilling prophecy, and thereby prevent self-improvement.
Therefore, just as religious beliefs may be untrue, but nevertheless serve a useful function in giving people a reason to live and to behave prosocially and for the benefit of society as a whole, so it may be beneficial to encourage a belief in free will in order to encourage self-improvement, including among the mentally ill.


Bailey et al (1999) A Family History Study of Male Sexual Orientation Using Three Independent Samples, Behavior Genetics 29(2): 79–86. 
Camperio-Ciani et al (2004) Evidence for maternally inherited factors favouring male homosexuality and promoting female fecundity, Proceedings of the Royal Society B: Biological Sciences 271(1554): 2217–2221. 
Fiebert (2014) References Examining Assaults by Women on Their Spouses or Male Partners: An Updated Annotated Bibliography, Sexuality & Culture 18(2): 405-467. 
Hamer et al (1993) A linkage between DNA markers on the X chromosome and male sexual orientation, Science 261(5119): 321-7.  
Hamer (1999) Genetics and Male Sexual Orientation, Science 285(5429): 803. 
Langan & Dawson (1995) Spouse Murder Defendants in Large Urban Counties, U.S. Department of Justice Office of Justice Programs, Bureau of Justice Statistics: Executive Summary (NCJ-156831), September 1995. 
Mayer & McHugh (2016) Sexuality and Gender: Findings from the Biological, Psychological, and Social Sciences, New Atlantis 50: Fall 2016. 
McKnight & Malcolm (2000) Is male homosexuality maternally linked? Evolution and Gender 2(3): 229-252. 
Mealey (1995) The sociobiology of sociopathy: An integrated evolutionary model, Behavioral and Brain Sciences 18(3): 523–599.
Rind et al (1998) A Meta-Analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples, Psychological Bulletin 124(1): 22–53.
Risch et al (1993) Male Sexual Orientation and Genetic Evidence, Science 262(5142): 2063-2065. 
Szasz (1960) The Myth of Mental Illness, American Psychologist 15: 113-118. 
Wilczynski & Morris (1993) Parents Who Kill Their Children, Criminal Law Review, 31-6.


Hitler, Hicks, Nietzsche and Nazism

Nietzsche and the Nazis: A Personal View by Stephen Hicks (Ockham’s Razor Publishing 2010) 

Scholarly (and not so scholarly) interpretations of Nietzsche always remind me somewhat of biblical interpretation. 

In both cases, the interpretations always seem to say at least as much about the philosophy, worldview and politics of the person doing the interpretation as they do about the content of the work ostensibly being interpreted. 

Just as Christians can, depending on preference, choose between, say, Exodus 21:23–25 (an eye for an eye) or Matthew 5:39 (turn the other cheek), so authors of diametrically opposed political and philosophical worldviews can almost always claim to find something in Nietzsche’s corpus of writing to support their own perspective. 

Thus, whereas German National Socialists selectively quoted passages from Nietzsche that appear critical of Jews, modern apologists cite passages that profess great admiration for the Jewish people, as well as other passages undoubtedly highly critical both of Germans and of anti-Semites.  

Similarly, in HL Mencken’s The Philosophy of Friedrich Nietzsche, Nietzsche appears as an aristocratic elitist, opposed to Christianity, Christian ethics, egalitarianism and herd morality, but also as a scientific materialist—much like, well, HL Mencken himself. 

Yet, among leftist postmodernists, Nietzsche’s moral philosophy is largely ignored, and he is cited instead as an opponent of scientific materialism who rejects the very concept of objective truth, including scientific truth—in short, a philosophical precursor to postmodernism. 

There are indeed passages in Nietzsche’s work that, at least when quoted in isolation, can be interpreted as supporting any of these mutually contradictory notions. 

In his book Nietzsche and the Nazis, professor of philosophy Stephen Hicks discusses the association between the thought of Friedrich Nietzsche and the most controversial of the many twentieth century movements to claim Nietzsche as their philosophical precursor, namely the National Socialist movement and regime in early- to mid-twentieth century Germany. 

Since he is a professor of philosophy rather than a historian, it is perhaps unsurprising that Hicks demonstrates a rather better understanding of the philosophy of Nietzsche than he does of the ideology of Hitler and the German National Socialist movement. 

Thus, if the Nazis stand accused of misinterpreting, misappropriating or misrepresenting the philosophy of Nietzsche, Hicks can claim to have outdone even them—for he has managed to misrepresent, not only the philosophy of Nietzsche, but also that of the Nazis as well. 

Philosophy as a Driving Force in History 

Hicks begins his book by making a powerful case for the importance of philosophy as a force in history and as a factor in the rise of German National Socialism in particular. 

Thus, he argues: 

“The primary cause of Nazism lies in philosophy… The legacy of World War I, persistent economic troubles, modern communication technologies, and the personal psychologies of the Nazi leadership did play a role. But the most significant factor was the power of a set of abstract, philosophical ideas. National Socialism was a philosophy-intensive movement” (p10-1). 

This claim—namely, that “National Socialism was a philosophy-intensive movement”—may seem an odd one, especially since German National Socialism is usually regarded, not entirely unjustifiably, as a profoundly anti-intellectual movement. 

Moreover, to achieve any degree of success and longevity, all political movements, and political regimes, must inevitably make ideological compromises in the face of practical necessity, such that their actual policies are dictated at least as much by pragmatic considerations of circumstance, opportunity and realpolitik as by pure ideological dictate.[1]

Yet, up to a point, Hicks is right. 

Indeed, Hitler even saw himself as, in some ways, a philosopher in his own right. Thus, historian Ian Kershaw, in his celebrated biography of the German Führer, Hitler, 1889-1936: Hubris, observes: 

“In Mein Kampf, Hitler pictured himself as a rare genius who combined the qualities of the ‘programmatist’ and the ‘politician’. The ‘programmatist’ of a movement was the theoretician who did not concern himself with practical realities, but with ‘eternal truth’, as the great religious leaders had done. The ‘greatness’ of the ‘politician’ lay in the successful practical implementation of the ‘idea’ advanced by the ‘programmatist’. ‘Over long periods of humanity,’ he wrote, ‘it can once happen that the politician is wedded to the programmatist.’ His work did not concern short-term demands that any petty bourgeois could grasp, but looked to the future, with ‘aims which only the fewest grasp’… Seldom was it the case, in his view, that ‘a great theoretician’ was also ‘a great leader’… He concluded: ‘the combination of theoretician, organizer, and leader in one person is the rarest thing that can be found on this earth; this combination makes the great man.’ Unmistakably, Hitler meant himself” (Hitler, 1889-1936: Hubris: p251–2). 

Moreover, philosophical ideas have undoubtedly had a major impact on history in other times and places. 

Thus, for example, the French revolution and Bolshevik Revolution may have been triggered and made possible by social and economic conditions then prevailing. But the regimes established in their aftermath were, at least in theory, based on the ideas of philosophers and political theorists.  

Thus, if the French revolution was modelled on the ideas of thinkers such as Locke, Rousseau and Voltaire, and the Bolshevik Revolution on those of Marx, who then were the key thinkers, if any, behind the National Socialist movement in Germany? 

Hicks, for his part, tentatively ventures several leading candidates: 

“Georg Hegel, Johann Fichte, even elements from Karl Marx” (p49).[2]

In an earlier chapter, as part of his attempt to argue against the notion that German National Socialism had no intellectual credibility, he also mentions several contemporaneous thinkers who, he claims, “supported the Nazis long before they came to power” and who could perhaps themselves be considered intellectual forerunners of National Socialism, including Oswald Spengler, Martin Heidegger, and legal theorist Carl Schmitt (p9).[3]

Besides Hitler himself, and Alfred Rosenberg, each of whom considered himself a philosophical thinker in his own right, other candidates who might merit honourable (or perhaps dishonourable) mention in this context include Hitler’s own early mentor Dietrich Eckart, racial theorists Arthur de Gobineau and Houston Stewart Chamberlain, the American Madison Grant, biologist Ernst Haeckel, geopolitical theorist Karl Haushofer, and, of course, the composer Richard Wagner – though most of these are not, of course, philosophers in the narrow sense.

Yet, at least according to Hicks, the best known and most controversial name atop any such list is almost inevitably going to be Friedrich Nietzsche (p49). 

Nietzsche’s Philosophy 

Although the association of Nietzsche with the Nazis continues to loom large in the popular imagination, mainstream Nietzsche scholarship in the years since World War II, especially the work of the influential homosexual Jewish philosopher and poet Walter Kaufmann, has done much to rehabilitate the reputation of Nietzsche, sanitize his philosophy and absolve him of any association with, let alone responsibility for, Fascism or National Socialism. 

Hicks’ own treatment is rather more balanced. 

Before directly comparing and contrasting the various commonalities and differences between Nietzsche’s philosophy and that of the National Socialist movement and regime, Hicks devotes one chapter to discussing the political philosophy and ideology of the Nazis, another to discussing their policies once in power, and a third to discussion of Nietzsche’s own philosophy, especially his views on morality and religion. 

As I have already mentioned, although Nietzsche’s philosophy is the subject of many divergent interpretations, Hicks, in my view, mostly gets Nietzsche’s philosophy right. There are, however, a few problems.

Some are relatively trivial, perhaps even purely semantic. For example, Hicks equates Nietzsche’s Übermensch with Zarathustra himself, writing:

“Nietzsche gives a name to his anticipated overman: He calls him Zarathustra, and he names his greatest literary and philosophical work in his honor” (p74).

Actually, as I understood Nietzsche’s Thus Spake Zarathustra (which is to say, not very much at all, since it is a notoriously incomprehensible work, and, in my view, far from Nietzsche’s “greatest literary and philosophical work”, as Hicks describes it), Nietzsche envisaged his fictional Zarathustra, not as himself the Übermensch, but rather as its herald and prophet.

Indeed, to my recollection, not only does Zarathustra never himself claim to embody the Übermensch, but he also repeatedly asserts that the most that contemporary man, Zarathustra himself presumably included, can ever aspire to be is a ‘bridge’ to the Übermensch, rather than the Übermensch himself.

A perhaps more substantial problem relates to Hicks’s understanding of Nietzsche’s contrasting ‘master’ and ‘slave’ moralities. Hicks associates the former with various traits, including:

“Pride, Self-esteem; Wealth; Ambition, boldness; Vengeance; Justice… Pleasure, Sensuality… Indulgence” (p60).

Most of these traits are indeed unproblematically associated with Nietzsche’s ‘master morality’, but a few require further elaboration.

For example, it may be true that Nietzsche’s ‘master morality’ is associated with the idea of “vengeance” as a virtue. However, associating the related, but distinct concept of “justice” exclusively with Nietzsche’s ‘master morality’ as Hicks does (p60; p62) strikes me as altogether more questionable. 

After all, the ‘slave morality’ of Christianity also concerns itself a great deal with “justice”. It just has a different conception of what constitutes justice, and also sometimes defers the achievement of “justice” to the afterlife, or to the Last Judgement and coming Kingdom of God (or, in pseudo-secular modern leftist versions, the coming communist utopia). 

Similarly problematic is Hicks’s characterization of Nietzsche’s ‘master morality’ as championing “indulgence”, as well as “pleasure [and] sensuality”, over “self-restraint” (p62; p60). 

This strikes me as, at best, an oversimplification of Nietzsche’s philosophy.

On the one hand, it is true that Nietzsche disparages and associates with ‘slave morality’ what Hume termed ‘the monkish virtues’, namely ideals of self-denial and asceticism. He sees them as both a sign of weakness and a denial of life itself, writing in Twilight of the Idols:

“To attack the passions at their roots, means attacking life itself at its source: the method of the Church is hostile to life… The same means, castration and extirpation, are instinctively chosen for waging war against a passion, by those who are too weak of will, too degenerate, to impose some sort of moderation upon it” (Twilight of the Idols: iv:2).

“The saint in whom God is well pleased, is the ideal eunuch. Life terminates where the ‘Kingdom of God’ begins” (Twilight of the Idols: ii:4).

Yet it is clear that Nietzsche does not advocate complete surrender to indulgence, pleasure and sensuality either. 

Thus, in the first of the two passages quoted above, he envisages the strong as also imposing “some sort of moderation” without the need for complete abstinence. 

Indeed, in The Antichrist, Nietzsche goes further still, extolling: 

“The most intelligent men, like the strongest [who] find their happiness where others would find only disaster: in the labyrinth, in being hard with themselves and with others, in effort; their delight is in self-mastery; in them asceticism becomes second nature, a necessity, an instinct” (The Antichrist: 57).

Indeed, advocating complete and unrestrained surrender to indulgence, sensuality and pleasure is an obviously self-defeating philosophy. If someone really completely surrendered himself to indulgence, he would presumably do nothing all day except masturbate, shoot up heroin and eat cake. He would therefore achieve nothing of value.

Thus, throughout his corpus of writing, Nietzsche repeatedly champions what he calls self-overcoming, which, though it goes well beyond this, clearly entails self-control.

In short, to be effectively put into practice, the Nietzschean Will to Power necessarily requires willpower.

Individualism vs Collectivism (and Authoritarianism) 

Another matter upon which Hicks arguably misreads Nietzsche is the question of the extent to which Nietzsche’s philosophy is to be regarded as either individualist or collectivist in ethos and orientation.

This topic is, Hicks acknowledges, a controversial one upon which Nietzsche scholars disagree. It is, however, a topic of direct relevance to the extent of the relationship between Nietzsche’s philosophy and the ideology of the Nazis, since the Nazis themselves were indisputably extremely collectivist in ethos, the collective to which they subordinated all other concerns, including individual rights and wants, being that of the nation, Volk or race.

Hicks himself concludes that Nietzsche was much more of a collectivist than an individualist.

“[Although] Nietzsche has a reputation for being an individualist [and] there certainly are individualist elements in Nietzsche’s philosophy… in my judgment his reputation for individualism is often much overstated” (p87).

Yet, elsewhere, Hicks comes close to contradicting himself, for, among the qualities that he associates with Nietzsche’s ‘master morality’, which Nietzsche himself clearly favours over the ‘slave morality’ of Christianity, are “Independence”, “Autonomy” and indeed “Individualism” (p60; p62). Yet these are all clearly individualist virtues.[4]

In reaching his conclusion that Nietzsche is primarily to be considered a collectivist rather than a true individualist, Hicks distinguishes three separate questions and, in the process, three different forms of individualism, namely: 

  1. “Do individuals shape their own identities—or are their identities created by forces beyond their control?”;
  2. “Are individuals ends in themselves, with their own lives and purposes to pursue—or do individuals exist for the sake of something beyond themselves to which they are expected to subordinate their interests?”; and
  3. “Do the decisive events in human life and history occur because individuals, generally exceptional individuals, make them happen—or are the decisive events of history a matter of collective action or larger forces at work?” (p88).

With regard to the first of these questions, Nietzsche, according to Hicks, denies that men are masters of their own fate. Instead, Hicks contends that Nietzsche believes: 

“Individuals are a product of their biological heritage” (p88).

This may be correct, and certainly there is much in Nietzsche’s writing to support this conclusion. 

However, even if human behaviour, and human decisions, are indeed a product of heredity, this does not in fact, strictly speaking, deny that individuals are nevertheless the authors of their own destiny. It merely asserts that the way in which we do indeed shape our own destiny is itself a product of our heredity. 

In other words, our actions and decisions may indeed be predetermined by hereditary factors, but they are still our decisions, simply because we ourselves are a product of these same biological forces. 

However, it is not at all clear that Nietzsche believes that all men determine their own fate. Rather, the great mass of mankind, whom he dismisses as ‘herd animals’, are, for Nietzsche, quite incapable of true individualism of this kind, and it is only men of a superior type who are truly free, membership of this superior caste itself being largely determined by heredity. 

Indeed, for Nietzsche, the superior type of man determines not only his own fate, but also often that of the society in which he lives and of mankind as a whole. 

This leads to the third of Hicks’s three types of individualism, namely the question of whether the “decisive events in human life and history occur because individuals, generally exceptional individuals, make them happen”, or whether they are the consequence of factors outside of individual control such as economic factors, or perhaps the unfolding of some divine plan. 

On this topic, I suspect Nietzsche would side with Thomas Carlyle, and Hegel, in holding that history is indeed shaped, in large part, by the actions of so-called ‘great men’, or, in Hegelian terms, ‘world-historical figures’. This is among the reasons he places such importance on the emerging Übermensch.

Admittedly, Nietzsche repeatedly disparages Carlyle in many of his writings, and, in Ecce Homo, repudiates any notion of equating his Übermensch with what he dismisses as Carlyle’s “hero cult” (Ecce Homo: iii, 1).

However, as Will Durant writes in The Story of Philosophy, Nietzsche often reserved his greatest scorn for those contemporaries, or near-contemporaries (e.g. the Darwinians and Social Darwinists), who had independently developed ideas that, in some respects, paralleled or anticipated his own, if only as a means of emphasizing his own originality and claim to priority, or, as Durant puts it, of “covering up his debts” (The Story of Philosophy: p373).

Hitler, on the other hand, would indeed surely have agreed with Carlyle regarding the importance of great men, and indeed saw himself as just such a ‘world historical figure’.

Indeed, for better or worse, given Hitler’s gargantuan impact on world history from his coming to power in Germany in the 1930s arguably right up to the present day, we might even find ourselves reluctantly forced to agree with him.[5]

As I have written previously, it is ironic that the so-called great man theory of history seemingly became perennially unfashionable at almost precisely the same time that, in the persons of first Lenin and then Hitler, it was proven so terribly true.

Just as the October revolution would surely never have occurred without Lenin as driving force and instigator, so the Nazis, though they may have existed, would surely never have come to power, let alone achieved the early diplomatic and military successes that briefly conferred upon them mastery over Europe, without Hitler as leader.

Yet, for Nietzsche, individual freedom is restricted, or at least should be restricted, to such ‘great men’, or at least to a wider, but still narrow, class of superior types, and not extended at all to the great mass of humanity.

Thus, I believe that we can reconcile Nietzsche’s apparently conflicting statements regarding the merits of, on the one hand, individualism, and, on the other, collectivism, by recognizing that he endorsed individualism only for a small elite cadre of superior men. 

Indeed, for Nietzsche, the vast majority of mankind, namely those whom he disparages as ‘herd animals’, were incapable of such individualism and should hence be subject to strict authoritarian control in the service of the superior caste of man. They were certainly not ‘ends in themselves’, as contended by Kant.

Indeed, Nietzsche’s prescription for the majority of mankind is not so much collectivist, as it is authoritarian, since Nietzsche regards the lives of such people, even as a collective, as essentially worthless. 

The mass of men must be controlled and denied freedom, not for the benefit of such men themselves even as a collective, but rather for the benefit of the superior type of man.[6]

Yet Hicks reaches almost the opposite conclusion: namely, that, rather than the lives of the mass of mankind serving the interests of the higher man, even the individualism accorded the higher type of man, and even the Übermensch himself, ultimately serves the interest of the collective – namely, the human species as a whole.

National Socialist Ideology 

As I have already said, however, Hicks’s understanding of Nietzsche’s philosophy is rather better than his understanding of the ideology of German National Socialism. 

This is not altogether surprising. Hicks is, after all, a professor of philosophy by background, not an historian.

Hicks’s lack of background in history is especially apparent in his handling of sources, which leaves a great deal to be desired.

For example, several quotations attributed to Hitler by Hicks are sourced, in their associated footnotes, to one of two works – namely Unmasked: Two Confidential Interviews with Hitler in 1931 and The Voice of Destruction (aka Hitler Speaks) by Hermann Rauschning – that are both now widely considered by historians to have been fraudulent, and to contain no authentic or reliable quotations from Hitler whatsoever.[7]

Other quotations are sourced to secondary sources, such as websites and biographies of Hitler, which makes it difficult to determine both the primary source from which the quotation is drawn, and in what context and to whom the remark was originally said or written.

This is an especially important point, not only because some sources (e.g. Rauschning) are very untrustworthy, but also because Hitler often carefully tailored his message to the specific audience he was addressing, and was certainly not above concealing or misrepresenting his real views and long-term objectives, especially when addressing the general public, foreign statesmen and political rivals.

Perhaps for this reason, Hicks seemingly misunderstands the true nature of the National Socialist ideology, and Hitler’s own Weltanschauung in particular.

However, in Hicks’s defence, the core tenets of Nazism are almost as difficult to pin down as those of Nietzsche.

Unlike in the case of Nietzsche, this is not so much because of either the inherent complexity of the ideas or the impenetrability of their presentation—though admittedly, while Nazi propaganda, and Hitler’s speeches, tend to be very straightforward, even crude, Hitler’s Mein Kampf and Rosenberg’s The Myth of the Twentieth Century both make for a difficult read.

Rather the problem is that German National Socialist thinking, or what passed for thinking among National Socialists, never really constituted a coherent ideology in the first place. 

After all, like any political party that achieves even a modicum of electoral success, let alone seriously aspires to win power, the Nazis necessarily represented a broad church.

Members and supporters included people of many divergent and mutually contradictory opinions on various political, economic and social matters, not to mention ethical, philosophical and religious views and affiliations. 

If they had not done so, then the Party could never have attracted enough votes in order to win power in the first place. 

Indeed, the NSDAP was especially successful in presenting itself as ‘all things to all people’ and in adapting its message to whatever audience was being addressed at a given time. 

Therefore, it is quite difficult to pin down what exactly were the core tenets of German National Socialism, if indeed they had any. 

However, we can simplify our task somewhat by restricting ourselves to an altogether simpler question: namely what were the key tenets of Hitler’s own political philosophy? 

After all, one key tenet of German National Socialism that can surely be agreed upon is the so-called ‘Führerprinzip’, whereby Hitler himself was to be the ultimate authority for all political decisions and policy.

Therefore, rather than concerning ourselves with the political and philosophical views of the entire Nazi leadership, let alone the whole party, or everyone who voted for them, we can instead restrict ourselves to a much simpler task – namely, determining the views of a single individual, namely the infamous Führer himself. 

This, of course, makes our task substantially easier.

Yet we then encounter yet another problem: namely, it is often quite difficult to determine what Hitler’s real views actually were. 

Thus, as I have already noted, like all the best politicians, Hitler tailored and adapted his message to the audience that he was addressing at any given time. 

Thus, for example, when he delivered speeches before assembled business leaders and industrialists, his message was quite different from the one he would deliver before audiences composed predominantly of working-class socialists, and his message to foreign dignitaries, statesmen and the international community was quite different to the hawkish and militaristic one presented in Mein Kampf, to his leading generals and before audiences of fanatical German nationalists.

In short, like all successful politicians, Hitler was an adept liar, and what he said in public and actually believed in private were often two very different things. 

National Socialism and Religion 

Perhaps the area of greatest contrast between Hitler’s public pronouncements and his private views, as well as Hicks’s own most egregious misunderstanding of Nazi ideology, concerns religion.

According to Hicks, Hitler and the Nazis were believing Christians. Thus, he reports: 

“[Hitler] himself sounded Christian themes explicitly in public pronouncements” (p84). 

However, the key words here are “in public pronouncements”. Hitler’s real views, as expressed in private conversations among confidants, seem to have been very different.

Thus, Hitler was all too well aware that publicly attacking Christianity would be an unpopular stance, and would not only cost him much of his erstwhile support but also provoke opposition from powerful figures in the churches whom he could ill afford to alienate.

Hitler therefore postponed his eagerly envisaged Kirchenkampf, or settling of accounts with the churches, until after the war, if only because he wished to avoid fighting a war on multiple fronts.

Thus, Speer, in his post-war memoirs, noting that “in Berlin, surrounded by male cohorts, [Hitler] spoke more coarsely and bluntly than he ever did elsewhere”, quotes Hitler as declaring in such company more than once: 

“Once I have settled my other problems… I’ll have my reckoning with the church. I’ll have it reeling on the ropes” (Inside the Third Reich: p123).

Hicks also asserts: 

“The Nazis took great pains to distinguish the Jews and the Christians, condemning Judaism and embracing a generic type of Christianity” (p83).

In fact, the form of Christianity that was, at least in public, espoused by the Nazis, namely what they called ‘Positive Christianity’, was far from “a generic type of Christianity”: rather, it was a very idiosyncratic, indeed quite heretical, take on the Christian faith, which attempted to divest Christianity of its Jewish influences and portray Jesus as an Aryan hero fighting against Jewish power, while even incorporating elements of Gnosticism and Germanic paganism.

Moreover, far from attempting to deny the connection between Christianity and Judaism, there is some evidence that Hitler actually followed Nietzsche in directly linking Christianity to Jewish influence. Thus, in his diary, Goebbels quotes Hitler directly linking Christianity and Judaism:  

“[Hitler] views Christianity as a symptom of decay. Rightly so. It is a branch of the Jewish race. This can be seen in the similarity of religious rites. Both (Judaism and Christianity) have no point of contact to the animal element” (The Goebbels Diaries, 1939-1941: p77). 

Likewise, in his Table Talk, carefully recorded by Bormann and others, Hitler declares on the night of the 11th July: 

“The heaviest blow that ever struck humanity was the coming of Christianity. Bolshevism is Christianity’s illegitimate child. Both are inventions of the Jew” (Table Talk: p7).

Here, in linking Christianity and Judaism, and attributing Jewish origins to Christianity, Hitler is, of course, following Nietzsche, since a central theme of the latter’s The Antichrist is that Christianity is indeed very much a Jewish invention. 

Indeed, the whole thrust of this quotation will immediately be familiar to anyone who has read Nietzsche’s The Antichrist. Thus, just as Hitler describes Christianity as “the heaviest blow that ever struck humanity”, so Nietzsche himself declared: 

“Christianity remains to this day the greatest misfortune of humanity” (The Antichrist: 51).

Similarly, just as Hitler describes “Bolshevism” as “Christianity’s illegitimate child”, so Nietzsche anticipates him in detecting this family resemblance, in The Antichrist declaring: 

“The anarchist and the Christian have the same ancestry” (The Antichrist: 57).

Thus, in this quoted passage, Hitler aptly summarizes the central themes of The Antichrist in a single paragraph, the only difference being that, in Hitler’s rendering, the implicit anti-Semitic subtext of Nietzsche’s work is made explicit.

Elsewhere in Table Talk, Hitler echoes other distinctly Nietzschean themes with regard to Christianity.  

Thus, just as Nietzsche famously condemned Christianity as an expression of slave morality and ‘ressentiment’, so Hitler declares:

“Christianity is a prototype of Bolshevism: the mobilisation by the Jew of the masses of slaves with the object of undermining society” (Table Talk: p75-6).

This theme is classically Nietzschean.

Another common theme is the notion of Christianity as rejection of life itself. Thus, in a passage that I have already quoted above, Nietzsche declares: 

“To attack the passions at their roots, means attacking life itself at its source: the method of the Church is hostile to life… The saint in whom God is well pleased, is the ideal eunuch. Life terminates where the ‘Kingdom of God’ begins” (Twilight of the Idols: iv:1).

Hitler echoes a similar theme, himself declaring, in one passage where he elucidates a social Darwinist ethic:

“Christianity is a rebellion against natural law, a protest against nature. Taken to its logical extreme, Christianity would mean the systematic cultivation of the human failure” (Table Talk: p51).

In short, in his various condemnations of Christianity from Table Talk, Hitler is clearly drawing on his own reading of Nietzsche. Indeed, in some passages (e.g. Table Talk: p7; p75-6), he could almost be accused of plagiarism.

Historians like to belittle the idea that Hitler was at all erudite or well-read, suggesting that, although famously an avid reader, his reading material was likely largely limited to such material as Streicher’s Der Stürmer and a few similarly crude antisemitic pamphlets circulating in the dosshouses of pre-War Vienna.

Hicks rightly rejects this view. From these quotations from Hitler’s Table Talk alone, I would submit that it is clear that Hitler had read Nietzsche.[8]

National Socialism and Socialism 

Another area where Hicks misinterprets Nazi ideology, upon which many other reviewers have rather predictably fixated, is the vexed and perennial question of the extent to which the National Socialist regime, which, of course, in name at least, purported to be socialist, is indeed accurately described as such. 

Mainstream historians generally reject the view that the Nazis were in any sense truly socialist.

Partly this rejection of the notion that the Nazis were at all socialist may reflect the fact that many of the historians writing about this period of history are themselves socialist, or at least sympathetic to socialism, and hence wish to absolve socialism of any association with, let alone responsibility for, National Socialism.[9]

Hicks, who, for his part, seems to be something of a libertarian as far as I can make out, has a very different conclusion: namely that the National Socialists were indeed socialists and that socialism was in fact a central plank of their political programme. 

Thus, Hicks asserts: 

“The Nazis stood for socialism and the principle of the central direction of the economy for the common good” (p106).

Certainly, Hicks is correct that the Nazis stood for “the central direction of the economy”, albeit not so much “for the common good” of humanity, nor even of all German citizens, as for the “common good” only of ethnic Germans, with this “common good” being defined in Hitler’s own idiosyncratic terms and involving many of these ethnic Germans dying in his pointless wars of conquest.

Thus, Hayek, who equates socialism with big government and a planned economy, argues in The Road to Serfdom that the Nazis, and the Fascists of Italy, were indeed socialist.

However, I would argue that socialism is most usefully defined as entailing, not just the central direction of the economy, but also economic redistribution and the promotion of socio-economic equality.[10]

Yet, in Nazi Germany, the central direction of the economy was primarily geared, not towards promoting socioeconomic equality, but rather towards preparing the nation for war, in addition to various proposed vanity architectural projects.[11]

To prove the Nazis were socialist, Hicks relies extensively on the party’s 25-point programme.

Yet this document was issued in 1920, when Hitler had yet to establish full control over the nascent movement, and still reflected the socialist ethos of many of the movement’s founders, whom Hitler was later to displace. 

Thus, German National Socialism, like Italian Fascism, did indeed very much begin on the left, attempting to combine socialism with nationalism, and thereby provide an alternative to the internationalist ethos of orthodox Marxism.  

However, long before either movement had ever even come within distant sight of power, each had already toned down, if not abandoned, much of their earlier socialist rhetoric. 

Certainly, although he declared the party programme inviolable and immutable and blocked any attempt to amend or repudiate it, Hitler also took few, if any, steps to actually implement most of the socialist provisions of the 25-point programme.[12]

Hicks also reports: 

“So strong was the Nazi party’s commitment to socialism that in 1921 the party entered into negotiations to merge with another socialist party, the German Socialist Party” (p17).

Hicks admits “the negotiations fell through”, but what he does not mention is that the deal was scuppered precisely because Hitler himself, then not yet the movement’s leader but already the NSDAP’s most dynamic organizer and speaker, specifically vetoed any notion of a merger, threatening to resign if he did not have his way, and thereby established control over the nascent party. 

To further buttress his claim that the Nazis were indeed socialist, Hicks also quotes extensively from Joseph Goebbels, Hitler’s Minister for Propaganda (p18). 

Goebbels was indeed among the most powerful figures in the Nazi leadership besides Hitler himself, and the quotations attributed to him by Hicks do indeed suggest leftist socialist sympathies.

However, Goebbels was, in this respect, something of an exception and outlier among the National Socialist leadership, since he had defected from the Strasserist wing of the Party, which was indeed socialist in orientation, but which was first marginalized and then suppressed under Hitler’s leadership long before the Nazis came to power, with most remaining sympathizers, Goebbels excepted, purged or fleeing during the Night of the Long Knives.

Goebbels may have retained some socialist sympathies thereafter. However, despite his power and prominence in the Nazi regime, he does not seem to have had any great success in steering the regime towards socialist redistribution or other leftist policies.

In short, while National Socialism may have begun on the left, by the time the regime attained power, and certainly while they were in power, their policies were not especially socialist, at least in the sense of being economically redistributive or egalitarian. 

Nevertheless, it is indeed true that, with their centrally-planned economy and large government-funded public works projects, the National Socialist regime probably had more in common with the contemporary left, at least in a purely economic sense, than it would with the neoconservative, neoliberal free market ideology that has long been the dominant force in Anglo-American conservatism. 

Thus, whether the Nazis were indeed ‘socialist’ ultimately depends on precisely how we define the word ‘socialist’.

Nazi Antisemitism 

Yet one aspect of National Socialist ideology was indeed, in my view, left-wing and socialist in origin—namely, their anti-Semitism.

Of course, anti-Semitism is usually associated with the political right, more especially the so-called ‘far right’. 

However, in my view, anti-Semitism is always fundamentally leftist in nature. 

Thus, Marxists claim that society is controlled by a conspiracy of wealthy capitalists who control the mass media and exploit and oppress everyone else. 

Nazis and anti-Semites, on the other hand, claim that society is controlled by a conspiracy of wealthy Jewish capitalists who control the mass media and exploit and oppress everyone else. 

The distinction between Nazism and Marxism is, then, a relatively marginal one.

Antisemites and Nazis believe that our capitalist oppressors are all, or mostly, Jewish. Marxists, on the other hand, take no stance on the matter either way and generally prefer not to talk about it. 

As a famous German political slogan had it: 

‘Antisemitism is the socialism of fools.’

Indeed, anti-Semites who blame all the problems of the world on the Jews always remind me of Marxists who blame all the problems of the world on capitalism and capitalists, feminists who blame their problems on men, and black people who blame all their problems on ‘the White Man’. 

Interestingly, Nietzsche himself recognized this same parallel, writing of what he calls “ressentiment”, an important concept in his philosophy with connotations of repressed or sublimated envy and an inferiority complex, that:

“This plant blooms its prettiest at present among Anarchists and anti-Semites” (On the Genealogy of Morals: ii: 11).

In other words, Nietzsche seems to be recognizing that both socialism and anti-Semitism reflect what modern conservatives often term ‘the politics of envy’. 

Thus, in The Will to Power, Nietzsche observes: 

“The anti-Semites do not forgive the Jews for having both ‘intellect’ and ‘money’” (The Will to Power: IV:864).

Nietzschean Antisemitism

Yet Jews themselves are, in Nietzsche’s thinking, by no means immune from the “ressentiment” that he also diagnoses in socialists and antisemites.

On the contrary, it is Jewish ressentiment vis-à-vis successive waves of conquerors—especially the Romans—that, in Nietzsche’s thinking, birthed Christianity, slave morality and the original transvaluation of values that he so deplores.

Thus, Nietzsche relates in Beyond Good and Evil that: 

“The Jews performed the miracle of the inversion of valuations, by means of which life on earth obtained a new and dangerous charm for a couple of millenniums. Their prophets fused into one the expressions ‘rich,’ ‘godless,’ ‘wicked,’ ‘violent,’ ‘sensual,’ and for the first time coined the word ‘world’ as a term of reproach. In this inversion of valuations (in which is also included the use of the word ‘poor’ as synonymous with ‘saint’ and ‘friend’) the significance of the Jewish people is to be found; it is with them that the slave-insurrection in morals commences” (Beyond Good and Evil: V: 195).[13]

Thus, in The Antichrist, Nietzsche talks of “the Christian” as “simply a Jew of the ‘reformed’ confession”, and “the Jew all over again—the threefold Jew” (The Antichrist: 44), concluding: 

“Christianity is to be understood only by examining the soil from which it sprung—it is not a reaction against Jewish instincts; it is their inevitable product” (The Antichrist: 24).

All of this, it is clear from the tone and context, is not at all intended as a compliment—either to Jews or Christians.

Thus, lest we have any doubts on this matter, Nietzsche declares in Twilight of the Idols:

“Christianity as sprung from Jewish roots and comprehensible only as grown upon this soil, represents the counter-movement against that morality of breeding, of race and of privilege:—it is essentially an anti-Aryan religion: Christianity is the transvaluation of all Aryan values, the triumph of Chandala values, the proclaimed gospel of the poor and of the low, the general insurrection of all the down-trodden, the wretched, the bungled and the botched, against the ‘race,’—the immortal revenge of the Chandala as the religion of love” (Twilight of the Idols: VI:4). 

Thus, if Nietzsche rejected the anti-Semitism of his sister, brother-in-law and former idol, Wagner, he nevertheless constructed in its place a new anti-Semitism all of his own, which, far from blaming the Jews for the crucifixion of Christ, instead blamed them for the genesis of Christianity itself—a theme that is, as we have seen, directly echoed by Hitler in his Table Talk. 

Thus, Nietzsche remarks in The Antichrist: 

“[Jewish] influence has so falsified the reasoning of mankind in this matter that today the Christian can cherish anti-Semitism without realizing that it is no more than the final consequence of Judaism” (The Antichrist: 24). 

An even more interesting passage regarding the Jewish people appears just a paragraph later, where Nietzsche observes: 

“The Jews are the very opposite of décadents: they have simply been forced into appearing in that guise, and with a degree of skill approaching the non plus ultra of histrionic genius they have managed to put themselves at the head of all décadent movements (for example, the Christianity of Paul), and so make of them something stronger than any party… To the sort of men who reach out for power under Judaism and Christianity,—that is to say, to the priestly class—décadence is no more than a means to an end. Men of this sort have a vital interest in making mankind sick” (The Antichrist: 24). 

Here, Nietzsche echoes, or perhaps even originates, what is today a familiar theme in anti-Semitic discourse—namely, that Jews champion subversive and destructive ideologies (Marxism, feminism, multiculturalism, mass migration of unassimilable minorities) only to weaken the Gentile power structure and thereby enhance their own power.[14]

This idea finds its most sophisticated (but still flawed) contemporary exposition in the work of evolutionary psychologist and contemporary anti-Semite Kevin MacDonald, who, in his book, The Culture of Critique (reviewed here), conceptualizes a range of twentieth century intellectual movements such as psychoanalysis, Boasian anthropology and immigration reform as what he calls ‘group evolutionary strategies’ that function to promote the survival and success of the Jews in diaspora. 

Nietzsche, however, goes further and extends this idea to the genesis of Christianity itself. 

Thus, in Nietzsche’s view, Christianity, as an outgrowth of Judaism and an invention of Paul and the Jewish ‘priestly class’, is itself a part of what MacDonald would call a ‘Jewish group evolutionary strategy’, designed to undermine the goyish Roman civilization under whose yoke Jews had been subjugated. 

Nietzsche, a professed anti-Christian but an admirer of the ancient Greeks (or at least of some of them), and even more so of the Romans, would likely agree with Tertullian that Jerusalem has little to do with Athens – or indeed with Rome. However, Hicks observes: 

“As evidence of whether Rome or Judea is winning, [Nietzsche] invites us to consider to whom one kneels down before in Rome today” (p70). 

Racialism and the Germans 

Yet, with regard to their racial views, Nietzsche and the Nazis differ, not only in their attitude towards Jews, but also in their attitude towards Germans. 

Thus, according to Hicks: 

“The Nazis believe the German Aryan to be racially superior—while Nietzsche believes that the superior types can be manifested in any racial type” (p85). 

Yet, here, Hicks is only half right. While it is certainly true that the Nazis extolled the German people, and the so-called ‘Aryan race’, as a master race, it is not at all clear that Nietzsche indeed believed that the superior type of man can be found among all races. 

Actually, besides a few comments about Jews, mostly favourable, and a few more about the Germans and the English, almost always disparaging, Nietzsche says surprisingly little about race. 

However, on reflection, this is not at all surprising, since, being resident throughout his life in a Europe that was then very much monoracial, Nietzsche probably had little if any direct contact with nonwhite races or peoples. 

Moreover, living as he did in the nineteenth century, when European power was at its apex, and much of the world controlled by European colonial empires, Nietzsche, like most of his European contemporaries, probably took white European racial superiority very much for granted. 

It is therefore only natural that his primary concern was the relative superiority and status of the various European subtypes – hence his occasional comments regarding Jews, English, Germans and occasionally other groups such as the French. 

Hicks asserts: 

“The Nazis believe contemporary German culture to be the highest and the best hope for the world—while Nietzsche holds contemporary German culture to be degenerate and to be infecting the rest of the world” (p85). 

Yet this is something of a simplification of National Socialist ideology. 

In fact, the Nazis too believed that the Germany of their own time – namely the Weimar Republic – was decadent and corrupt. 

Indeed, a belief in both national degeneration and in the need for national spiritual rebirth and awakening has been identified as a key defining element in fascism.[15]

Thus, Nietzsche’s own belief in the decadence of contemporary western civilization, and arguably also his belief in the coming Übermensch as promising spiritual revitalization, conforms, in many respects, to a paradigmatically and prototypically fascist model.[16]

Of course, the Nazis only believed that German culture was corrupt and decadent before they had themselves come to power and hence supposedly remedied this situation.  

In contrast, Nietzsche never had the chance to rejuvenate the German culture and civilization of his own time – and nor did he live to see the coming Übermensch.[17]

‘The Blond Beast’  

Hicks contends that Nietzsche’s employment of the phrase “the blond beast” in The Genealogy of Morals is not a racial reference to the characteristically blond hair of Nordic Germans, as it has sometimes been interpreted, but rather a reference to the blond mane of the lion. 

Actually, I suspect Nietzsche may have intended a double-meaning or metaphor, referring to both the stereotypically blond complexion of the Germanic warrior and to the mane of the lion. 

Indeed, the use of such a double-meaning or metaphor would be typical of Nietzsche’s poetic, literary and distinctly non-philosophical (or at least not traditionally philosophical) style of writing. 

Thus, even in one of the passages from The Genealogy of Morals employing this metaphor that is quoted by Hicks himself, Nietzsche explicitly refers to “the blond Germanic beast [emphasis added]” (quoted: p78).[18]

It is true that, in another passage from the same work, Nietzsche contends that “the splendid blond beast” lies at “the bottom of all these noble races”, among whom he includes, not just the Germanic, but also such distinctly non-Nordic races as “the Roman, Arabian… [and] Japanese nobility” among others (quoted: p79). 

Here, the reference to the Japanese “nobility”, rather than the Japanese people as a whole, is, I suspect, key, since, as we have seen, Nietzsche clearly regards the superior type of man, if present at all, as always necessarily a minority among all races. 

However, in referring to “noble races”, Nietzsche necessarily implies that certain other races are not so “noble”. Just as to say that certain men are ‘superior’ necessarily implies that others are inferior, since superiority is a relative concept, so to talk of “noble races” necessarily supposes the existence of ignoble races too. 

Thus, if the superior type of man, in Nietzsche’s view, only ever represents a small minority of the population among any race, it does not necessarily follow that, in his view, such types are to be found among all races. 

Hicks is therefore wrong to conclude that: 

“Nietzsche believes that the superior types can be manifested in any racial type” (p85). 

In short, just because Nietzsche believed that the vast majority of contemporary Germans were poltroons, Chandala, ‘beer drinkers’ and ‘herd animals’, it does not necessarily follow that he also believed that an Australian Aboriginal can be an Übermensch. 

A Nordicist, Aryanist, Völkisch Milieu? 

Thus, for all his condemnation of Germans and German nationalism, one cannot help forming the impression on reading Nietzsche that he very much existed within, if not a German nationalist milieu, then at least a broader Nordicist, Aryanist and Völkisch intellectual milieu – the same milieu that birthed certain key strands in the National Socialist Weltanschauung

This is apparent in the very opening lines of The Antichrist, where Nietzsche declares himself, and his envisaged readership, to be “Hyperboreans”, a term popular among proto-Nazi occultists, such as some members of the Thule Society, the group which itself birthed what was to become the NSDAP, and which had named itself after the supposed capital of the mythical Hyperborea.[19]

It is also apparent when, in Twilight of the Idols, he disparages Christianity as specifically an “anti-Aryan religion… [and] the transvaluation of all Aryan values” (Twilight of the Idols: VI:4). 

Apologists sometimes insist that Nietzsche, as a philologist by training, was only using the word Aryan in the linguistic sense, i.e. where we would today say ‘Indo-European’. 

However, Nietzsche was writing at a time and place, namely Germany in the nineteenth century, when Aryanist ideas were very much in vogue, and it would be naïve to think that Nietzsche was not all too aware of the full connotations of this word. 

Moreover, his references to “Aryan values” and an “anti-Aryan religion”, referring, as they do, to values and religion, clearly go beyond merely linguistic descriptors. Though they may envisage a mere cultural inheritance from the proto-Indo-Europeans, they seem, in my reading, to anticipate, not so much a scientific biological conception of race, including race differences in behaviour and psychology, as the mystical, quasi-religious and slightly bonkers ‘spiritual racialism’ of Nietzsche’s self-professed disciples, Spengler and Evola. 

Less obviously, this affinity for Nazi-style ‘Aryanism’ is also apparent in Nietzsche’s extolment of the Law of Manu and the Indian caste system, and in his adoption of the Sanskrit term Chandala for the ‘herd animals’ he so disparages. For although South Asians are obviously far from racially Nordic, proto-Nazi Völkisch esotericists (and their post-war successors) nevertheless had a curious obsession with Hindu religion and caste, and it is from India that the Nazis seemingly took both the swastika symbol and the very word ‘Aryan’. 

Indeed, even Nietzsche’s odd decision to name his prophet of the coming Übermensch, and mouthpiece for his own philosophy, after the Iranian religious figure Zarathustra may have reflected the fact that the historical Zoroaster was, of course, Iranian, and hence quintessentially ‘Aryan’ – this despite the fact that the philosophy of the historical Zoroaster, at least as it is remembered today, had little in common with Nietzsche’s own philosophy, but rather represented almost its opposite (which may itself have been Nietzsche’s point).

Will Durant, in The Story of Philosophy, writes: 

“Nietzsche was the child of Darwin and the brother of Bismarck. It does not matter that he ridiculed the English evolutionists and the German nationalists: he was accustomed to denounce those who had most influenced him; it was his unconscious way of covering up his debts” (The Story of Philosophy: p373).[20]

This perhaps goes some way to making sense of Nietzsche’s ambiguous relationship to Darwin, whose theory he so often singles out for criticism. 

Perhaps something similar can be said of Nietzsche’s relationship, not only to German nationalism, but also to anti-Semitism, since, as a former disciple of Wagner, he existed within a German nationalist and anti-Semitic intellectual milieu, from which he sought to distinguish himself but which he never wholly relinquished. 

Thus, if Nietzsche condemned the crude anti-Semitism of Wagner, his sister and brother-in-law, he nevertheless constructed in its place a new anti-Semitism that blamed the Jews, not merely for the crucifixion of Christ, but rather for the very invention of Christianity, Christian ethics and the entire edifice of what he called ‘slave morality’ and the ‘transvaluation of values’. 

Nietzschean Philosemitism?

Thus, even Nietzsche’s many apparently favorable comments regarding the Jews can often be interpreted as backhanded compliments. 

As a character from a Michel Houellebecq novel observes: 

“All anti-Semites agree that the Jews have a certain superiority. If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead” (Platform: p113). 

Indeed, Nazi propaganda provides a good illustration of this. 

Thus, in claiming that Jews, who only ever represented a tiny minority of the Weimar-era German population, nevertheless dominated the media, banking, commerce and the professions, Nazi propaganda often came close to implicitly conceding Jewish superiority – since to dominate the economy of a mighty power like Germany, despite representing only a tiny minority of the population, is hardly a feat indicative of inferiority. 

Indeed, Nazi propaganda came close to self-contradiction, since, if Jews did indeed dominate the Weimar-era economy to the extent claimed in Nazi propaganda, this not only suggests that the Jews themselves were far from inferior to the German Gentiles whom they had ostensibly oppressed and subjugated, but also that the Germans themselves, in allowing themselves to be so dominated by this tiny minority of Jews in their midst, were something rather less than the Aryan Übermensch and master race of Hitler’s own demented imagining. 

Many antisemites have praised the Jews for their tenacity, resilience, survival, alleged clannishness and ethnocentrism, and, perhaps most ominously, their supposed racial purity. 

For example, Houston Stewart Chamberlain, a major influence on Nazi race theory and mentor to Hitler himself, nevertheless insisted:

“The Jews deserve admiration, for they have acted with absolute consistency according to the logic and truth of their own individuality and never for a moment have they allowed themselves to forget the sacredness of physical laws because of foolish humanitarian day-dreams which they shared only when such a policy was to their advantage” (Foundations of the Nineteenth Century: p531).[21]

Similarly, contemporary antisemite Kevin MacDonald, arguing that Jews might serve as a model for less ethnocentric white westerners to emulate, professes to:

“Greatly admire Jews as a group that has pursued its interests over thousands of years, while retaining its ethnic coherence and intensity of group commitment” (MacDonald 2004). 

Indeed, even Hitler himself came close to philosemitism in one passage of Mein Kampf, where he declares: 

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self-preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof” (Mein Kampf).[22]

Many of Nietzsche’s own apparently complimentary remarks regarding the Jewish people can be interpreted in much the same vein. 

Thus, Hicks himself credits Nietzsche with deploring the slave morality that was the Jews’ legacy, but nevertheless recognizing that this slave morality was a highly successful strategy in enabling them to survive and prosper in diaspora as a defeated and banished people. Nietzsche, on this account, admires them as: 

“Inheritors of a cultural tradition that has enabled them to survive and even flourish despite great adversity… [and] would at the very least have to grant, however grudgingly, that the Jews have hit upon a survival strategy and kept their cultural identity for well over two thousand years” (p82). 

Thus, in one of his many backhanded compliments, Nietzsche declares:  

“The Jews are the most remarkable people in the history of the world, for when they were confronted with the question, to be or not to be, they chose, with perfectly unearthly deliberation, to be at any price: this price involved a radical falsification of all nature, of all naturalness, of all reality, of the whole inner world, as well as of the outer” (The Antichrist: 24). 

Defeating Nazism 

In Hicks’s final chapter, he discusses how best Nazism can be defeated. In doing so, he seemingly presupposes that Nazism is, not only an evil that must be defeated, but moreover the ultimate evil that must be defeated at all costs and that we must therefore structure our entire economic and political system in order to achieve this goal and prevent any possibility of Nazism’s reemergence. 

In doing so, he identifies what he sees as “the direct opposite of what the Nazis stood for” as necessarily “the best antidote to National Socialism we have” (p106-7). 

Yet, to assume that there is a “direct opposite” to each of the Nazis’ central tenets is to assume that all political positions can be conceptualized along a one-dimensional axis, with the Nazis at one end and Hicks’s own rational free market utopia at the other. 

In reality, the political spectrum is multidimensional and there are many alternatives to each of the tenets identified by Hicks as integral to Nazism, not just a single opposite. 

More importantly, it is not at all clear that the best way to defeat any ideology is necessarily to embrace its polar opposite. 

On the contrary, embracing an opposite form of extremism often only provokes a counter-reaction and is hence counterproductive. Instead, often the best way to defeat extremism is to address some of the legitimate issues raised by the extremists and offer practical, realistic solutions and compromise – i.e. moderation rather than extremism. 

Thus, in the UK, the two main post-war electoral manifestations of what was arguably a resurgent Nazi-style racial nationalism were the National Front in the 1970s and the British National Party (BNP) in the 2000s, each of which achieved some rather modest electoral successes, and inspired a great deal of media-led moral panic, in their respective heydays, before quickly fading into obscurity and electoral irrelevance. 

Yet each was defeated, not by the emergence of an opposite extremism of either left or right, nor by the often violent agitation and activism of self-styled ‘anti-fascists’, but rather by the emergence of political figures or movements that addressed some of the legitimate issues raised by the extremist groups, especially regarding immigration, but cloaked them in more moderate language. 

Thus, in the 2000s, the BNP was largely outflanked by the rise of UKIP, which increasingly echoed much of the BNP’s rhetoric regarding mass immigration, but largely avoided any association with racism, white supremacism or neo-Nazism. In short, UKIP outflanked the BNP by being precisely what the BNP had long pretended to be – namely, a non-racist, anti-immigration civic nationalist party – only, in the case of UKIP, the act actually appeared to be genuine.

Meanwhile, in the 1970s, the collapse and implosion of the National Front was largely credited to the rise of Margaret Thatcher, who, in one infamous interview, empathized with the fear of many British people that their country was being “swamped by people with a different culture” – though, in truth, once in power, she did little to arrest or even slow, let alone reverse, this ongoing and now surely irreversible process of demographic transformation. 

Misreading Nietzsche 

Why, then, has Nietzsche come to be so misunderstood? How is it that this nineteenth-century German philosopher has come to be claimed as a precursor by everyone from fascists and libertarians to leftist postmodernists? 

The fault, in my view, lies largely with Nietzsche himself, in particular with his obscure, esoteric writing style, at its worst in his infamously indecipherable Thus Spake Zarathustra, but evident to some extent throughout his entire body of writing. 

Indeed, Nietzsche, perhaps to his credit, not so much admits as proudly declares his deliberately impenetrable prose style, in one parenthesis from Beyond Good and Evil that has been variously translated as: 

“I obviously do everything to be ‘hard to understand’ myself” 

and 

“I do everything to be difficultly understood myself” (Beyond Good and Evil: II, 27).

Admittedly, here, the wording, or at least the various English renderings, is itself not entirely clear in its meaning. However, the fact that even this single seemingly simple sentence lends itself to somewhat different interpretations only illustrates the scale of the problem. 

In my view, as I have written previously, philosophers who adopt an aphoristic style of writing generally substitute bad poetry for good arguments. 

Thus, in one sense at least, leftist postmodernists are right to claim Nietzsche as a philosophical precursor: he, like them, delights in pretentious obfuscation and obscurantism. 

The best writers, in my view, generally present their ideas in the clearest and simplest language that the complexity of those ideas permits. 

Indeed, the most profound thinkers generally have no need to increase the complexity of ideas that are already inherently complex through deliberately obscure or impenetrable language. 

In contrast, it is only those with banal and unoriginal ideas who adopt deliberately complex and confusing language, in order to conceal the banality and unoriginality of those ideas. 

Thus, Richard DawkinsFirst Law of the Conservation of Difficulty states: 

“Obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity.”  

What applies to an academic subject applies equally to individual writers – namely, as a general rule, the greater the obscurantism, the less the substance and insight. 

Yet, unlike the postmodernists, poststructuralists, deconstructionists, contemporary continental philosophers and other assorted ‘professional damned fools’ who so often claim him as a precursor, Nietzsche is indeed, in my view, an important, profound and original thinker. 

Moreover, far from replacing good philosophy with bad poetry, Nietzsche is, besides being a profound and original thinker, also a magnificent prose stylist, the brilliance of whose writing shines through even in translation. 

Conclusion – Was Nietzsche a Nazi? 

The Nazis, we are repeatedly reassured by leftists, misunderstood Nietzsche. Either that or they deliberately misrepresented and misappropriated him. At any rate, one thing is clear – they were wrong. 

This argument is largely correct – as far as it goes. 

The Nazis did indeed engage in a disingenuous and highly selective reading of Nietzsche’s work, quoting his words out of context, and conveniently ignoring, or even suppressing, those passages of his writing where he explicitly condemns both anti-Semitism and German nationalism. 

The problem with this view is not that it is wrong – but rather with what it leaves out. 

Nietzsche may not have been a Nazi, but he was certainly an elitist and anti-egalitarian, opposed to socialism, liberalism, democracy and pretty much the entire liberal democratic political and social worldview of the contemporary west.

Indeed, although, today, in America at least, atheism tends to be associated with leftist, or at least liberal, views, and Christianity with conservatism and the right, Nietzsche opposed socialism precisely because he saw it as an inheritance of the very Judeo-Christian ‘slave morality’ to which his philosophy stood in opposition, albeit divested of the religious foundation which provided this moral system with its original justification and basis.

Thus, in The Will to Power, he observes that “socialists appeal to the Christian instincts” and bewails “the socialistic ideal” as merely “the residue of Christianity and of Rousseau in the de-Christianised world” (The Will to Power: III, 765; IV, 1017). Likewise, he laments of the English in Twilight of the Idols:

“They are rid of the Christian God and therefore think it all the more incumbent upon them to hold tight to Christian morality” (Twilight of the Idols: IX, 5).

While Nietzsche would certainly have disapproved of many aspects of Nazi ideology, it is not at all clear that he would have considered our own twenty-first century western culture as any better. Indeed he may well have considered it considerably worse. 

Thus, it is indeed true that Nietzsche was no National Socialist, but he was also far from a leftist or a liberal, and was far from politically correct by modern standards in his views regarding the Jews, for example. 

Indeed, the worldview of this most elitist and anti-egalitarian of thinkers is arguably even less reconcilable with contemporary left-liberal notions of social justice than is that of the Nazis themselves.  

Thus, if the Nazis did indeed misappropriate Nietzsche’s philosophy, then this misappropriation was as nothing compared to that of those leftists, post-modernists, post-structuralists and other such ‘professional damned fools’ who have vainly, and dishonestly, attempted to claim this most anti-egalitarian and elitist of thinkers on behalf of the left. 


[1] The claim that the foreign policies of governmental regimes of all ideological persuasions are governed less by their ideology than by power politics is, of course, a central tenet, indeed perhaps the central tenet, of the realist school of international relations theory. Indeed, Hitler himself provided a good example of this when, despite his ideological opposition to Judeo-Bolshevism and desire for lebensraum in the East, not to mention his disparaging racial attitude to the Slavic peoples, he nevertheless, rebuffed in his efforts to come to an understanding with Britain and France, or to form an alliance with Poland, sent Ribbentrop to negotiate a non-aggression pact with the Soviet Union. It can even be argued that it was Hitler’s abandonment of pragmatic realpolitik in favour of ideological imperative, when he later invaded the Soviet Union, that led to his own, and his regime’s, demise.

[2] Curiously missing from all such lists is Nietzsche’s own early idol, Arthur Schopenhauer. Yet it was Schopenhauer’s The World as Will and Representation that Hitler claimed to have carried with him in his knapsack in the trenches throughout the First World War, and Schopenhauer even has the dubious distinction of having his antisemitic comments regarding Jews favourably quoted by Hitler in Mein Kampf. Indeed, according to the recollections of filmmaker Leni Riefenstahl, Hitler professed to prefer Schopenhauer over Nietzsche, Hitler being quoted as asserting: 

“I can’t really do much with Nietzsche… He is more an artist than a philosopher; he doesn’t have the crystal-clear understanding of Schopenhauer. Of course, I value Nietzsche as a genius. He writes possibly the most beautiful language that German literature has to offer us today, but he is not my guide” (quoted: Hitler’s Private Library: p107). 

Somewhat disconcertingly, this assessment of Nietzsche – namely as “more… artist than philosopher” and far from “crystal-clear” in his writing style, but nevertheless a brilliant prose stylist, the beauty of whose writing shines through even in English translation – actually rather reflects my own assessment. Moreover, I too am an admirer of Schopenhauer’s writings, albeit not so much his philosophy, let alone his metaphysics, but more his theory of human behaviour and psychology.
Yet, on reflection, Schopenhauer is surely rightly omitted from lists of the philosophical influences on Nazism. Save for the antisemitic remarks quoted in Mein Kampf, which are hardly an integral part of Schopenhauer’s philosophy, there is little in Schopenhauer’s body of writing, let alone in his philosophical writings, that can be seen to jibe with National Socialist policy or ideology.

Indeed, Schopenhauer’s philosophy, to the extent it is prescriptive at all, advocates an ascetic withdrawal from worldly temptation, and championed art as a form of escapism.

Hitler did indeed, in some respects, even as dictator, live a frugal, spartan life. He was, in later life, reportedly a vegetarian who abstained from alcohol, and an art lover who found escapism in both movies and the operas of Wagner (the latter himself a disciple of Schopenhauer), and he seems, for most of his adult life, to have had little active sex life. However, the NSDAP programme, like all political programmes, necessarily involved active engagement with the world, something Schopenhauer would have dismissed as largely futile.
Indeed, Hitler himself aptly summarized why Schopenhauer’s philosophy could never be a basis for any type of active political programme, let alone that of the NSDAP, in a comment quoted by Hanfstaengl, where he bemoans Schopenhauer’s influence on his former mentor Eckart, remarking: 

“Schopenhauer has done Eckart no good. He has made him a doubting Thomas, who only looks forward to a Nirvana. Where would I get if I listened to all his [Schopenhauer’s] transcendental talk? A nice ultimate wisdom that: To reduce on[e]self to a minimum of desire and will. Once will is gone all is gone. This life is War” (quoted in: Hitler’s Philosophers: p24). 

Modern left-liberal apologists for Nietzsche often attempt to characterize Nietzsche as a largely apolitical thinker. This is, of course, deluded apologetics. However, as applied to Schopenhauer, the claim is indeed largely valid. 

[3] Hicks does not mention the figure who was, in my perhaps eccentric view, the greatest thinker associated with the NSDAP, namely Nobel Prize winning ethologist Konrad Lorenz, perhaps because, unlike the other thinkers whom he does discuss, Lorenz only joined the NSDAP several years after they had come to power, and his association with the NSDAP could therefore be dismissed as purely opportunistic. Alternatively, Hicks may have overlooked Lorenz simply because Lorenz was a biologist rather than a philosopher, though it should be noted that Lorenz also made important contributions to philosophy as well, in particular his pioneering work in evolutionary epistemology.

[4] It is true that Nietzsche does not actually envisage or advocate a return to the ‘master morality’ of an earlier age, but rather the construction of a new morality, which could, at the time he wrote, be foreseen only in rough outline. Nevertheless, it is clear he favoured this ‘master morality’ over the ‘slave morality’ that he associated with Christianity and our own post-Christian ethics, and also that he viewed the coming morality of the Übermensch as having much more in common with the ‘master morality’ of old than with the Christian ‘slave morality’ he so disparages. 

[5] Hitler exerted a direct impact on world history from 1933 until his death in 1945. Yet Hitler, or at least the spectre of Hitler, continues to exert an indirect but not insubstantial impact on contemporary world politics to this day, as a kind of ‘bogeyman’, in opposition to whom we define our views, and whom we invoke as a kind of threat or form of guilt-by-association. This is most obvious in the familiar ‘reductio ad Hitlerum’. Of course, in considering the question of whether Hitler may indeed qualify as a ‘great man’, we are not using the word ‘great’ in a moral sense. Rather, we are employing the term in the older sense, meaning ‘large in size’. This exculpatory clarification we might aptly term the Farrakhan proviso. 

[6] Collectivists are, almost by definition, authoritarian, since collectivism necessarily demands that individual rights and freedoms be curtailed, restricted or abrogated for the benefit of the collective, and this invariably requires coercion because people have evolved to selfishly promote their own inclusive fitness at the expense of that of rivals and competitors. However, authoritarianism can also be justified on non-collectivist grounds. Nietzsche’s proposed restrictions of the individual liberty of the ‘herd animal’ and ‘Chandala’ seem to me to be justified, not by reference to the individual or collective interests of such ‘Chandala’, but rather by reference to the interests of the superior man and of the higher evolution of mankind.

[7] The first of these is a pair of interviews that were supposedly conducted with Hitler by German journalist Richard Breiting in 1931, to which Hicks sources several supposed quotations from Hitler (p117; p122; p124; p125; p133). Unfortunately, however, the interviews, only published in 1968 by Yugoslavian journalist Edouard Calic several decades after they were supposedly conducted, contain anachronistic material and are hence almost certainly post-war forgeries. Richard Evans, for example, described them as having obviously been in large part, if not completely, made up by Calic himself (Evans 2014).
The other is Hermann Rauschning’s The Voice of Destruction, published in Britain under the title Hitler Speaks, to which Hicks sources several quotations from Hitler (p120; p125; p126; p134). This is now widely recognised as a fraudulent work of wartime propaganda. Historians now believe that Rauschning actually met with Hitler on only a few occasions, was certainly not a close confidant, and that most, if not all, of the conversations with Hitler recounted in The Voice of Destruction are pure inventions.
Thus, for example, Ian Kershaw in the first volume of his Hitler biography, Hitler, 1889–1936: Hubris, makes sure to emphasize in his preface: 

I have on no single occasion cited Hermann Rauschning’s Hitler Speaks [the title under which The Voice of Destruction was published in Britain], a work now regarded to have so little authenticity that it is best to disregard it altogether” (Hitler, 1889–1936: Hubris: pxvi). 

Similarly, Richard Evans definitively concludes:

Nothing was genuine in Rauschning’s book: his ‘conversations with Hitler’ had no more taken place than his conversations with Göring. He had been put up to writing the book by Winston Churchill’s literary agent, Emery Reeves, who was also responsible for another highly dubious set of memoirs, the industrialist Fritz Thyssen’s I Paid Hitler” (Evans 2014).

Admittedly, Rauschning’s work was once taken seriously by mainstream historians, and The Voice of Destruction is cited repeatedly in such early and still-celebrated works as Trevor-Roper’s The Last Days of Hitler, first published in 1947, and Bullock’s Hitler: A Study in Tyranny, first published in 1952. However, Hicks’s own book was published in 2006, by which time Rauschning’s work had long since been exposed as a hoax.
Indeed, it is something of an indictment of the standards, not to mention the politicized and moralistic tenor, of what we might call ‘Hitler historiography’ that this work was ever taken seriously by historians in the first place. First published in the USA in 1940, it was clearly a work of anti-Nazi wartime propaganda and much of the material is quite fantastic in content.
For example, there are bizarre passages about Hitler having “long been in bondage to a magic which might well have been described, not only in metaphor but in literal fact, as that of evil spirits” and of Hitler “wak[ing] at night with convulsive shrieks”, and one such passage describes how Hitler:

Stood swaying in his room, looking wildly about him. “He! He! He’s been here!” he gasped. His lips were blue. Sweat streamed down his face. Suddenly he began to reel off figures, and odd words and broken phrases, entirely devoid of sense. It sounded horrible. He used strangely composed and entirely un-German word-formations. Then he stood quite still, only his lips moving. He was massaged and offered something to drink. Then he suddenly broke out — “There, there! In the corner! Who’s that?” He stamped and shrieked in the familiar way. He was shown that there was nothing out of the ordinary in the room, and then he gradually grew calm” (The Voice of Destruction: p256) 

Yet, oddly, the first doubts regarding the authenticity of the conversations reported in The Voice of Destruction were raised, not by mainstream historians studying the Third Reich, but rather by an obscure Swiss researcher, Wolfgang Haenel, who first presented his thesis at a conference organized by a research institute widely associated with so-called ‘holocaust denial’. Moreover, other self-styled ‘holocaust revisionists’ were among the first to endorse Haenel’s critique of Rauschning’s work. Yet his conclusions are now belatedly accepted by virtually all mainstream scholars in the field. This perhaps suggests that such ‘revisionist’ research is not always without value.

[8] It must be acknowledged here that the question of the religious views of Hitler is a matter of some controversy. It is sometimes suggested that the hostile view of Christianity expressed in Hitler’s Table Talk reflects less the opinions of Hitler himself, and more those of Hitler’s private secretary, Martin Bormann, who was responsible for transcribing much of this material. Bormann is indeed known to have been hostile to Christianity, and Speer, who disliked Bormann, indeed remarks in his memoirs that:

If in the course of such a monologue Hitler had pronounced a more negative judgment upon the church, Bormann would undoubtedly have taken from his jacket pocket one of the white cards he always carried with him. For he noted down all Hitler’s remarks that seemed to him important; and there was hardly anything he wrote down more eagerly than deprecating comments on the church” (Inside the Third Reich: p95). 

However, it is important to note that Speer does not deny that Hitler himself did indeed make such remarks. Indeed, it is hardly likely that Bormann, a faithful, if not obsequious, acolyte of the Führer, would ever dare to falsely attribute to Hitler remarks which the latter had never uttered or views to which he did not subscribe. At any rate, the views attributed to Hitler in Table Talk are amply corroborated in other sources, such as in Goebbels’s diaries and indeed in Speer’s memoirs, both of which I have also quoted above.
It is also true that, elsewhere in Table Talk, Hitler talks approvingly of Jesus as “most certainly not a Jew”, and as fighting “against the materialism of His age, and, therefore, against the Jews”. This is, of course, a very odd and eccentric, not to mention historically unsupported, perspective on the historical Jesus.
However, it is interesting to note that Nietzsche too, despite his disdain for Christianity and his more orthodox view of the historical Jesus, professes to admire Jesus in The Antichrist. Indeed, in repeatedly placing the blame for Christianity not on Jesus himself, but rather on Paul of Tarsus, whom he accuses of transforming Christianity into “a rallying point for slaves of all kinds against the élite, the masters and those in dominant authority” (Table Talk: p722), Hitler is again following Nietzsche, who, in The Antichrist, similarly condemns Paul as the true founder of modern Christianity and of the Christian slave morality that infected western man.
Just to clarify, I am not here suggesting that Hitler’s views with respect to Christianity are identical to those of Nietzsche. On the contrary, they clearly differ in several respects, not least in their differing perspectives on the historical Jesus.
Nevertheless, Hitler’s religious views, as expressed in his Table Talk, clearly mirror those of Nietzsche in certain key respects, not least in seeing Christianity as the greatest tragedy to befall humanity, as inimical to life itself, and as a malign invention of or inheritance from Jews and Judaism. Given these parallels, it seems almost certain that the German Führer had read the works of Nietzsche and, to some extent, been influenced by his ideas.
Interestingly, elsewhere in his Table Talk, Hitler also condemns atheism, describing it as “a return to the state of the animal” and argues that “the notion of divinity gives most men the opportunity to concretise the feeling they have of supernatural” (Table Talk: p123; p61). Hitler also often referred to God, and especially providence, in a metaphorical sense. Indeed, he even himself professes a belief in a God, albeit of a decidedly non-Christian, pantheistic form, defining God as “the dominion of natural laws throughout the whole universe” (Table Talk: p6).
However, this only demonstrates that there are other forms of theism, and deism, besides Christianity, and that one can be opposed to Christianity without being opposed to all religion. Thus, Goebbels declares in his Diary: 

The Fuhrer is deeply religious, though completely anti-Christian” (The Goebbels diaries, 1939-1941: p77). 

The general impression from Table Talk is that Hitler sees himself, perhaps surprisingly, as a scientific materialist, albeit one who, like, it must be said, not a few modern scientific materialists, actually often knows embarrassingly little about science. (For example, in Table Talk, Hitler repeatedly endorses Hörbiger’s World Ice Theory, comparing Hörbiger to Copernicus in his impact on cosmology, and even proposing to oppose the “pseudo-science of the Catholic Church” with the ‘science’ of Ptolemy, Copernicus, and, yes, Hörbiger: Table Talk: p249; p324; p445.)

[9] After all, socialists already have the horrors of Mao, Stalin, Pol Pot and communist North Korea, among many others, on their hands. To be associated with National Socialism in Germany as well would effectively make socialism responsible for, or at least associated with, virtually all of the great atrocities of the twentieth century, rather than merely the vast majority of them. 

[10] Interestingly, although dictionary definitions available on the internet vary considerably, most definitions of ‘socialism’ tend to be much narrower than my definition, emphasizing, in particular, common or public ownership of the means of production. Partly, this reflects, I suspect, the different connotations of the word in British and American English. Thus, in America, where, until recently, socialism was widely seen as anathema, the term was associated with, and indeed barely distinguished from, communism or Marxism. In Britain, however, where the Labour Party, one of the two main parties of the post-war era, traditionally styled itself ‘socialist’, despite generally advocating and pursuing policies closer to what would be called, on continental Europe, ‘social democracy’, the word has much less radical connotations.

[11] Admittedly, reducing unemployment also seems to have been a further objective of some of the large public works projects undertaken under the Nazis (e.g. the construction of the autobahns), and this can indeed be seen as a socialist objective. However, socialists are, of course, not alone in seeing job creation as desirable and high rates of unemployment as undesirable. On the contrary, the desirability of job creation and of reducing unemployment is widely accepted across the political spectrum. Politicians differ primarily on the best way to achieve this goal. Those on the left are more likely to favour increasing public sector employment, including through the sorts of public works projects employed by the Nazis. Neo-liberals are more likely to favour cutting taxes, in order to increase spending and investment, which they theorize will increase private sector employment.

[12] It is possible Hitler’s own views evolved over time, and he too may initially have been more sympathetic to socialist policies. Thus, still largely unexplained is the full story of Hitler’s apparent involvement with the short-lived revolutionary regimes that ruled Munich in 1919, the first of which was led by the assassinated Jewish socialist Kurt Eisner. Ron Rosenbaum writes:

One piece of evidence adduced for this view documents Hitler’s successful candidacy for a position on the soldier’s council in a regiment that remained loyal to the short-lived Bolshevik regime that ruled Munich for a few weeks in April 1919. Another is a piece of faded, scratchy newsreel footage showing the February 1919 funeral procession for Kurt Eisner, the assassinated Jewish leader of the socialist regime then in power. Slowed down and studied, the funeral footage shows a figure who looks remarkably like Hitler marching in a detachment of soldiers, all wearing armbands on their uniforms in tribute to Eisner and the socialist regime that preceded the Bolshevik one” (Explaining Hitler: pxxxvii). 

If Hitler was indeed briefly a supporter of the People’s State of Bavaria, which remains far from proven, and this support reflected more than mere opportunism, then it remains to be established when his later anti-Semitic and anti-Marxist views crystallized. It is clear that, by the time he joined the nascent DAP, Hitler was already a confirmed anti-Semite. However, perhaps he still remained something of a socialist at this time. Indeed, this might explain why he joined the German Workers’ Party at all, which, at that early time, indeed seems to have had a broadly socialist, as well as nationalist, orientation. 

[13] In fact, Nietzsche is wrong to credit the Jews as the first to perform this transvaluation of values that elevated asceticism, poverty and abstinence from worldly pleasures into a positive value. On the contrary, similar and analogous notions of asceticism seem to have had an entirely independent, and apparently prior, origin in the Indian subcontinent, in the forms of Buddhism and, especially, Jainism.

[14] The supposed proof of this theory is to be found in the state of Israel, where Jews find themselves a majority, and where, far from embodying the sort of ideals of multiculturalism and tolerance that Jews have typically been associated with championing in the west, there is an apartheid state, persecution of the country’s Palestinian minority, an immigration policy that overtly discriminates against non-Jews, and increasing levels of conservatism and religiosity – all of which is said to prove that Jewish subversive iconoclasm is intended only for external Gentile consumption. 

[15] This is, for example, an integral part of the influential definition of fascism espoused by historian and political theorist Roger Griffin in his book, The Nature of Fascism.

[16] In fact, whether Nietzsche indeed envisaged the Übermensch in this way – namely as a real-world coming savior promising a new transvaluation of values and a revitalization of society and civilization that would restore the warrior ethos of the ancients – is not at all clear. Indeed, the concept of the Übermensch is mentioned quite infrequently in his writings, largely in Thus Spake Zarathustra and Ecce Homo, and is neither fully developed nor clearly explained. It has even been suggested that the importance of this concept in Nietzsche’s thought has been exaggerated, partly on account of its use in the title of George Bernard Shaw’s famous play, Man and Superman, which explores Nietzschean themes.
Elsewhere in his writing, Nietzsche is seemingly resolutely ‘blackpilled’ regarding the inevitability of moral and spiritual decline and the impossibility of any recovery. Thus, in Twilight of the Idols, he reproaches the conservatives for attempting to turn back the clock, declaring that an arrest, let alone a reverse, in the degeneration of mankind and civilization is an impossibility:

It cannot be helped: we must go forward,—that is to say step by step further and further into decadence (—this is my definition of modern ‘progress’). We can hinder this development, and by so doing dam up and accumulate degeneration itself and render it more convulsive, more volcanic: we cannot do more” (Twilight of the Idols: VIII, 43).

In other words, not only is God indeed dead (as are Zeus, Jupiter, Thor and Wotan), but, unlike Jesus in the Gospels, he can never be resurrected.

[17] Of course, another difference between Nietzsche and the Nazis is that the contemporary German cultures that each regarded as decadent were separated from each other by several decades. Thus, while Hitler may have despised the German culture of the 1920s as, in many respects, decadent, he nevertheless admired the German culture of Nietzsche’s time and certainly regarded that Germany as superior to the Weimar-era Germany in which he found himself after the First World War. 
Nevertheless, Hitler did not regard the Germany of Nietzsche’s own time as any kind of ‘golden age’ or ‘lost Eden’. On the contrary, he would have deplored the Germany of Nietzsche’s day both for its alleged domination by Jews and the fact that, even after Bismarck’s supposed unification of Germany, Hitler’s own native Austria remained outside the German Reich.
Thus, neither Nietzsche nor Hitler were mere reactionaries nostalgically looking to turn back the clock. On the contrary, as the passage from Twilight of the Idols quoted above makes clear, Nietzsche considered any arrest, let alone reversal, of decadence an impossibility.

Thus, just as Nietzsche does not yearn for a return to the master morality or paganism of pre-Christian Europe and classical antiquity, but rather for the coming Übermensch and new transvaluation of values that he would deliver, so Hitler’s own ‘golden age’ was to be found, not in the nineteenth century, nor even in classical antiquity, but rather in the new thousand year Reich he envisaged and sought to construct.

[18] Other English translations render the German as the “blond Teutonic beast [emphasis added]”. At any rate, regardless of the precise translation, it is clear that a reference to the ancient Germanic peoples is intended. 

[19] The influence of such occult ideas on the Nazi leadership is much exaggerated in some popular, sensationalist histories (or pseudohistories) of the Nazi period. However, the influence of Völkisch occultism on the development of the National Socialist movement is not entirely a myth, and is evident, not only in the name of the Thule Society, which birthed the NSDAP, but also, for example, in the movement’s adoption of the swastika as an emblem and later a flag. Indeed, although generally regarded as dismissive of such bizarre esoteric notions, and wary of their influence on some of his followers (notably Himmler and Hess) who did not share his skepticism, even Hitler himself professed belief in World Ice Theory in his Table Talk (p249; p324; p445).

[20] Nietzsche has an odd attitude to Darwinism and social Darwinism. On the one hand, he frequently disparages Darwin and Darwinism. On the other hand, his moral philosophy directly parallels that of the social Darwinists, albeit bereft of the Darwinian theory that provides the ostensible justification and basis for that moral philosophy.
Interestingly, Hitler too has an ambiguous, and, in some respects, similar, relationship with both Darwinism and social Darwinism. On the one hand, Hitler, like Nietzsche, frequently espouses views that read very much like social Darwinism. For example, in Mein Kampf, Hitler writes:

Those who want to live, let them fight, and those who do not want to fight in this world of eternal struggle do not deserve to live” (Mein Kampf).

Similarly, in his Table Talk, Hitler is quoted as declaring:

By means of the struggle, the elites are continually renewed. The law of selection justifies this incessant struggle, by allowing the survival of the fittest” (Hitler’s Table Talk).

Both these quotations definitely sound like social Darwinism. Yet, interestingly, Hitler never actually mentions Darwin or Darwinism, his reference to “the law of selection” being the closest he comes to referencing the theory of evolution, and even this is ambiguous, at least in the English rendering. Moreover, in a different passage from Table Talk, Hitler seemingly emphatically rejects the theory of evolution, demanding: 

Where do we acquire the right to believe that man has not always been what he is now? The study of nature teaches us that, in the animal kingdom just as much as in the vegetable kingdom, variations have occurred. They’ve occurred within the species, but none of these variations has an importance comparable with that which separates man from the monkey — assuming that this transformation really took place” (Hitler’s Table Talk: p248). 

What are we to make of this? Clearly, Hitler often contradicted himself and expressed inconsistent views.
Moreover, neither Hitler nor Nietzsche really understood Darwin’s theory of evolution. Thus, Nietzsche suggested that the struggle between individuals concerns, not mere survival, but rather power. In fact, it concerns neither survival nor power as such, but rather reproductive success (which tends to correlate with power, especially among men, which is why men, in particular, are known to seek power). Spencer’s phrase, survival of the fittest, is useful only once we recognise that the ‘survival’ promoted by selection is the survival of genes rather than of individual organisms themselves.
But we must recognize that it is possible, and quite logically consistent, to espouse something very similar in content to a social Darwinist moral framework without actually justifying this moral framework by reference to Darwinism.
In short, both Nietzsche and Hitler seem to be advocating something akin to ‘social Darwinism without the Darwinism’.

[21] If Hitler was influenced by Chamberlain, then Chamberlain himself was a disciple of Arthur de Gobineau. The latter, though considered by many the ultimate progenitor of Nazi race theory, was, far from being an anti-Semite, actually positively effusive in his praise for and admiration of the Jewish people. Even Chamberlain, though widely regarded as an anti-Semite, at least with respect to the Ashkenazim, nevertheless professed to admire Sephardi Jews, not least on account of their supposed ‘racial purity’, in particular their refusal to intermingle and intermarry with the Ashkenazim.

[22] The exact connotations of this passage may depend on the translation. The version I have quoted comes from the Manheim edition. However, a different translation renders the passage, not as “The mightiest counterpart to the Aryan is represented by the Jew”, but rather as “The Jew offers the most striking contrast to the Aryan”. This alternative translation has rather different, and less flattering, connotations, given that Hitler famously extolled the Aryans as the master race. 

The Biology of Beauty

Nancy Etcoff, Survival of the Prettiest: The Science of Beauty (New York: Anchor Books 2000) 

Beauty is in the eye of the beholder.  

This much is true by definition. After all, the Oxford English Dictionary defines beauty as: 

‘A combination of qualities, such as shape, colour, or form, that pleases the aesthetic senses, especially the sight’. 

Thus, beauty is defined as that which is pleasing to an external observer. It therefore presupposes the existence of an external observer, separate from the person, or thing, that is credited with beauty, from whose perspective the thing or individual is credited with beauty.[1]

Moreover, perceptions of beauty do indeed differ.  

To some extent, preferences differ between individuals, and between different races and cultures. More obviously, and to a far greater extent, they also differ between species.  

Thus, a male chimpanzee would presumably consider a female chimpanzee as more beautiful than a woman. The average human male, however, would likely disagree – though it might depend on the woman. 

As William James wrote in 1890: 

To the lion it is the lioness which is made to be loved; to the bear, the she-bear. To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her” (Principles of Psychology (vol 2): p387). 

Beauty is therefore not an intrinsic property of the person or object that is described as beautiful, but rather a quality attributed to that person or object by a third-party in accordance with their own subjective tastes. 

However, if beauty is indeed a subjective assessment, that does not mean it is an entirely arbitrary one. 

On the contrary, if beauty is indeed in the ‘eye of the beholder’ then it must be remembered that the ‘eye of the beholder’—and, more importantly, the brain to which that eye is attached—has been shaped by a process of both natural and sexual selection

In other words, we have evolved to find some things beautiful, and others ugly, because doing so enhanced the reproductive success of our ancestors. 

Thus, just as we have evolved to find the sight of excrement, blood and disease disgusting, because each was a potential source of infection, and the sight of snakes, lions and spiders fear-inducing, because each likewise represented a potential threat to our survival in the ancestral environment in which we evolved, so we have evolved to find the sight of certain things pleasing on the eye. 

Of course, it is not only people that can be beautiful. Landscapes, skylines, works of art, flowers and birds can all be described as ‘beautiful’. 

Just as we have evolved to find individuals of the opposite sex attractive for reasons of reproduction, so these other aspects of aesthetic preference may also have been shaped by natural selection. 

Thus, some research has suggested that our perception of certain landscapes as beautiful may reflect psychological adaptations that evolved in the context of habitat selection (Orians & Heerwagen 1992).  

However, Nancy Etcoff does not discuss such research. Instead, in ‘Survival of the Prettiest’, her focus is almost exclusively on what we might term ‘sexual beauty’. 

Yet, if beauty is indeed in the ‘eye of the beholder’, then sexiness is surely located in a different part of the male anatomy, but is equally subjective in nature. 

Indeed, as I shall discuss below, even in the context of mate preferences, ‘sexiness’ and ‘beauty’ are hardly synonyms. As an illustration, Etcoff herself cites that infamous but occasionally insightful pseudo-scientist and all-round charlatan, Sigmund Freud, whom she quotes as observing:  

The genitals themselves, the sight of which is always exciting, are nevertheless hardly ever judged to be beautiful; the quality of beauty seems, instead, to attach to certain secondary sexual characters” (p19: quoted from Civilization and its Discontents). 

Empirical Research 

A common complaint about the many books that have been written on the evolutionary psychology of sexual attraction (and I say this as someone who has read, at one time or another, a good number of them) is that they are full of untested, or even untestable, speculation – i.e. what that other infamous scientific charlatan, Stephen Jay Gould, famously referred to as ‘just-so stories’. 

This is not a criticism that could ever be levelled at Nancy Etcoff’s ‘Survival of the Prettiest’. On the contrary, as befits Etcoff’s background as a working scientist (not a mere journalist or popularizer), it is, from start to finish, full of data from published studies, demonstrating, among other things, the correlates of physical attractiveness, as well as the real-world payoffs associated with physical attractiveness (what is sometimes popularly referred to as ‘lookism’). 

Indeed, in contrast to other scientific works dealing with a similar subject-matter, one of my main criticisms of this otherwise excellent work would be that, while rich in data, it is actually somewhat deficient in theory. 

Youthfulness, Fertility, Reproductive Value and Attractiveness 

A good example of this deficiency in theory is provided by Etcoff’s discussion of the relationship between age and attractiveness. One of the main and recurrent themes of ‘Survival of the Prettiest’ is that, among women, sexual attractiveness is consistently associated with indicators of youth. Thus, she writes: 

Physical beauty is like athletic skill: it peaks young. Extreme beauty is rare and almost always found, if at all, in people before they reach the age of thirty-five” (p63). 

Yet Etcoff addresses only briefly the question of why it is that youthful women or girls are perceived as more attractive – or, to put the matter more accurately, why it is that males are sexually and romantically attracted to females of youthful appearance. 

Etcoff’s answer is: fertility

Female fertility rapidly declines with age, before ceasing altogether with menopause. 

There is, therefore, in Darwinian terms, no benefit in a male being sexually attracted to an older, post-menopausal female, since any mating effort expended would be wasted, as any resulting sexual union could not produce offspring. 

As for the menopause itself, this, Etcoff speculates, citing scientific polymath, popularizer and part-time sociobiologist Jared Diamond, evolved because human offspring enjoy a long period of helpless dependence on their mother, without whom they cannot survive. 

Therefore, after a certain age, it pays women to focus on caring for existing offspring, or even grandchildren, rather than producing new offspring whom they will likely not be around to care for (p73).[2]

This theory has sometimes been termed the grandmother hypothesis.

However, the decline in female fertility with age is perhaps not sufficient to explain the male preference for youth. 

After all, women’s fertility is said to peak in their early- to mid-twenties.[3]

However, men’s (and boys’) sexual interest seems to peak in respect of females somewhat younger than this, namely in their late teens (Kenrick & Keefe 1992). 

To explain this, Douglas Kenrick and Richard Keefe propose, following a suggestion of Donald Symons, that this is because girls at this age, while less fertile, have higher reproductive value, a concept drawn from ecology, population genetics and demography, which refers to an individual’s expected future reproductive output given their current age (Kenrick & Keefe 1992). 

Reproductive value in human females (and in males too) peaks just after puberty, when a girl first becomes capable of bearing offspring. 

Before then, there is always the risk she will die before reaching sexual maturity; after, her reproductive value declines with each passing year as she approaches menopause. 

Thus, Kenrick and Keefe, like Symons before them, argue that, since most human reproduction occurs within long-term pair-bonds, it is to the evolutionary advantage of males to form long-term pair-bonds with females of maximal reproductive value (i.e. in their mid to late teens), so that, by so doing, they can monopolize the entirety of that woman’s reproductive output over the coming years. 

Yet the closest Etcoff gets to discussing this is a single sentence where she writes: 

“Men often prefer the physical signs of a woman below peak fertility (under age twenty). It’s like signing a contract a year before you want to start the job” (p72). 

Yet indicators of youth as a correlate of female attractiveness are a major theme of her book. 

Thus, Etcoff reports that, in a survey of traditional cultures: 

“The highest frequency of brides was in the twelve to fifteen years of age category… Girls at this age are preternaturally beautiful” (p57). 

It is perhaps true that “girls at this age are preternaturally beautiful” – and Etcoff, being female, can perhaps even get away with saying this without being accused of being a pervert or ‘pedophile’ for even suggesting such a thing. 

Nevertheless, this age of “twelve to fifteen” seems rather younger than most men’s, and even most teenage boys’, ideal sexual partners, at least in western societies. 

Thus, for example, Kenrick and Keefe inferred from their data that around eighteen was the preferred age of sexual partner for most males, even those somewhat younger than this themselves.[4]

Of course, in primitive, non-western cultures, women may lose their looks more quickly, due to inferior health and nutrition, the relative unavailability of beauty treatments and because they usually undergo repeated childbirth from puberty onward, which takes a toll on their health and bodies. 

On the other hand, obesity, which decreases sexual attractiveness and increases with age, is more prevalent in the West. 

Moreover, girls in the west now reach puberty somewhat earlier than in previous centuries, and perhaps earlier than in the developing world, probably due to improved nutrition and health. This suggests that they develop the secondary sexual characteristics (e.g. large hips and breasts) that males perceive as attractive, because such traits are indicators of fertility, rather earlier than in premodern or primitive cultures. 

Perhaps Etcoff is right that girls “in the twelve to fifteen years of age category… are preternaturally beautiful” – though this is surely an overgeneralization and does not apply to every girl of this age. 

However, if ‘beauty’ peaks very early, I suspect ‘sexiness’ peaks rather later, perhaps late-teens into early or even mid-twenties. 

Thus, the latter is dependent on secondary sexual characteristics that develop only in late puberty, namely larger breasts, buttocks and hips. 

Thus, Etcoff reports, rather disturbingly, that: 

“When [the] facial proportions [of magazine cover girls] are fed into a computer, it guesstimates their age to be between six and seven years of age” (p151; citing Jones 1995). 

But, of course, as Etcoff is at pains to emphasize in the very next sentence, the women pictured do not actually look like they are of this age, in either their faces or, still less, their bodies. 

Instead, she cites Douglas Jones, the author of the study upon which this claim is based, as arguing that the computer’s estimate of their age can be explained by their display of “supernormal stimuli”, which she defines as “attractive features… exaggerated beyond proportions normally found in nature (at least in adults)” (p151). 

Yet much the same could be said of the unrealistically large, surgically-enhanced breasts favored among, for example, glamour models. These abnormally large breasts are likewise an example of “supernormal stimuli” that may never be found naturally, as suggested by Doyle & Pazhoohi (2012). 

But large breasts are indicators of sexual maturity that are rarely present in girls before their late-teens. 

In other words, if the beauty of girls’ faces peaks at a very young age, the sexiness of their bodies peaks rather later. 

Perhaps this distinction between what we can term ‘beauty’ and ‘sexiness’ can be made sense of in terms of a distinction between what David Buss calls short-term and long-term mating strategies. 

Thus, if fertility peaks in the mid-twenties, then, in respect of short-term mating (i.e. one-night stands, casual sex, hook-ups and other one-off sexual encounters), men should presumably prefer somewhat older partners than they prefer as long-term mates – i.e. partners of maximal fertility rather than maximal reproductive value – since, in the case of short-term mating, there is no question of monopolizing the woman or girl’s long-term future reproductive output. 

In contrast, cues of beauty, as evinced by relatively younger females, might trigger a greater willingness for males to invest in a long-term relationship. 

This ironically suggests that, contrary to contemporary popular perception, males’ sexual or romantic interest in relatively younger women and girls (i.e. those still in their teens) would tend to reflect more ‘honourable intentions’ (i.e. a focus on marriage or a long-term relationship rather than mere casual sex) than does their interest in older women. 

However, as far as I am aware, no study has ever demonstrated differences in men’s preferences regarding the preferred age-range of their casual sex partners as compared to their preferences in respect of longer-term partners. This is perhaps because, since commitment-free casual sex is almost invariably a win-win situation for men, and most men’s opportunities in this arena are likely to be few and far between, there has been little selection acting on men to discriminate at all in respect of short-term partners. 

Are There Sex Differences in Sexiness? 

Another major theme of ‘Survival of the Prettiest’ is that the payoffs for good-looks are greater for women than for men. 

Beauty is most obviously advantageous in a mating context. But women convert this advantage into an economic one through marriage. Thus, Etcoff reports: 

“The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (p65; see also Udry & Eckland 1984; Hamermesh & Biddle 1994). 

However, there is no such advantage accruing to better-looking male students. 

On the other hand, according to Catherine Hakim, in her book Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom (which I have reviewed here, here and here), the wage premium associated with being better looking in the workplace is actually, perhaps surprisingly, greater for men than for women. 

For Hakim herself: 

“This is clear evidence of sex discrimination… as all studies show women score higher than men on attractiveness” (Money, Honey: p246). 

However, as I explain in my review of her book, the better view is that, since beauty opens up so many other avenues of social advancement for women, notably through marriage, relatively more beautiful women correspondingly reduce their work-effort in the workplace, since they have no need to pursue social advancement through their careers when they can far more easily achieve it through marriage. 

After all, why bother to earn money when you can simply marry it instead? 

According to Etcoff, there is only one sphere where being more beautiful is actually disadvantageous for women, namely in respect of same-sex friendships: 

“Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (p50; citing Krebs & Adinolfi 1975). 

She does not speculate as to why this is so. An obvious explanation is envy and dislike of the sexual competition that beautiful women represent. 

However, an alternative explanation is perhaps that beautiful women do indeed come to have less likeable personalities. Perhaps, having grown used to receiving preferential treatment from and being fawned over by men, beautiful women become entitled and spoilt. 

Men might overlook these flaws on account of their looks, but other women, immune to their charms, may be a different story altogether.[5]

All this, of course, raises the question of why the payoffs for good looks are so much greater for women than for men. 

Etcoff does not address this, but, from a Darwinian perspective, it is actually something of a paradox, which I have discussed previously. 

After all, among other species, it is males for whom beauty affords a greater payoff in terms of the ultimate currency of natural selection – i.e. reproductive success. 

It is therefore male birds who usually evolve more beautiful plumage, while females of the same species are often quite drab, the classic example being the peacock and peahen. 

The ultimate evolutionary explanation for this pattern is called Bateman’s principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972). 

The basis of this theory is that females must make a greater minimal investment in offspring in order to successfully reproduce. For example, among humans, a female must commit herself to nine months’ pregnancy, plus breastfeeding, whereas a male must contribute, at minimum, only a single ejaculate. Females therefore represent the limiting factor in mammalian reproduction, for access to whom males compete. 

One way in which they compete is by display (e.g. lekking). Hence the evolution of the elaborate tail of the peacock. 

Yet, among humans, it is females who seem more concerned with using their beauty to attract mates. 

Of course, women use makeup and clothing to attract men rather than growing or evolving long tails. 

However, behavior is no less subject to selection than morphology, so the paradox remains.[6]

Indeed, the most promising example of a morphological trait in humans that may have evolved primarily for attracting members of the opposite sex (i.e. a ‘peacock’s tail’) is, again, a female trait – namely, breasts. 

This is, of course, the argument that was, to my knowledge, first developed by ethologist Desmond Morris in his book The Naked Ape, which I have reviewed here, and which I discuss in greater depth here. 

As Etcoff herself writes: 

“Female breasts are like no others in the mammalian world. Humans are the only mammals who develop rounded breasts at puberty and keep them whether or not they are producing milk… In humans, breast size is not related to the amount or quality of milk that the breast produces” (p187).[7]

Instead, human breasts are, save during pregnancy and lactation, composed predominantly of, not milk, but fat. 

This is in stark contrast to the situation among other mammals, who develop breasts only during pregnancy. 

“Breasts are not sex symbols to other mammals, anything but, since they indicate a pregnant or lactating and infertile female. To chimps, gorillas and orangutans, breasts are sexual turn-offs” (p187). 

Why then does sexual selection seem, at least on this evidence, to have acted more strongly on women than on men? 

Richard Dawkins, in The Selfish Gene (which I have reviewed here), first alluded to this anomaly, lamenting: 

“What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (The Selfish Gene: p165). 

Yet this is surely not the case with regard to casual sex (i.e. hook-ups and one-night stands). Here, it is very much men who ardently pursue and women who are sought after. 

For example, in one study on a university campus, 72% of male students agreed to go to bed with a female stranger who propositioned them, yet not a single one of the 96 females approached agreed to the same request from a male stranger (Clark and Hatfield 1989). 

(What percentage of the students sued the university for sexual harassment was not revealed.) 

Indeed, patterns of everything from prostitution to pornography consumption confirm this – see The Evolution of Human Sexuality (which I have reviewed here). 

Yet humans are unusual among mammals in also forming long-term pair-bonds where male parental investment is the norm. Here, men have every incentive to be as selective as females in their choice of partner. 

In particular, in Western societies practising what Richard Alexander called socially-imposed monogamy (i.e. where there exist large differentials in male resource holdings, but polygynous marriage is unlawful), competition among women for exclusive rights to resource-abundant alpha males may be intense (Gaulin and Boster 1990). 

In short, the advantage to a woman in becoming the sole wife of a multi-millionaire is substantial. 

This, then, may explain the unusual intensity of sexual selection among human females. 

Why, though, is there not evidence of similar sexual selection operating among males? 

Perhaps the answer is that, since, in most cultures, arranged marriages are the norm, female choice actually played little role in human evolution. 

Instead, male mating success may have depended less upon what Darwin called intersexual selection and more upon intrasexual selection – i.e. less upon female choice and more upon male-male fighting ability (see Puts 2010). 

Male Attractiveness and Fighting Ability 

Paradoxically, this is reflected even in the very traits that women find attractive in men. 

Thus, although Etcoff’s book is titled ‘Survival of the Prettiest’, and ‘pretty’ is an adjective usually applied to women – and, when applied to men, is, perhaps tellingly, rarely a compliment – Etcoff does discuss male attractiveness too.  

However, Etcoff acknowledges that male attractiveness is a more complex matter than female attractiveness: 

“We have a clearer idea of what is going on with female beauty. A handsome male turns out to be a bit harder to describe, although people reach consensus almost as easily when they see him” (p155).[8]

Yet what is notable about the factors that Etcoff describes as attractive among men is that they all seem to be related to fighting ability. 

This is most obviously true of height (p172-176) and muscularity (p176-80). 

Indeed, in a section titled “No Pecs, No Sex”, though she focuses on the role of pectoral muscles in determining attractiveness, Etcoff nevertheless acknowledges: 

“Pectoral muscles are the human male’s antlers. Their weapons of war” (p177). 

Thus, height and muscularity have obvious functional utility. 

This is in stark contrast to traits such as the peacock’s tail, which are often a positive handicap to their owner. Indeed, one influential theory of sexual selection, Amotz Zahavi’s ‘handicap principle’, contends that it is precisely because they represent a handicap that such traits evolved as sexually-selected fitness indicators: only a genetically superior male is capable of bearing the handicap of such an unwieldy ornament, and hence possession of the handicap is, paradoxically, an honest signal of health. 

Yet, if men’s bodies have evolved more for fighting than attracting mates, the same is perhaps less obviously true of their faces. 

Thus, anthropologist David Puts proposes: 

“Even [male] facial structure may be designed for fighting: heavy brow ridges protect eyes from blows, and robust mandibles lessen the risk of catastrophic jaw fractures” (Puts 2010: p168). 

Indeed, looking at the facial features of a highly dominant, masculine male face, like that of Mike Tyson, for example, one gets the distinct impression that, if you were foolish enough to try punching it, it would likely do more damage to your hand than to his face. 

Thus, if some faces are, as cliché contends, highly ‘punchable’, then others are presumably at the opposite end of this spectrum. 

This also explains some male secondary sexual characteristics that otherwise seem anomalous, for example, beards. These have actually been found in some studies “to decrease attractiveness to women, yet have strong positive effects on men’s appearance of dominance” (Puts 2010: p166). 

David Puts concludes: 

“Men’s traits look designed to make men appear threatening, or enable them to inflict real harm. Men’s beards and deep voices seem designed specifically to increase apparent size and dominance” (Puts 2010: p168). 

Interestingly, Etcoff herself anticipates this theory, writing: 

“Beautiful ornaments [in males] develop not just to charm the opposite sex with bright colors and lovely songs, but to intimidate rivals and win the intrasex competition—think of huge antlers. When evolutionists talk about the beauty of human males, they often refer more to their weapons of war than their charms, to their antlers rather than their bright colors. In other words, male beauty is thought to have evolved at least partly in response to male appraisal” (p74). 

Of course, these same traits are also often attractive to females. 

After all, if a tall, muscular man has higher reproductive success because he is better at fighting, then it pays women to preferentially mate with tall, muscular men so that their male offspring will inherit these traits and hence themselves have high reproductive success, helping to spread the woman’s own genes by piggybacking on the superior male’s genes.  

This is a version of sexy son theory. 

In addition, males with fighting prowess are better able to protect and provision their mates. 

However, this attractiveness to females is obviously secondary to these traits’ primary role in male-male fighting. 

Moreover, Etcoff admits, highly masculine faces are not always attractive. 

Thus, unlike the “supernormal” or “hyperfeminine” female faces that men find most attractive in women, women rated “hypermasculine” faces as less attractive (p158). This, she speculates, is because they are perceived as overaggressive and unlikely to invest in offspring. 

Whether such men are indeed less willing to invest in offspring Etcoff does not discuss, and there appears to be little direct evidence on the topic. But the association of testosterone with both physiological and psychological masculinization suggests that the hypothesis is at least plausible. 

Etcoff concludes: 

“For men, the trick is to look masculine but not exaggeratedly masculine, which results in a ‘Neanderthal’ look suggesting coldness or cruelty” (p159). 

Examples of males with overly masculine faces are perhaps certain boxers, who tend to have highly masculine facial morphology (e.g. heavy brow ridges, deep-set eyes, wide muscular jaws), but are rarely described as handsome. 

For example, I doubt anyone would ever call Mike Tyson handsome. But, then, no one would ever call him exactly ugly either – at least not to his face. 

An extreme example might be the Russian boxer Nikolai Valuev, whose Neanderthal-like physiognomy was much remarked upon. 

Another example that sprang to mind was the footballer Wayne Rooney (also, perhaps not coincidentally, said to have been a talented boxer) who, when he first became famous, was immediately tagged by the newspapers, media and comedians as ugly despite – or indeed because of – his highly masculine, indeed thuggish, physiognomy. 

Likewise, Etcoff reports that large eyes are perceived as attractive in men, yet these are a neotenous trait, associated both with infants and with female beauty (p158). 

This odd finding Etcoff attributes to the fact that large eyes, as an infantile trait, evoke women’s nurturance, a trait that evolved in the context of parental investment rather than mate choice. 

Yet this is contrary to the general principle in evolutionary psychology of the modularity of mind and the domain-specificity of psychological adaptations, whereby it is assumed that psychological adaptations for mate choice and for parental investment represent domain-specific modules with little or no overlap. 

Clearly, for psychological adaptations in one of these domains to be applied in the other would result in highly maladaptive behaviours, such as sexual attraction to infants and to your own close biological relatives.[9]

In addition to being more complex and less easy to make sense of than female beauty, male physical attractiveness is also less important in determining female mate choice than female beauty is in determining male mate choice. 

In particular, she acknowledges that male status often trumps handsomeness. Thus, she quotes a delightfully cynical, not especially poetic, line from the ancient Roman poet Ovid, who wrote: 

“Girls praise a poem, but go for expensive presents. Any illiterate oaf can catch their eye, provided he’s rich” (quoted: p75). 

A perhaps more memorable formulation of the same idea is quoted on the same page from a less illustrious source, namely boxing promoter, numbers racketeer and convicted killer Don King, who, on a subject I have already discussed, namely the handsomeness (or not) of Mike Tyson, remarked: 

“Any man with forty-two million looks exactly like Clark Gable” (quoted: p75). 


[1] I perhaps belabor this rather obvious point only because one evolutionary psychologist, Satoshi Kanazawa, argues that, since many aspects of beauty standards are cross-culturally universal, beauty standards are not ‘in the eye of the beholder’. I agree with Kanazawa on the substantive issue that beauty standards are indeed mostly cross-culturally universal among humans (albeit not entirely so). However, I nevertheless argue, perhaps somewhat pedantically, that beauty remains strictly in the ‘eye of the beholder’; it is simply that the ‘eye of the beholder’ (and the brain to which it is attached) has been shaped by a process of natural selection so as to make different humans share the same beauty standards. 

[2] While Jared Diamond has indeed made many original contributions to many fields, this idea does not in fact originate with him, even though Etcoff oddly cites him as a source. Indeed, as far as I am aware, it is not even especially associated with Diamond, and may actually have originated with another, lesser-known, but arguably even more brilliant evolutionary biologist, namely George C. Williams (Williams 1957). 

[3] Actually, pregnancy rates peak surprisingly young, perhaps even disturbingly young, with girls in their mid- to late-teens being most likely to become pregnant from any single act of sexual intercourse, all else being equal. However, the high pregnancy rates of teenage girls are said to be partially offset by their greater risk of birth complications. Therefore, female fertility is said to peak among women in their early- to mid-twenties.

[4] This the authors of the study inferred from, among other evidence, an analysis of lonely hearts advertisements, wherein, although the age of the female sexual/romantic partner sought was related to the advertised age of the man placing the ad (which Kenrick and Keefe inferred was a reflection of the fact that their own age delimited the age-range of the sexual partners whom they would be able to attract, and whom it would be socially acceptable for them to seek out), nevertheless, the older the man, the greater the age-difference he sought in a partner. In addition, they reported evidence from surveys suggesting that, in contrast to older men, younger teenage boys, in an ideal world, actually preferred somewhat older sexual partners, suggesting that the ideal age of sexual partner for males of any age was around eighteen years of age. 

[5] Etcoff also does not discuss whether the same is true of exceptionally handsome men – i.e. do exceptionally handsome men, like beautiful women, also have problems maintaining same-sex friendships? I suspect that this is not so, since male status and self-esteem are not usually based on handsomeness as such – though they may be based on things related to handsomeness, such as height, athleticism, earnings, and perceived ‘success with women’. Interestingly, however, French novelist Michel Houellebecq argues otherwise in his novel Whatever, in which, after describing the jealousy of one of the main characters, the short, ugly Raphael Tisserand, towards a particularly handsome male colleague, he writes: 

“Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63). 

[6] Thus, in other non-human species, behaviour is often subject to sexual selection, in, for example, mating displays, or the remarkable, elaborate and often beautiful, but non-functional, nests built by male bowerbirds, which Geoffrey Miller sees as analogous to human art. 

[7] An alternative theory for the evolution of human breasts is that they evolved, not as a sexually selected ornament, but rather as a storehouse of nutrients, analogous to the camel’s humps, upon which women can draw during pregnancy. On this view, the sexual dimorphism of their presentation (i.e. the fact that, although men do have breasts, they are usually much less developed than those of women) reflects, not sexual selection, but rather the caloric demands of pregnancy. 
However, these two alternative hypotheses are not mutually exclusive. On the contrary, they may be mutually reinforcing. Thus, Etcoff herself mentions the possibility that breasts are attractive precisely because: 

“Breasts honestly advertise the presence of fat reserves needed to sustain a pregnancy” (p178). 

On this view, men see fatty breasts as attractive in a sex partner precisely because only women with sufficient reserves of fat to grow large breasts are likely to be capable of successfully gestating an infant for nine months. 

[8] Personally, as a heterosexual male, I have always had difficulty recognizing ‘handsomeness’ in men, and I found this part of Etcoff’s book especially interesting for this reason. In my defence, this is, I suspect, partly because many rich and famous male celebrities are celebrated as ‘sex symbols’ and described as ‘handsome’, even though their status as ‘sex symbols’ owes more to the fact that they are rich and famous than to their actual looks. Thus, male celebrities sometimes become ‘sex symbols’ despite their looks, rather than because of them. Many famous rock stars, for example, are not, I feel, especially handsome. In contrast, men did not suddenly start idealizing physically unattractive female celebrities as ‘sex symbols’, or as beautiful, simply because they became famous, howsoever rich and famous they may be. 
Add to this the fact that much of what passes for good looks in both sexes is, ironically, normalness – i.e. a lack of abnormalities and averageness – and identifying which men women consider ‘handsome’ had, before reading Etcoff’s book, always escaped me.
However, Etcoff, for her part, might well call me deluded. Men, she reports, only claim they cannot tell which men are handsome and which are not, perhaps to avoid being accused of homosexuality: 

“Although men think they cannot judge another man’s beauty, they agree among themselves and with women about which men are the handsomest” (p138). 

Nevertheless, there is indeed some evidence that judging male handsomeness is not as clear-cut as Etcoff seems to suggest. Thus, it has been found that, not only do men claim to have difficulty telling handsome men from ugly men, but women are also more likely than men to disagree among themselves about the physical attractiveness of members of the opposite sex (Wood & Brumbaugh 2009; Wake Forest University 2009). 
Indeed, not only do women not always agree with one another regarding the attractiveness of men, sometimes they can’t even agree with themselves. Thus, Etcoff reports: 

“A woman makes her evaluations of men more slowly, and if another woman offers a different opinion, she may change her mind” (p76). 

This indecisiveness, for Etcoff, actually makes good evolutionary sense:

“If women take a second look, compare notes with other women, or change their minds after more thought, it is not out of indecisiveness but out of wisdom. Mate choice is not just about fertility—most men are fertile most or all of their lives—but about finding a helpmate to bring up the baby” (p77). 

Another possible reason why women may consult other women as to whether a given man is attractive or not is sexy son theory. 
On this view, it pays for women to mate with men who are perceived as attractive by other women because then any offspring whom they bear by these men will likely inherit the very traits that made the father attractive to women, and hence themselves be attractive to women and hence be successful in spreading the woman’s own genes to subsequent generations. 
In other words, being attractive to other women is itself an attractive trait in a male. However, sexy son theory is not discussed by Etcoff.

[9] Another study discussed by Etcoff also reported anomalous results, finding that women actually preferred somewhat feminized male faces over both masculinized and average male faces (Perrett et al 1998). However, Etcoff cautions that: 

“The Perrett study is the only empirical evidence to date that some degree of feminization may be attractive in a man’s face” (p159). 

Other studies concur that male faces that are somewhat, but not excessively, masculinized as compared to the average male face are preferred by women. 
However, one study published just after the first edition of ‘Survival of the Prettiest’ was written, holds the possibility of reconciling these conflicting findings. This study reported cyclical changes in female preferences, with women preferring more masculinized faces only when they are in the most fertile phase of their cycle, and at other times preferring more feminine features (Penton-Voak & Perrett 2000). 
This, together with other evidence, has been controversially interpreted as suggesting that human females practice a so-called dual mating strategy, preferring males with more feminine faces, supposedly a marker for a greater willingness to invest in offspring, as social partners, while surreptitiously attempting to cuckold these ‘beta providers’ with DNA from high-T alphas, by preferentially mating with the latter when they are most likely to be ovulating (see also Penton-Voak et al 1999; Bellis & Baker 1990). 
However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology. 
While the intensity of women’s sex drive does indeed seem to fluctuate cyclically, the evidence for more fine-grained changes in female mate preferences should be treated with caution. 


Bateman (1948), Intra-sexual selection in DrosophilaHeredity, 2(3): 349–368. 
Bellis & Baker (1990). Do females promote sperm competition?: Data for humansAnimal Behavior, 40: 997-999. 
Clark & Hatfield (1989) Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality, 2(1), 39–55 
Doyle & Pazhoohi (2012) Natural and Augmented Breasts: Is What is Not Natural Most Attractive? Human Ethology Bulletin 27(4):4-14. 
Gaulin & Boser (1990) Dowry as Female Competition, American Anthropologist 92(4):994-1005. 
Gildersleeve et al (2014) Do women’s mate preferences change across the ovulatory cycle? A meta-analytic reviewPsychological Bulletin 140(5):1205-59. 
Hamermesh & Biddle (1994) Beauty and the Labor Market, American Economic Review 84(5):1174-1194.
Jones 1995 Sexual selection, physical attractiveness, and facial neoteny: Cross-cultural evidence and implications, Current Anthropology, 36(5):723–748. 
Kenrick & Keefe (1992) Age preferences in mates reflect sex differences in mating strategies. Behavioral and Brain Sciences 15(1):75-133. 
Orians & Heerwagen (1992) Evolved responses to landscapes. In Barkow, Cosmides & Tooby (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 555–579). Oxford University Press. 
Penton-Voak et al (1999) Menstrual cycle alters face preferences. Nature 399:741-2. 
Penton-Voak & Perrett (2000) Female preference for male faces changes cyclically: Further evidence. Evolution and Human Behavior 21(1):39–48. 
Perrett et al (1998) Effects of sexual dimorphism on facial attractiveness. Nature 394(6696):884-7. 
Puts (2013) Beauty and the Beast: Mechanisms of Sexual Selection in Humans. Evolution and Human Behavior 31(3):157-175. 
Wake Forest University (2009) Rating Attractiveness: Consensus Among Men, Not Women, Study Finds. ScienceDaily, 27 June 2009. 
Trivers (1972) Parental investment and sexual selection. In Sexual Selection & the Descent of Man, Aldine de Gruyter, New York, 136-179. 
Williams (1957) Pleiotropy, natural selection, and the evolution of senescence. Evolution. 11(4): 398–411. 
Wood & Brumbaugh (2009) Using Revealed Mate Preferences to Evaluate Market Force and Differential Preference Explanations for Mate Selection, Journal of Personality and Social Psychology 96(6):1226-44.
Udry & Eckland (1984) Benefits of Being Attractive: Differential Payoffs for Men and Women, Psychological Reports 54(1):47–56.
Wood et al (2014) Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review, 6(3), 229–249.  

Selwyn Raab’s ‘Five Families’: A History of the New York Mafia, Heavily Slanted Towards Recent Times

Selwyn Raab, Five Families: The Rise, Decline and Resurgence of America’s Most Powerful Mafia Empires (London: Robson Books 2006) 

With Italian-American organized crime now surely in terminal decline, the time is ripe for a definitive history of the New York Mafia. Unfortunately, Selwyn Raab’s ‘Five Families: The Rise, Decline, and Resurgence of America’s Most Powerful Mafia Empires’ is not it.[1]

In particular, despite its length, it gives only cursory coverage to the early history of the New York Mafia. 

Instead, it is heavily weighted towards the recent history of the five families. 

This is perhaps unsurprising. After all, the author, Selwyn Raab, is, by background, a journalist, not a historian. 

Indeed, it is surely no coincidence that Raab’s history only starts to become in-depth at about the time he began covering the activities of the New York mob in real time as a reporter for The New York Times in 1974.

To give an idea of this bias I will cite page numbers. 

The book comprises over 700 pages, plus title pages, ‘Prologue’, ‘Introduction’, ‘Afterword’, ‘Epilogue’, two appendices, ‘Bibliography’ and ‘Index’, themselves comprising a further 100 or so pages. 

The first two chapters are introductory, and mostly cite examples of Mafia activities from the mid- to late twentieth century. 

The chronological narrative begins in Chapter 3, titled ‘Roots’, which purports to cover both the origin of the New York Mafia and its prehistory. 

In doing so, Raab repeats uncritically the Sicilian Mafia’s own romantic foundation myth, claiming that the Mafia began during Sicily’s long history of foreign occupation as a form of “self-preservation against perceived corrupt oppressors” (p14). 

Indeed, even his supposedly “less romantic and more likely” etymology for the word ‘Mafia’ is that it derives from “a combined Sicilian-Arabic slang expression that means acting as a protector against the arrogance of the powerful” (p14). 

Actually, according to historian John Dickie, rather than protecting the common people against corrupt oppression by outsiders, the Sicilian Mafia was itself corrupt, exploitative and oppressive from the very beginning (see Dickie’s books, Cosa Nostra and Blood Brotherhoods). 

Raab is vague on the precise origins of the Sicilian Mafia, but does insist that mafia cosche evolved “over hundreds of years” (p14).

This is, again, likely a Mafia myth. The Mafia, like the Freemasons (from whom its initiation rituals are, at least according to Dickie, likely borrowed), exaggerates its age to enhance its venerability and mystique.[2]

Of course, Raab’s text is a history of the New York Mafia. One can therefore overlook his inadequate treatment of its Sicilian prehistory. 

Unfortunately, his treatment of early Mafia activity in New York itself is barely better. 

Early turn-of-the-century New York Mafiosi like Giuseppe Morello and Lupo the Wolf are not even mentioned. Nor are their successors, the Terranova brothers. Neither is there any mention of the barrel murders, counterfeiting trial or Mafia-Camorra War

Even their nemesis, Italian-born NYPD officer, Joe Petrosino, murdered in Sicily while investigating the backgrounds of transplanted Mafiosi with a view to deportation, merits only a cursory two and a bit pages – something almost as derisory as the “bare, benchless concrete slab serv[ing] as a road divider and pedestrian-safety island” that ostensibly commemorates him in Lower Manhattan today (p19-21). 

There are just nineteen pages in Raab’s chapter on the New York Mafia’s ‘Roots’. The next chapter is titled ‘The Castellammarese War’, and focuses upon the gang war of that name, which began in 1930, although the chapter begins with a discussion of the effects of the national Prohibition law that came into force in 1920. 

Therefore, since the Morello Family seems to have had its roots in the 1890s, that’s over twenty years of New York Mafia history (not to mention, according to Raab, several centuries of Sicilian Mafia history) passed over in less than twenty pages. 

Readers interested in the origins of the five families, and indeed how there came to be five families in the first place, should look elsewhere. I would recommend instead Mike Dash’s The First Family, which uncovers the formerly forgotten history of the first New York Mafia family, the Morello family, the ancestor of today’s Genovese Family, arguably still the most powerful mafia family in America to this day. 

Although I have yet to read it, James Jacobs’ The Mob and the City also comes highly recommended in many quarters. 

Whereas Raab’s account of the first few decades of American Mafia history is particularly inadequate, the coverage of the next few decades of organized crime history, is barely better. 

Here, we get the familiar potted history of the New York Mafia, with each of the usual suspects – Luciano, Anastasia, Costello, Genovese – successively assuming center stage. 

Moreover, despite his ostensible focus on Italian-American organized crime, Raab does not arbitrarily exclude non-Italians from his account, unlike the Mafia itself (which, though it has survived countless RICO prosecutions, would surely never survive a class-action lawsuit for racial discrimination).

On the contrary, the usual non-Italian players each make their almost obligatory cameos – Bugsy Siegel, gunned down soon after bankrolling his Las Vegas casino-hotel, Abe ‘Kid Twist’ Reles ‘accidentally’ falling from a sixth-floor window, and, of course, the shadowy and much-mythologized Meyer Lansky always lurking in the background like a familiar anti-Semitic conspiracy theory. 

It is not that Raab actually misses anything out, but rather that he doesn’t really add much. 

Instead, we get another regurgitation of the familiar Mafia history with which anyone who has had the misfortune of reading any of the countless earlier popular histories of the American Mafia will be all too familiar. 

Then, after just 100 pages, we are already at the Apalachin meeting in 1957. 

That’s over fifty years of twentieth century American Mafia history condensed in less than 100 pages. More to the point, it’s over half the entire period of American Mafia history covered by Raab’s book (which was published in 2005) covered in less than a seventh of the total text. 

After a brief diversion, namely two chapters discussing supposed Mafia involvement in the Kennedy assassination, we are into the 1970s, and now Raab’s coverage becomes in-depth and authoritative. 

But, although this period may have marked the height of the mafia’s mystique, with blockbuster movies like the overrated ‘Godfather’ trilogy glamourizing Italian-American organized crime like never before, it arguably also marked the beginning of the Mafia’s decline.[3]

Indeed, the mafia’s notoriety during this period may even have been a factor in its decline. After all, publicity and media infamy are, for a criminal organization, at best a mixed blessing.  

True, a media-cultivated aura of power and untouchability may discourage victims from running to the police, and also deter rival criminals from attempting to challenge mafia hegemony. 

However, criminal conspiracies operate best when they are outside the public eye, let alone the scrutinizing glare of the journalists, movie-makers, government and law enforcement. 

There is, after all, a reason why the Mafia is a secret society whose very existence is, at least in theory, a closely-guarded secret.

It is no accident, then, that those crime bosses who openly courted the limelight and revelled in their own notoriety did not enjoy long and successful careers. 

Prominent Italian-American examples of criminals who made the mistake of openly courting press attention are John Gotti and Al Capone.[4]

Thus, John Gotti inevitably takes up more than his share of chapters in Raab’s book, just as, during his lifetime, he enjoyed more than his share of headlines in Raab’s own New York Times. 

The so-called ‘Dapper Don’ invariably made for good copy. 

However, courting the media is rarely a sensible way to run a crime empire. 

A famous adage of the marketing industry supposedly has it that all publicity is good publicity.

This may be true, or at least close to being true, in, say, the realm of rock or rap music, where controversy is often a principal selling point.

However, in the world of organized crime, almost the exact opposite could be said to be true. 

Thus, much of the press coverage of Gotti may have been flattering, even fawning, or at least perceived by Gotti as such. Certainly he himself often seemed to revel in his own infamy and also became something of a folk hero to some sections of the public. 

However, the more he became a folk hero by thumbing his nose at the authorities, the more of a threat he posed to those authorities, in part precisely because he had become something of a folk hero.

The result was that, although the press initially dubbed him ‘The Teflon Don’, because, supposedly, no charges would ever stick, Gotti actually enjoyed less than a decade of freedom as Gambino family boss before being convicted and imprisoned. 

By courting the limelight, he also invited the attention of, not just the media, but also of law enforcement and thereby ensured that his fifteen minutes of fame would immediately be followed by a lifetime of incarceration. 

A rather more sensible approach was perhaps that adopted by a lesser-known contemporary and rival of Gotti, Genovese family boss Vincent ‘The Chin’ Gigante, who, far from courting publicity like Gotti, let ‘front boss’ Fat Tony Salerno take the bulk of the law enforcement heat, while he himself, for many years, largely passed under the radar. 

While fictional Mafia boss Tony Soprano spent the bulk of the television series in which he played the leading role attempting to conceal his visits to a psychiatrist from his Mafia colleagues, Gigante made sure his own (supposed) mental health difficulties were as public as possible, feigning mental illness for decades in order to avoid law enforcement attention. 

Nicknamed ‘The Oddfather’ by the press for his bizarre antics, he was regularly pictured walking the streets of Greenwich Village in a bathrobe and was said to regularly check into a local psychiatric hospital whenever law enforcement heat was getting too much.[5]

Wary of phone taps and bugs, Gigante also insisted that other members of the crime family of which he was head never mention him by name, but rather, if they had to refer to him, simply to point towards their chin or curl their fingers into the shape of a letter ‘c’. 

These precautions had law enforcement fooled for years, and it was long believed in law enforcement circles that Gigante was retired and the real boss was indeed front boss Tony Salerno. 

Largely as a result, Gigante enjoyed at least a decade and a half as Genovese boss before he too belatedly joined his erstwhile rival John Gotti behind bars. 

Of course, the secrecy with which mafiosi like Gigante took pains to veil their affairs presents a challenge, not just to law enforcement, but also to the historian. 

After all, criminals are, almost by definition, dishonest.[6]

Even those mafiosi who did break ranks, and the code of omertà, by providing testimony to the authorities, or sometimes publishing memoirs and giving interviews on television (or, most recently, even starting their own YouTube channels), are notoriously unreliable sources of information, being prone to exaggerate their own role and importance in events, while also (rather contradictorily) minimizing their role in any serious prosecutable offences for which they have yet to serve time. 

Perhaps a more trustworthy source of information—or so one would hope—is law enforcement.  

Yet, relying on the latter as a source, Raab’s account inevitably ends up being as much a history of law enforcement efforts to bring the Mafiosi to justice as it is of the Mafia itself. 

Thus, for example, a whole chapter, entitled ‘The Birth of RICO’, is devoted to the development and passage into law of the Racketeer Influenced and Corrupt Organizations Act or RICO Act of 1970

Indeed, amusingly, but not especially plausibly, Raab even suggests that the name of this act, or rather the acronym by which the Act, and prosecutions under it, became known, may have been inspired by the once-famous final line of the seminal 1930s Warner Brothers gangster movie, Little Caesar, Raab reporting that G. Robert Blakey, the lawyer largely responsible for the drafting of the Act: 

“Refuses to explain the reason for the RICO acronym. But he is a crime-film buff and admits that one of his favorite movies is Little Caesar, a 1931 production loosely modeled on Al Capone’s life… Dying in an alley after a gun battle with the police, Little Caesar gasps one of Hollywood’s famous closing lines—also Blakey’s implied message to the Mob: ‘Mother of Mercy—is this the end of Rico?’” (p177). 

Of course, the passage into law of the RICO statute, as it turned out, was indeed a seminal event in American Mafia history, facilitating, as it did, the successful prosecution and incarceration of countless Mafia bosses and other organized crime figures.

Nevertheless, in this chapter, and indeed elsewhere in the book, the five families themselves inevitably fade very much into the background, and Raab concentrates instead on the tactics of and conflicts among law enforcement themselves. 

Yet, in Raab’s defence such material is often no less interesting than the stories of mafiosi themselves. 

Indeed, one thing to emerge from portions of Raab’s narrative is that conflicts and turf wars between different branches, levels and layers of law enforcement—local, state and federal—were often as fiercely, if less bloodily, fought over as were territorial disputes among mafiosi themselves. 

After all, mafiosi rarely take the trouble to commit crimes in only the jurisdiction of a single police precinct. Therefore, the jurisdiction of different branches and levels of law enforcement frequently overlapped.  

Yet, such was the fear of police corruption and mafia infiltration that different branches of law enforcement rarely trusted one another enough to share intelligence, lest a confidential source, informant, undercover agent, phone tap, bug or wire be thereby compromised, let alone to allow a rival branch of law enforcement to take the lion’s share of the credit, and newspaper headlines, for bringing a high-profile mafia scalp to justice. 

In contrast, territorial disputes between crime families actually seem to have been surprisingly muted, and were usually ironed out through ‘sit-downs’ (i.e. effectively an appeal to arbitration by a higher authority) rather than resort to violence. 

Thus, despite its familiarity as a formulaic cliché of mafia movies from The Godfather onwards, there appears never actually to have been another war between rival Mafia families in New York after the Castellammarese War ended in 1931. 

Mafia wars did occasionally occur—e.g. the Banana War, and the First, Second and Third Colombo Wars. However, these were all intra-family affairs, involving control over a single family, rather than conflict between different families.[7]

The Castellammarese War therefore stands as the New York Mafia equivalent of the First World War, with each of the nascent five family factions joined together in two grand coalitions, just as, before and during World War One, the great powers (and a host of lesser powers) joined together in two grand alliances. 

However, whereas the First World War only promised to be the war to end all wars, the Castellammarese War actually has some claim to delivering on this promise, with the independent sovereignty of each of the five families thenceforth mutually respected in a sort of Westphalian Peace, or ‘Pax Mafiosa’, that lasted for the better part of a century. 

In The Godfather (the novel, not the film), Michael Corleone quotes his father as claiming that, had “the [five] Families been running the State Department there would never have been World War II”, since they would have been smart enough to iron out their problems without resort to unnecessary bloodshed and economic expense. 

On the evidence of New York Mafia history as recounted by Raab in ‘The Five Families’, Don Corleone may, perhaps surprisingly, have had a point. 

Perhaps, then, our world leaders and statesmen could indeed learn something from lowlife criminals about the importance of avoiding the unnecessary bloodshed and expense of war. 

Honor Among Thieves – and Men of Honor? 

Another general conclusion that can be drawn from Raab’s history is that, if there is, as cliché contends, but little honor among thieves, there is seemingly scarcely any more honor even among self-styled ‘men of honor’. 

This is even true of the most influential figure in American Mafia history, Charles ‘Lucky’ Luciano, described by Raab in one photo caption as “the visionary godfather and designer of the modern Mafia”, and elsewhere as “the Mafia’s visionary criminal genius”, who is even credited, in some tellings, with creating the Commission and even the five families themselves.[8]

Yet Luciano was a serial traitor. 

First, he betrayed his ostensible ally, Joe ‘The Boss’ Masseria, in the Castellammarese War, setting him up for assassination by his rival Salvatore Maranzano. Then, just a few months later, he betrayed and arranged the murder of Maranzano himself, leaving Luciano free to take the position of, if not capo di tutti capi, then at least the most powerful mafioso in New York, and probably in America, if not the world. 

In this series of betrayals, Luciano set the pattern for the twentieth century mob. 

The key is to make sure that you betray what turns out to be the losing side, if only on account of your betrayal.

The powerful Gambino crime family provides a particularly good illustration of this. Indeed, for much of the twentieth century, staging an internal coup or arranging the assassination of the current incumbent seems to have been almost the accepted means of securing the succession.

Later, John Gotti famously became boss of the family by arranging the murder of his own boss, Paul Castellano, just as Castellano’s predecessor, the eponymous Carlo Gambino had himself allegedly been complicit in the murder of his own predecessor, Albert Anastasia, who was himself the main suspect in the murder of his own predecessor, Vincent Mangano

However, such treachery was by no means limited to the Gambinos. On the contrary, Joe Colombo became boss of the crime family now renamed in his honor by betraying his own boss, Joe Magliocco (and Bonanno boss Joe Bonanno), to the bosses of the three other families whom Magliocco had ordered him to kill. 

Meanwhile, one of Colombo’s successors, Carmine ‘The Snake’ Persico, had also been at war with his own boss, Joe Profaci, in the First Colombo War, but then, in a further betrayal, switched allegiances, setting up his former allies, the Gallo brothers, for assassination by the Profaci leadership. For his trouble, Persico earned himself the perhaps unflattering sobriquet of ‘The Snake’, but also, ultimately, the leadership of the crime family.

As for Luciano himself, not only was he a serial traitor, he was also guilty of what was, in Mafia eyes, an even more egregious and unpardonable transgression—namely, he was a police informer

Thus, during his trial for prostitution offences, Raab reveals: 

“The most embarrassing moment for the proud Mafia don was Dewey’s disclosure that in 1923, when he was twenty-five, Luciano had evaded a narcotics arrest by informing on a dealer with a larger cache of drugs. 
‘You’re just a stool pigeon,’ Dewey belittled him. ‘Isn’t that it?’ 
‘I told them what I knew,’ a downcast Luciano replied” (p55). 

In this, Luciano was again to set a pattern that, later in the century, many other mafiosi would follow. 

Indeed, by the end of the century, the fabled Mafia code of omertà seems to have been, rather like its earlier ban on drug-dealing, almost as often honored in the breach as actually complied with, at least for mafiosi otherwise facing long spells of incarceration with little prospect of release.

At least since Abe ‘Kid Twist’ Reles, who, being non-Italian, was not, of course, a ‘made man’, and who, at any rate, died under mysterious circumstances, none, to my recollection, ever paid the ultimate price for their betrayal. 

Instead, the main consequence of their breaking the code of omertà seems to have been reduced sentences, government protection under the witness protection program and an end to their Mafia careers.

Yet an end to their mafia careers rarely meant an end to their criminal careers, and few turncoat mafiosi seem to have gone straight, let alone been genuinely repentant.

The most famous case is that of Gambino underboss, and Gotti nemesis, Sammy ‘The Bull’ Gravano, then the highest-ranking New York mafioso ever to become a cooperating witness, who helped put John Gotti and a score of other leading mafiosi behind bars with his testimony.

In return for this testimony, Gravano was to serve less than five years in prison, despite admitting involvement in as many as nineteen murders.

In defence of this exceptionally lenient sentence, Leo Glasser, the judge responsible for sentencing both Gravano and Gotti, naïvely insisted that Gravano’s craven treachery was “the bravest thing I have ever seen” and declared “there has never been a defendant of his stature in organized crime who has made the leap he has made from one social planet to another” (p449). 

In fact, however, just a few years after his release, Gravano was convicted of masterminding a multi-million-dollar ecstasy ring in Arizona, where the authorities had relocated him for his own protection. 

His status as a notorious mafia stoolie seems to have impeded his reentry into the crime world hardly at all. 

On the contrary, it seems to have been precisely his status as a famed former Gambino family underboss that recommended him to the starstruck young ecstasy trafficking crew who, having befriended his son, were only too happy to allow the infamous Sammy Gravano to assume leadership of the crime ring they themselves had established and built up. 

By the end of the century, only the secretive and close-knit Bonanno Family, long the only New York family still to restrict membership to those of full-Sicilian (not just Southern Italian) ancestry, could brag that they were, perhaps for this reason, the only New York family never to have had a fully-inducted member become a cooperating government witness.  

Yet even this claim, though technically true, was largely disingenuous. 

Indeed, the Bonannos had actually been expelled from the Commission for reportedly being on the verge of inducting undercover FBI agent Joe Pistone (alias ‘Donnie Brasco’) into the family just before his true identity was revealed by the authorities.

Nevertheless, this did not stop Bonanno boss Joe Massino:

“Proudly inform[ing] the new soldiers of the family’s unique record among all of the nation’s borgatas as the only American clan that had never spawned a stool pigeon or cooperative government witness” (p640).

It is therefore somewhat ironic that, in 2004, it was Massino himself who would become the first ever actual boss of a New York family to become a cooperating witness. 

Mafia Decline 

Besides its inadequate treatment of early New York Mafia history (see above), the other main reason that Raab’s ‘Five Families’ cannot be regarded as the definitive history of the New York Mafia is that Raab himself evidently doesn’t believe the story is over. On the contrary, in his subtitle, he predicts, and, in his Afterword, reports a ‘resurgence’.

The reason Raab wrongly predicts a Mafia revival is that he fails to understand the ultimate reason behind mafia malaise, attributing it primarily to law enforcement success: 

“The combined federal and state campaigns were arguably the most successful anticrime expedition in American history. Over a span of two decades, twenty-four Mob families, once the best-organized and most affluent criminal associations in the nation, were virtually eliminated or seriously undermined” (p689). 

The real reason for Mafia decline is demographic. 

Italian-Americans no longer live in close-knit urban ghettos. Indeed, outside of Staten Island, few even live in New York City proper (i.e. the five boroughs). 

Italian Harlem long ago transformed into Spanish Harlem and, beyond the tourist-trap restaurants and annual parade, there is now little of Italy left in what little remains of Manhattan’s Little Italy. 

Even Bensonhurst, perhaps the last neighborhood in New York to be strongly associated with Italian-Americans, was never really an urban ghetto, being neither deprived nor monoethnic, and is now majority nonwhite.[9]

Italian-Americans are now often middle-class, and the smart ambitious ones now aspire to be professionals and legitimate businessmen rather than criminals.

Indeed, I would argue that Italian-Americans no longer exist as a distinct demographic. They are now fully integrated into the American mainstream. 

Indeed, I suspect that, as with the infamous plastic paddy phenomenon with respect to Irish ancestry, few self-styled ‘Italian-Americans’ are even of 100% Italian ancestry. Thus, as far back as 1985, the New York Times reported: 

“8 percent of Americans of Italian descent born before 1920 had mixed ancestry, but 70 percent of them born after 1970 were the children of intermarriage… Among Americans of Italian descent under the age of 30, 72 percent of men and 64 percent of women married someone with no Italian background” (Collins, The Family: A new look at intermarriage in the US, New York Times, Feb 11 1985). 

Thus, almost of necessity, the five families relaxed their traditional requirement that inductees be of full-Italian ancestry, since otherwise so few Americans would be eligible. The Gambinos acted first, inducting, and eventually promoting to acting boss, John Gotti’s son, Gotti Junior, at the behest of his father, despite the (part-) Russian, or possibly Russian-Jewish, ancestry of his mother (p462). 

Recently, Raab reports, in an attempt to restore discipline, the earlier requirement has been reimposed.  

However, in the absence of a fresh infusion of zips fresh off the boat from Sicily (which Raab also anticipates: p703), this will only further dry up the supply of potential recruits, since so few native-born Americans now qualify as 100% Italian in ancestry.

Raab reports that the supposed Mafia revival has resulted from a reduction in FBI scrutiny, owing to: 

1) The perception that the Mafia threat is extinguished;

2) A change in FBI priorities post-9/11, with the FBI increasingly focusing on domestic terror at the expense of Mafia investigation.  

The lower public profile of the five families in recent years, Raab believes, only shows that Mafiosi have been slipping below the radar, quietly returning to their roots:  

“Gambling and loan-sharking—the Mafia’s symbiotic bread-and-butter staples—appear to be unstoppable” (p692).[10]

But, in the aftermath of the Supreme Court decision in Murphy v. National Collegiate Athletic Association, sports betting is now legal throughout the New York metropolitan area (i.e. in New York, New Jersey and Connecticut), and indeed in most of the US, so one of these two staples is now likely off the menu for the foreseeable future. 

Moreover, the big money is increasingly in narcotics, and, as Raab concedes, in contrast with their success in taking down the Mafia, the FBI’s “more costly half-century campaign against the narcotics scourge remains a Sisyphean failure” (p689). 

This has meant that non-Italian criminals have increasingly taken over the drug-trade, especially Latin-American cartels, who have taken over importation and wholesale, and black and Latino street gangs, who control most distribution at the street-level. 

In truth, the replacement of Italian-Americans in organized crime is only the latest in an ongoing process of ethnic succession—in New York, the Italians had themselves replaced Jews, who had dominated organized crime in New York in the early twentieth century into the prohibition era, and who had themselves replaced the Irish gangs and political bosses of the nineteenth century (see Ianni, Black Mafia: Ethnic Succession in Organized Crime). 

The future likely belongs to blacks and Hispanics. The belief that the latter are somehow incapable of operating with the same level of organization and sophistication as the Mafia is, not only racist, but also likely wrong. 

Indeed, the fact that, prior to recent times, the Mafia in particular, not organized crime in general, was a major FBI priority may even have acted as a form of racially-based ‘affirmative action’ for black and Hispanic criminals. 

Raab may be right that the shift in FBI priorities post-9/11 has permitted a resurgence of organized crime. Indeed, in truth, organized crime, like the drug problem that fuels it, never really went away.

However, there is no reason to anticipate that any resurgence will come bearing an Italian surname, or wearing a fedora and Italian suit.


[1] Indeed, since Italian-American crime is in terminal decline – not just in New York – the time is also ripe for a definitive history of Italian-American organized crime in general. Of course, Raab’s book does not purport to be a history of Italian-American organized crime in general. It is a history only of the famed ‘five families’ operating in the New York metropolitan area, and hence only of Italian-American organized crime in that city. 
However, it does purport, in its subtitle, to be a history of ‘America’s Most Powerful Mafia Empires’. Probably the only Italian-American crime syndicate (or at least predominantly Italian-American crime syndicate) outside of New York with a claim to qualifying as one of ‘America’s Most Powerful Mafia Empires’ during most of the twentieth century is the Chicago Outfit. The Chicago Outfit, however, barely gets a mention in Raab’s mammoth book, and then only in passing.
Raab extends his gaze beyond the New York families to Mafia families based in other cities only during an extended, and probably misguided, discussion of the supposed role of the Mafia, in particular Florida boss, Santo Trafficante Jr., and New Orleans boss, Carlos ‘The Little Man’ Marcello, in the assassination of John F Kennedy.
However, even here, the Chicago Outfit receives short shrift, with infamous Chicago boss Sam ‘Momo’ Giancana receiving only passing mention by Raab, even though he features as prominently in JFK conspiracy theories as either Trafficante or Marcello.

[2] Of course, most mafiosi themselves likely believe this myth, just as many Freemasons probably themselves believe the exaggerated tales of their own venerability and spurious historical links to groups such as the Knights Templar. They are, in short, very much in thrall to their own mystique. This is among the reasons they are led to join the mafia in the first place. If claims of ancient origins were originally a myth cynically invented by mafiosi themselves, rather than presumed by outsiders, then modern mafiosi have certainly come to very much fall for their own propaganda.

[3] This is certainly the suggestion of Francis Ianni in Black Mafia: Ethnic Succession in Organized Crime, who argues that the American Mafia was already ceding power to black and Hispanic organized crime by at least the 1970s. This view seems to have some substance. 
Early to mid-twentieth-century black Harlem crime boss Bumpy Johnson, for all his infamy, was said to be very much subservient to the Italian mafia families. Indeed, in the 1920s, a white criminal like Owney Madden was able to run the famous Cotton Club, initially with a whites-only door policy, in the heart of black Harlem.
However, by the 1970s, Harlem was mostly a no-go area for whites, Italian-Americans very much included. Therefore, even if the Mafia had the upper hand in any negotiations, they nevertheless had to delegate to blacks any criminal activities in black areas of the city.
Thus, Nicky Barnes, the major heroin distributor in Harlem, was said to buy his heroin from mafia importers and wholesalers, especially ‘Crazy’ Joe Gallo, with whom he was said to have formed a relationship while they were both in prison together. Similarly, unlike his portrayal in the movie American Gangster, Frank Lucas also seems to have bought his heroin primarily through mafia wholesalers. However, he may also have had an indirect link to the Golden Triangle through his associate Ike Anderson, a serving soldier in the Vietnam War.
However, both Lucas and Barnes necessarily had their own crews of black dealers to distribute the drugs on the street. The first black criminal in New York said to have operated entirely independently of the Mafia was Frank Matthews, who disappeared under mysterious circumstances while on parole.

[4] Intriguingly, Professor of Criminal Justice, Howard Abadinsky, in his textbook on organized crime, links the higher public profile adopted by Capone and Gotti to the fact that both trace their ancestry, not to Sicily, but rather to Naples, where the local Camorra have long cultivated a higher public profile, and typically adopted a flashier style of dress and demeanor, than their Sicilian Mafia equivalents (Organized Crime, 4th Edition: p18).
Thus, historian John Dickie refers to a “longstanding difference between the public images of the two crime fraternities”: 

“The soberly dressed Sicilian Mafioso has traditionally had a much lower public profile than the Camorrista. Mafiosi are so used to infiltrating the state and the ruling elite that they prefer to blend into the background rather than strike poses of defiance against the authorities. The authorities, after all, were often on their side. Camorristi, by contrast, often played to an audience” (Mafia Republic: p248). 

Abadinsky concurs that: 

“While even a capomafioso exuded an air of modesty in both dress and manner of speaking, the Camorrista was a flamboyant actor whose manner of walking and style of dress clearly marked him out as a member of the società” (Organized Crime, 4th Edition: p18). 

Abadinsky therefore tentatively observes: 

“In the United States the public image of Italian-American organized crime figures with Neapolitan heritage has tended towards Camorra, while their Sicilian counterparts have usually been more subdued. Al Capone, for example, and, in more recent years, John Gotti, are both of Neapolitan heritage” (Organized Crime, 4th Edition: p18). 

However, while true, I cannot see how this could be anything other than a coincidence, since both Capone and Gotti were born and spent their entire lives in the USA, Gotti being fully two generations removed from the old country, and neither seems to have had parents or other close relatives who were involved in crime and could somehow have passed on this cultural influence from Naples – unless perhaps Abadinsky is proposing some sort of innate, heritable, racial difference between Neapolitans and Sicilians, which seems even more unlikely.

[5] Gigante is not the only organized crime boss accused of malingering. Neapolitan Camorra boss, Raffaele Cutolo, alias ‘The Professor’, also stood accused of faking mental illness. However, whereas Gigante did so in order to avoid prison, Cutolo, apart from eighteen months living on the run from the authorities after escaping, spent virtually the entirety of his career as a crime boss locked up, being periodically shuttled between psychiatric hospitals and prisons. 

[6] Actually, not all crimes necessarily involve dishonesty – e.g. crimes of passion, some crimes of violence. However, any mafioso necessarily has to be dishonest, since otherwise he would admit his crimes to the authorities and hence not enjoy a long career. Indeed, the very code of omertà, though conceptualized as a code of honour, demands dishonesty, at least in one’s dealings with the authorities, since it forbids both informing to the authorities regarding the crimes of others, and admitting the existence of, or one’s membership of, the criminal fraternity itself. 

[7] On the other hand, if there was never outright war between families after the Castellammarese War, nevertheless bosses of some families did sometimes attempt to sponsor ‘regime change’ in other families, by deposing other bosses, both in New York and beyond. For example, as discussed above, Bonanno family boss Joe Bonanno, acting in concert with Joe Magliocco, the then-boss of what was then known as the Profaci family, supposedly conspired to assassinate the bosses of the other three New York families, only to have their scheme betrayed by the assigned assassin, Joe Colombo, who was then himself rewarded for his betrayal by being appointed as boss of the family that thenceforth came to be named after him.
Similarly, Genovese boss Vincent ‘The Chin’ Gigante and Lucchese boss Tony ‘Ducks’ Corallo together attempted unsuccessfully to assassinate Gambino boss John Gotti as revenge for Gotti’s own unauthorised assassination of his predecessor, Paul Castellano, which they saw as a violation of Mafia rules, whereby the assassination of a boss was, at least in theory, only permissible with the prior consent and authority of the Commission. The attempted assassination, carried out by Vittorio ‘Little Vic’ Amuso and Anthony ‘Gaspipe’ Casso, themselves later to become boss and underboss of the Luccheses, resulted in the death of Gambino underboss Frank DeCicco in a car bomb, but not Gotti himself.

[8] In truth, Luciano seems to have invented neither the five families nor the Commission. According to Mike Dash in his excellent The First Family, the Commission, under the earlier name ‘the Council’, actually existed long before Luciano came to prominence. 
As for the five families, surely if Luciano, or indeed Maranzano before him (as other versions relate), were to invent afresh the structure of the New York Mafia in a ‘top down’ process, they would have created a more unitary, centralized structure in order to maximize their own power and control as overall boss of bosses, rather than devolving power to the bosses of the individual families, who themselves issued orders to capos and soldiers.
As I have discussed previously, the power of the so-called National Commission was, to draw an analogy with international relations, largely intergovernmental rather than federal, let alone unitary or centralized. Its power lay in its perceived ‘legitimacy’ among mafiosi. As Stalin is said to have contemptuously remarked of the Pope, the Commission commanded no divisions (nor any ‘crews’, capos or soldiers) of its own.
In reality, Maranzano and Luciano surely at most merely gave formal recognition to factions which long predated the Castellammarese War and its aftermath and whose independent power demanded recognition. Indeed, the Commission was even initially said to have included non-Italians such as Dutch Schultz, if only because the power of the ‘Bronx Beer Baron’ simply demanded his inclusion if the Commission were to be at all effective in regulating organized crime in New York.

[9] Raab, for his part, anticipates that Mafia rackets will increasingly, like Italian-Americans themselves, migrate to the suburbs: 

“A strategic shift could be exploiting new territories. Although big cities continue to be glittering attractions, there are signs that the Mafia, following demographic trends, is deploying more vigorously in suburbs. There, the families might encounter police less prepared to resist them than federal and big-city investigators. ‘Organized crime goes where the money is, and there’s money and increasing opportunities in the suburbs,’ Howard Abadinsky, the historian, observes. Strong suburban fiefs have already been established by the New York, Chicago, and Detroit families” (p707). 

However, organized crime tends to thrive in poor, close-knit communities in deprived areas that lack trust in the police and authorities and are hence unwilling to turn to the latter for protection. If the Mafia attempts to make inroads into the suburbs, it will likely come up against assimilated, middle-class Americans only too willing to turn to the police for protection. In short, there is a reason why organized crime has largely been absent from middle-class suburbia.

[10] Although he wrote ‘Five Families’ several years before the legalization of sports betting in most of America, New York City included, Raab seems to anticipate that legalization will have little if any effect on Mafia revenue from illegal sports books, writing: 

“Sensible gamblers will always prefer wagering with the Mob rather than with state-authorized Off-Track Betting parlors and lotteries. Bets on baseball, football, and basketball games placed with a bookie have a 50 percent chance of winning, without the penalty of being taxed, while the typical state lottery is considered a pipe dream because the chance of winning is infinitesimal” (p694). 

It is, of course, true that lotteries, almost by definition, involve long odds and little realistic chance of winning. However, the same was also true of the illegal numbers rackets that were a highly lucrative source of income for predominantly black ‘policy kings’ (and queens) in early twentieth century America. Indeed, this racket was so lucrative that eventually major white organized crime figures like Dutch Schultz in New York and Sam Giancana in Chicago sought to take it over.
Yet, if winning a state lottery is indeed a ‘pipe dream’, the same is not true of legalized sports betting. On the contrary, here, the odds are as good as in illegal Mafia-controlled sports betting, and, given the legal regulation, prospective gamblers can be more confident that they will not be ripped off by the bookies.
Thus, in most jurisdictions where off-track sports betting is legal and subject to few legal restrictions, there is little if any market for illegal sports betting. Hence the legalization of sports betting in most of America will likely mean that sports betting is no longer controlled by organized crime, let alone the Mafia, just as the end of Prohibition in 1933 similarly led to the decline of the market for moonshine and bootleg alcohol.

In Defence of Physiognomy

Edward Dutton, How to Judge People by What they Look Like (Wrocław: Thomas Edward Press, 2018) 

‘Never judge a book by its cover’ – or so a famous proverb advises. 

However, given that Edward Dutton’s ‘How to Judge People by What they Look Like’, represents, from its provocative title onward, a spirited polemic against this received wisdom, one is tempted, in the name of irony, to review his book entirely on the basis of its cover. 

I will resist this temptation. However, it is perhaps worth pointing out that two initial points are apparent, if not from the book’s cover alone, then at least from its external appearance. These are: 

1) It is rather cheaply produced and apparently self-published; and

2) It is very short – a pamphlet rather than a book.[1]

Both these facts are probably excusable by reference to the controversial and politically-incorrect nature of the book’s title, theme and content.

Thus, on the one hand, the notion that we can, with some degree of accuracy, judge people by appearances alone is a very politically-incorrect idea and hence one that many publishers would be reluctant to associate themselves with or put their name to.

On the other hand, the fact that the topic is so controversial may also explain why the book is so short. After all, relatively little research has been conducted on this topic for precisely this reason.

Moreover, even such research as has been conducted is often difficult to track down. 

After all, physiognomy, the field of research which Dutton purports to review, is no longer a recognized science. On the contrary, most people today dismiss it as a discredited pseudoscience.

Therefore, there is no ‘International Journal of Physiognomy’ available at the click of a mouse on ScienceDirect. 

Neither are there any Departments of Physiognomy or Professors of Physiognomy at major universities, or a recent undergraduate, or graduate-level textbook on physiognomy collating all important research on the subject. Indeed, the closest thing we have to such a textbook is Dutton’s own thin, meagre pamphlet. 

Therefore, not only has relatively little research been conducted in this area, at least in recent years, but also such research as has been conducted is spread across different fields, different journals and different researchers, and hence not always easy to track down. 

Moreover, such research rarely actually refers to itself as ‘physiognomy’, in part precisely because physiognomy is widely regarded as a pseudoscience and hence as something with which researchers, even those directly researching correlations between morphology and behaviour, are reluctant to associate themselves.[2]

Therefore, conducting a key word search for the term ‘physiognomy’ in one or more of the many available databases of scientific papers would not assist the reader much, if at all, in tracking down relevant research.[3]

It is therefore not surprising that Dutton’s book is quite short. 

For this same reason, it is perhaps also excusable that Dutton has evidently failed to track down some interesting studies relevant to his theme. 

For example, a couple of interesting studies not cited by Dutton purported to uncover an association between behavioural inhibition and iris pigmentation in young children (Rosenberg & Kagan 1987; Rosenberg & Kagan 1989). 

Another interesting study not mentioned by Dutton presents data apparently showing that subjects are able to distinguish criminals from non-criminals at better than chance levels merely from looking at photographs of their faces (Valla, Ceci & Williams 2011).[4]

Such omissions are inevitable and excusable. More problematically, however, Dutton also seems to have omitted at least one entire area of research relevant to his subject-matter – namely, research on so-called minor physical anomalies or MPAs.

These are certain physiological traits, interpreted as minor abnormalities, probably reflecting developmental instability and mutational load, which have been found in several studies to be associated with various psychiatric and developmental conditions, as well as being a correlate of criminal behaviour (see below).

Defining the Field 

Yet Dutton not only misses out on several studies relevant to the subject-matter of his book; he is also not entirely consistent in identifying just what the precise subject-matter of his book actually is. 

It is true that, at many points in his book, he talks about physiognomy. 

This term is usually defined as the science (or, according to many people, the pseudoscience) of using a person’s morphology in order to determine their character, personality and likely behaviour. 

However, the title of Dutton’s book, ‘How to Judge People by What They Look Like’, is potentially much broader. 

After all, what people look like includes, not just their morphology, but also, for example, how they dress and what clothes they wear.

For example, we might assess a person’s job from their uniform, or, more generally, their socioeconomic status and income level from the style and quality of their clothing, or the designer labels and brand names adorning it. 

More specifically, we might even determine their gang allegiance from the colour of their bandana, and their sexuality and fetishes from the colour and positioning of their handkerchief.

We also make assessments of character from clothing style. For example, a person who is sloppily dressed and is hence perceived not to take care in his or her appearance (e.g. whose shirt is unironed or unclean) might be interpreted as lacking in self-worth and likely to produce similarly sloppy work in whatever job s/he is employed at. On the other hand, a person always kitted out in the latest designer fashions might be thought shallow and materialistic. 

In addition, certain styles of dress are associated with specific youth subcultures, which are often connected, not only to taste in music, but also with lifestyle (e.g. criminality, drug-use, political views).[5]

Dutton does not discuss the significance of clothing choice in assessments of character. However, consistent with this broader interpretation of his book’s title, Dutton does indeed sometimes venture beyond physiognomy in the strict sense. 

For example, he discusses tattoos (p46-8) and beards (p60-1). 

I suppose the decision to get tattooed or grow a beard reflects both genetic predispositions and environmental influence, just as all aspects of phenotype, including morphology, reflect the interaction between genes and environment. 

However, this is also true of clothing choice, which, as I have already mentioned, Dutton does not discuss.  

On the other hand, both tattoos and, given that they take time to grow, even beards are relatively more permanent than whatever clothes we are wearing at any given time. 

However, Dutton also discusses the significance of what he terms a “blank look” or “glassy eyes” (p57-9). But this is a mere facial expression, and hence even more transitory than clothing. 

Yet Dutton omits discussion of other facial expressions which, unlike his wholly anecdotal discussion of “glassy eyes”, have been researched by ethologists at least since Charles Darwin’s seminal The Expression of the Emotions in Man and Animals was published in 1872. 

Thus, Paul Ekman famously demonstrated that the meanings associated with at least some facial expressions are cross-culturally universal (e.g. smiling being associated with happiness). 

Indeed, some human facial expressions even appear to be homologues of behaviour patterns among non-human primates. For example, it has been suggested that the human smile is homologous with an appeasement gesture, namely the baring of clenched teeth (aka a ‘fear grin’), among chimpanzees. 

Of particular relevance to the question posed in Dutton’s book title, namely ‘How to Judge People by What They Look Like’, it is suggested that some facial expressions lie partly outside of conscious control – e.g. blushing when embarrassed, going pale when shocked or fearful.  

Indeed, even a fake smile is said to be distinguishable from a Duchenne smile. 

This then explains the importance of reading facial expressions when playing poker or interrogating suspects, as people often inadvertently give away their true feelings through their facial expressions, behaviour and other mannerisms (e.g. so-called microexpressions). 

Somatotypes and Physique 

Dutton begins his book with a remarkable attempt to resurrect William Sheldon’s theory that certain types of physiques (or, as Sheldon called them, somatotypes) are associated with particular types of personality (or as Sheldon called them, constitutions). 

Although the three dimensions by which Sheldon classified physiques – endomorphy, ectomorphy and mesomorphy – have proven useful as dimensions for classifying body-type, Sheldon’s attempt to equate these ideal types with personality is now widely dismissed as pseudoscience. 

Dutton, however, argues that physique is indeed associated with character, and moreover provides what was conspicuously lacking in Sheldon’s own exposition – namely, compelling theoretical reasons for the postulated associations. 

Yet, interestingly, the associations suggested by Dutton do indeed to some extent mirror those first posited by William Sheldon over half a century previously.

Whereas, elsewhere, Dutton draws on previously published research, here, Dutton’s reasoning is, to my knowledge, largely original to himself, though, as I show below, psychometric studies do support the existence of at least some of the associations he postulates. 

This part of Dutton’s book represents, in my view, the most important and convincing original contribution in the book. 

Endomorphy/Obesity, Self-Control and Conscientiousness

First, he discusses what Sheldon called endomorphy – namely, a body-type that can roughly be equated with what we would today call fatness or obesity. 

Dutton points out that, at least in contemporary Western societies, where there is a superabundance of food, and starvation is all but unknown even among the relatively less well-off, obesity tends to correlate with personality. 

In short, people who lack self-control and willpower will likely also lack the self-control and willpower to diet effectively. 

Endomorphy (i.e. obesity) is therefore a reliable correlate of the personality factor known to psychometricians as conscientiousness (p31-2).  

Although Dutton himself cites no data or published studies in support of this conclusion, nevertheless several published studies confirm an association between BMI and conscientiousness (Bagenjuk et al 2019; Jokela et al 2012; Sutin et al 2011). 

Obesity is also, Dutton claims, inversely correlated with intelligence. 

This is, first, because IQ is, according to Dutton, correlated with time-preference – i.e. a person’s willingness to defer gratification by making sacrifices in the short-term in return for a greater long-term pay-off. 

Therefore, low-IQ people, Dutton claims: 

“Are less able to forego the immediate pleasure of ice cream for the future positive of not being overweight and diabetic” (p31). 

However, far from intelligence being associated with a greater willingness to defer gratification, some evidence, not discussed by Dutton, suggests that intelligence is actually inversely correlated with conscientiousness, such that more intelligent people are actually on average less conscientious (e.g. Rammstedt et al 2016; cf. Murray et al 2014). 

This would suggest that low IQ people might, all else being equal, actually be more successful at dieting than their high IQ counterparts. 

However, according to Dutton, there is a second reason that low-IQ people are more likely to be fat, namely: 

“They are likely to understand less about healthy eating and simply possess less knowledge of what constitutes healthy food or a reasonable portion” (p31). 

This may be true. 

However, while there are some borderline cases (e.g. foods misleadingly marketed by advertisers as healthy), I suspect that virtually everyone knows that, say, eating lots of cake is unhealthy. Yet resisting the temptation to eat another slice is often easier said than done. 

I therefore suspect conscientiousness is a better predictor of weight than is intelligence. 

Interestingly, a few studies have investigated the association between IQ and the prevalence of obesity. However, curiously, most seem to be premised on the notion that, rather than low intelligence causing obesity, obesity somehow contributes to cognitive decline, especially in children (e.g. Martin et al 2015) and the elderly (e.g. Elias et al 2012). 

In fact, however, longitudinal studies confirm that, as contended by Dutton, it is low IQ that causes obesity rather than the other way around (Kanazawa 2014). 

At any rate, people lacking in intelligence and self-control also likely lack the intelligence and self-discipline to excel in school and gain promotions into high-income jobs, since both earnings and socioeconomic status correlate with both intelligence and conscientiousness.[6]

One can also, then, make better than chance assessments of a person’s socioeconomic status  and income from their physique. 

In other words, whereas in the past (and perhaps still in the developing world) the poor were more likely to starve or suffer from malnutrition and only the rich could afford to be fat, in the affluent west today it is the relatively less well-off who are, if anything, more likely to suffer from obesity and diseases of affluence such as diabetes and heart disease

This, then, all rather confirms the contemporary stereotype of the fat, lazy slob. 

However, Dutton also provides a let-off clause for offended fatties. Obesity is associated, not only with conscientiousness, but also with the factor of personality known as extraversion. This refers to the tendency to be outgoing, friendly and talkative, traits that are generally viewed positively. 

Several studies, again not cited by Dutton, do indeed suggest an association between extraversion and BMI (Bagenjuk et al 2019; Sutin et al 2011). Dutton, for his part, explains it this way: 

“Extraverts simply enjoy everything positive more, and this includes tasty (and thus unhealthy) food” (p32). 

Dutton therefore provides theoretical support to the familiar stereotype of, not only the fat, lazy slob, but also the jolly and gregarious fat man, and the ‘bubbly’ fat woman.[7]

Mesomorphy/Muscularity and Testosterone

The mesomorph was another of Sheldon’s supposed body-types. Mesomorphy can roughly be equated with muscularity. 

Here, Dutton concludes that: 

“Sheldon’s theory… actually fits quite well with what we know about testosterone” (p33). 

Thus, mesomorphy is associated with muscularity, and muscularity with testosterone. 

Yet testosterone, as well as masculinizing the body, also masculinizes brain and behaviour. 

This is why anabolic steroids, not only increase muscularity, but are also said to be associated with roid rage.[8]

Testosterone, at least during development, may also be associated, not only with muscularity, but also with certain aspects of facial morphology, such as a wide and well-defined jawline, prominent brow ridges, deep-set eyes and facial width.  

I therefore wonder if this might go some way towards explaining the finding, not mentioned by Dutton (but clearly relevant to his subject-matter), that observers are apparently able to identify convicted criminals at better than chance levels from a facial photograph alone (Valla, Ceci & Williams 2011).[9]

Testosterone and Autism 

Further exploring the effects of testosterone on both psychology and morphology, Dutton also proposes: 

“We would also expect the more masculine-looking person to have higher levels of autism traits” (p34). 

This idea seems to be based on Simon Baron-Cohen’s extreme male brain theory of autism. 

However, the relationship between, on the one hand, levels of androgens such as testosterone and, on the other, degree of masculinization in respect of a given sexually-dimorphic trait may be neither one-dimensional nor linear. 

Thus, interestingly, Kingsley Browne in his excellent Biology at Work: Rethinking Sexual Equality (which I have reviewed here) reports: 

“The relationship between spatial ability and [circulating] testosterone levels is described by an inverted U-shaped curve… Spatial ability is lowest in those with the very lowest and the very highest testosterone levels, with the optimal testosterone level lying in the lower end of the normal male range. Thus, males with testosterone in the low-normal range have the highest spatial ability” (Biology at Work: p115; Gouchie & Kimura 1991). 

In contrast, however, Dutton claims: 

“There is evidence that testosterone level in healthy males is positively associated with spatial ability” (p36). 

However, the only study he cites in support of this assertion was, according to its methodology section and indeed its very title, conducted among “older males”, between the ages of 60 and 75 (Janowsky et al 1994). 

Therefore, since testosterone levels are known to decline with age, this finding is not necessarily inconsistent with the relationship between testosterone and spatial ability described by Browne (see Moffat & Hampson 1996). 

This, of course, accords with the anecdotal observation that math nerds and autistic males are rarely athletic, square-jawed ‘alpha male’-types.[10]

Testosterone and Baldness 

Another trait associated with testosterone levels, according to Dutton, is male pattern baldness. Thus, Dutton contends: 

“Baldness is yet another reflection of high testosterone… [B]aldness in males, known as androgenic alopecia, is positively associated with levels of testosterone” (p55). 

As evidence, he cites both a review (Batrinos 2014) and some indirect anecdotal evidence: 

“It is widely known among doctors – I base this on my own discussions with doctors – that males who come to them in their 60s complaining of impotence tend to have full heads of hair or only very limited hair loss” (p55).[11]

If male pattern baldness is indeed associated with testosterone levels then this is somewhat surprising, because our perceptions regarding men suffering from male pattern baldness seem to be that they are, if anything, less masculine than other males. 

Thus, Nancy Etcoff, in Survival of the Prettiest (which I have reviewed here), reports that one study found that: 

“Both sexes assumed that balding men were weaker and found them less attractive” (Survival of the Prettiest: p121; Cash 1990).[12]

Yet, if the main message of Dutton’s book is that individual differences in morphology and appearance do indeed predict individual differences in behaviour, psychology and personality, then a second implicit theme seems also to be that our intuitions and stereotypes regarding the association between appearance and behaviour are often correct.  

True, it is likely that few people notice, say, digit ratios, or make judgements about people based on them either consciously or unconsciously. However, elsewhere, Dutton cites studies showing that subjects are able to estimate the IQ of male students at better than chance levels simply by viewing a photograph of their faces (Kleisner et al 2014; discussed at p50); and to distinguish homosexual from heterosexual men at better than chance levels from a facial photograph alone (Kosinski & Wang 2017; discussed at p66). 

Yet, according to Etcoff and Cash, perceptions regarding the personalities of balding men are almost the opposite of what would be expected if male pattern balding were indeed a reflection of high testosterone levels, as suggested by Dutton. 

In fact, however, although a certain level of testosterone is indeed a necessary condition for male pattern hair loss (this is why neither women nor castrated eunuchs experience the condition, though their hair does thin with age), this seems to be a threshold effect, and, among non-castrated males with testosterone levels within the normal range, levels of circulating testosterone do not seem to significantly predict either the occurrence, or severity, of male pattern baldness. 

Thus, healthline reports: 

It’s not the amount of testosterone or DHT that causes baldness; it’s the sensitivity of your hair follicles. That sensitivity is determined by genetics. The AR gene makes the receptor on hair follicles that interact with testosterone and DHT. If your receptors are particularly sensitive, they are more easily triggered by even small amounts of DHT, and hair loss occurs more easily as a result. 

In other words, male pattern baldness is yet another trait that is indeed related to testosterone, but does not evince a simple linear relationship with it. 

2D:4D Ratio

Another presumed correlate of prenatal androgens is 2D:4D ratio (aka digit ratio). 

Over the last two decades, a huge body of research has reported correlations between 2D:4D ratio and a variety of psychiatric conditions and behavioural propensities, including autism (Manning et al 2001), ADHD (Martel et al 2008; Buru 2020; Işık 2020), psychopathy (Blanchard & Lyons 2010), aggressive behaviours (Bailey & Hurd 2005; Benderlioglu & Nelson 2005), sports and athletic performance (Manning & Taylor 2001; Hönekopp & Urban 2010; Griffin et al 2012; Keshavarz et al 2017), criminal behaviour (Ellis & Hoskin 2015; Hoskin & Ellis 2014) and homosexuality (Williams et al 2000; Lippa 2003; Kangassalo et al 2011; Li et al 2016; Xu & Zheng 2016). 
Unfortunately, and slightly embarrassingly, Dutton apparently misunderstands what 2D:4D ratio actually measures. Thus, he writes: 

If the profile of someone’s fingers is smoother, more like a shovel, then it implies high testosterone. If, by contrast, the little finger is significantly smaller than the middle finger, which is highly prevalent among women, then it implies lower testosterone exposure” (p69). 

Actually, however, both the little finger and middle finger are irrelevant to 2D:4D ratio.

Indeed, for virtually everyone, “the little finger is significantly smaller than the middle finger”. This is, of course, precisely why the former is called “the little finger”.

Actually, 2D:4D ratio concerns the ratio between the index finger and the ring finger – i.e. the two fingers on either side of the middle finger. 

These fingers are, of course, the second and fourth digit, respectively, if you begin counting from your thumb outwards, hence the name ‘2D:4D ratio’. 

In evidently misnumbering his digits, Dutton, I can only conclude, began counting at the correct end, but missed out his thumb. 

At any rate, the evidence for any association between digit ratios and measures of behaviour and psychology is, at best, mixed. 

Skimming the literature on the subject, one finds many conflicting findings – for example, sometimes significant effects are found only for one sex, while other studies find the same correlations limited to the other sex (e.g. Bailey & Hurd 2005; Benderlioglu & Nelson 2005; see also Hilgard et al 2019), and also many failures to replicate earlier reported associations (e.g. Voracek et al 2011; Fossen et al 2022; Kyselicová et al 2021). 

Likewise, meta-analyses of published studies have generally found, at best, only small and inconsistent associations (e.g. Voracek et al 2011; Pratt et al 2016). Thus, 2D:4D ratio has been a major victim of the recent so-called replication crisis in psychology. 

Indeed, it is not entirely clear that 2D:4D ratio represents a useful measure of prenatal androgens in the first place (Hollier et al 2015), and even the universality of the sex difference that originally led researchers to posit such a link has been called into question (Apicella 2015; Lolli et al 2017).  

In short, the usefulness of digit ratio as a measure of exposure to prenatal androgens, let alone an important correlate of behaviour, psychology, personality or athletic performance, is questionable. 

Testosterone and Height 

The examples of male pattern baldness and spatial ability demonstrate that the effect of testosterone on some sexually-dimorphic traits is not necessarily always linear. Instead, it can be quite complex. 

Therefore, just because men, on average, score higher for a given trait than women, and this sex difference is ultimately a consequence of androgens such as testosterone, it does not necessarily follow that men with relatively higher levels of testosterone score higher for this trait than men with relatively lower levels of testosterone. 

Indeed, Dutton himself provides another example of such a trait – namely height. 

Thus, although men, in general, are taller than women, nevertheless, according to Dutton: 

Men who are high in testosterone… tend to be of shorter stature than those who are low in it. High levels of testosterone at a relatively early age have been shown to reduce stature” (p34).[13]

In evolutionary terms, Dutton explains this by reference to the controversial Life History Theory of Philippe Rushton, of whom Dutton seems to be, with some reservations, something of a disciple (p22-4). 

If true, this might explain why eunuchs who were castrated before entering puberty are said to grow taller, on average, than other men. 

Further corroboration is provided by the fact that, in the Netherlands, whose population is among the tallest in the world, excessively tall boys are sometimes treated with testosterone in order to prevent them growing any taller (de Waal et al 1995).[14]

This is said to occur because additional testosterone speeds up puberty, producing a growth spurt, but also bringing puberty to an earlier end, whereupon height stabilizes and we cease to grow any taller. This is discussed in Carole Hooven’s book Testosterone: The Story of the Hormone that Dominates and Divides Us.

‘Short Man Syndrome’?

Interestingly, although Dutton does not explore the idea, the association between testosterone levels and height among males may even explain the supposed phenomenon of short man syndrome (also referred to, by reference to the supposed diminutive stature of the French emperor Napoleon, as a Napoleon complex), whereby short men are said to be especially aggressive and domineering. 

This is something that is usually attributed to a psychological need among shorter men to compensate for their diminutive stature. However, if Dutton is right, then the supposed aggressive predilections of short men might simply reflect differences between shorter and taller men in testosterone levels during adolescence. 

Actually, however, so-called short man syndrome is likely a myth – and yet another way society in general demeans and belittles short men. Certainly, it is very much a folk-psychiatric diagnosis with no real evidential basis beyond the merely anecdotal.  

Indeed, far from short men being, on average, more aggressive and domineering than taller men, one study commissioned by the BBC actually found that short men were less likely to respond aggressively when provoked. 

Given that tall men have an advantage in combat, it would actually make sense for relatively shorter men to avoid potentially violent confrontations with other men where possible, since, all else being equal, they would be more likely to come off worse in any such altercation.  

Consistent with this, some studies have found a link between increased stature and anti-social personality disorder, which is associated with aggressive behaviours (e.g. Ishikawa et al 2001; Salas-Wright & Vaughn 2016), while another study found a positive association between height and dominance, especially among males (Malamed 1992).[15]

Height and Intelligence 

Height is also, Dutton reports, correlated with intelligence, with taller people having, on average, slightly higher IQs than shorter people.  

The association between height and IQ is, like most if not all of those discussed by Dutton in this book, modest in magnitude or effect size.[16]

However, unlike many other associations reported by Dutton, many of which are supported by just a single published study, or sometimes by purely theoretical arguments alone, the association between height and intelligence is robust and well-established.[17] Indeed, there is even a Wikipedia page on the topic. 

Dutton’s explanation for this phenomenon is that intelligence and height “have been sexually selected for as a kind of bundle” (p46). 

Females have sexually selected for intelligent men (because intelligence predicts social status and they have been specifically selected for this) but they have also selected for taller men, realising that taller men will be better able to protect them. This predilection for tall but intelligent men has led to the two characteristics being associated with one another” (p46). 

Actually, as I see it, this explanation would only work, or at least work much better, if both men and women had a preference for partners who are both tall and intelligent. 

This is indeed Arthur Jensen’s explanation for the association between height and IQ: 

Probably represents a simple genetic correlation resulting from cross-assortative mating for the two traits. Both height and ‘intelligence’ are highly valued in western culture. There is also evidence for cross-assortative mating for height and IQ. There is some trade-off between them in mate selection. When short and tall women are matched on IQ, educational level and social class of origin, for example, it is found that taller women tend to marry men of higher socioeconomic status… than do shorter women” (The G Factor: The Science of Mental Ability: p146). 

An alternative explanation might be that both height and intelligence reflect developmental stability and a lack of deleterious mutations. On this view, both height and intelligence might represent indices of genetic quality and lack of mutational load. 

However, this alternative explanation is inconsistent with the finding that there is no ‘within-family’ correlation between height and intelligence. In other words, when one looks at, say, full-siblings from the same family, there is no tendency for the taller sibling to have a higher IQ (Mackintosh, IQ and Human Intelligence: p6). 

This suggests that the genes that cause greater height are different from those that cause greater intelligence, but that they have come to be found in the same individuals through assortative mating, as suggested by Jensen and Dutton.[18]

Height and Earnings 

Although not discussed by Dutton, there is also a correlation between height and earnings. Thus, economist Steven Landsburg reports that: 

In general, an extra inch of height adds roughly an extra $1,000 a year in wages, after controlling for education and experience. That makes height as important as race or gender as a determinant of wages” (More Sex is Safer Sex: p53). 

This correlation could be mediated by the association between height and intelligence, since intelligence is known to be correlated with earnings (Case & Paxson 2009). 

However, one interesting study found that it was actually height during adolescence that accounted for the association, and that, once this was controlled for, adult height had little or no effect on earnings (Persico, Postlewaite & Silverman 2004). 

Controlling for teen height essentially eliminates the effect of adult height on wages for white males. The teen height premium is not explained by differences in resources or endowments” (Persico, Postlewaite & Silverman 2004). 

Thus, Landsburg reports: 

Tall men who were short in high school earn like short men, while short men who were tall (for their age) in high school [earn like tall men]” (More Sex is Safer Sex: p54). 

This suggests that it is height during a key formative period (a ‘critical period’) in adolescence that increases self-confidence, a self-confidence that then persists into adulthood and ultimately contributes to the higher adult earnings of men who were relatively taller as adolescents. 

On the other hand, however, Case and Paxson report that, in addition to being associated with adult height, intelligence is also associated with an earlier growth spurt. This leads them to conclude that adolescent height might be a better marker for cognitive ability than adult height, thereby providing an alternative explanation for Persico et al’s finding (Case & Paxson 2009). 

Head Size and Intelligence 

Dutton also discusses the finding that there is an association between intelligence and head-size. This is indeed true and is a topic I have written about elsewhere

However, Dutton’s illustration of this phenomenon seems to me rather unhelpful. Thus, he writes: 

Intelligent people have big heads in comparison to the size of their bodies. This association is obvious at the extremes. People who suffer from a variety of conditions that reduce their intelligence, including fetal alcohol syndrome or the zika virus, have noticeably very small heads” (p56). 

However, to me, this seems to be the wrong way to think about it. 

While it is indeed true that microcephaly (i.e. a smaller than usual head size) is usually associated with lower than normal intelligence levels, the reverse is not true. Thus, although head-size is indeed correlated with IQ, people suffering from macrocephaly (i.e. abnormally large heads) do not generally have exceptionally high IQs.  

Neither do people afflicted with forms of disproportionate dwarfism, such as achondroplasia, have higher than average IQs even though their heads are larger relative to their body-size than are those of ordinary-sized people.  

In short, rather than being, as Dutton puts it “obvious at the extremes”, the association between head-size and intelligence is obvious at only one of the extremes and not at all apparent at the other extreme. 

In general, species, individuals and races with larger brains have higher intelligence because brain tissue is highly metabolically expensive and therefore unlikely to evolve without some compensating advantage (i.e. higher intelligence). 

However, conditions such as achondroplasia and macrocephaly did not evolve through positive selection. On the contrary, they are pathological and maladaptive. Therefore, in these cases, the additional brain tissue may indeed be wasted and hence confer no cognitive advantage. 

Mate Choice 

In evolutionary psychology, there is a large literature on human mate-choice and beauty/attractiveness standards. Much of this depends on the assumption that the physical characteristics favoured as mate-choice criteria represent fitness-indicators, or otherwise correlate with traits desirable in a mate. 

For example, a low waist-to-hip ratio (or ‘WHR’) is said to be perceived as attractive among females because it is supposedly a correlate of both health and fertility. Similarly, low levels of fluctuating asymmetry are thought to be perceived as attractive by members of the opposite sex in both humans and other animals, supposedly because it is indicative of developmental stability and hence indirectly of genetic quality

Dutton reviews some of this literature. However, an introductory textbook on evolutionary psychology (e.g. David Buss’s Evolutionary Psychology: The New Science of the Mind), or on the evolutionary psychology of mating behaviour in particular (e.g. David Buss’s The Evolution of Desire), would provide a more comprehensive review. 

Also, some of Dutton’s speculations are rather unconvincing. He claims: 

Hipsters with their Old Testament beards are showcasing their genetic quality… Beards are a clear advertisement of male health and status. They are a breeding ground for parasites” (p61). 

However, if this is so, then it merely raises the question as to why beards have come back into fashion only very recently. Indeed, until the last few years, beards had not, to my knowledge, been in fashion for men in the west since the 1970s.[19]

Moreover, it is not at all clear that beards do increase attractiveness (e.g. Dixson & Vasey 2012). Rather, it seems that beards increase perceptions of a man’s age, dominance, social status and aggressiveness, but not of his attractiveness.[20]

This suggests that beards are more likely to have evolved through intrasexual selection (i.e. dominance competition or fighting between males) than by intersexual selection (i.e. female choice). 

This is actually consistent with a recently-emerging consensus among evolutionary psychologists that human male physiology (and behaviour) has been shaped more by intrasexual selection than by intersexual selection (Puts 2010; Kordsmeyer et al 2018). 

Consistent with this, Dutton notes: 

“[Beards] have been found to make men look more aggressive, of higher status, and older… in a context in which females tend to be attracted to slightly older men, with age tending to be associated with status in men” (p61). 

However, this raises the question as to why, today, most men prefer to look younger.[21]

Are Feminine Faces More Prone to Infidelity?

Another interesting idea discussed by Dutton is that mate-choice criteria may vary depending on the sort of relationship sought. For example, he suggests: 

A highly feminine face is attractive, in particular in terms of a short term relationship… [where] a healthy and fertile partner is all that is needed” (p43). 

In contrast, however, he concludes that for a long-term relationship a less feminine face may be desirable, since he contends “being extremely feminine in terms of secondary sexual characteristics is associated with an r-strategy” and hence supposedly with a greater risk of infidelity (p43).[22]

However, Dutton presents no evidence in favour of the claim that less feminine women are less prone to sexual infidelity. 

Actually, on theoretical grounds, I would contend that the precise opposite relationship is more likely to exist. 

After all, less feminine and more masculine females, having been subjected to higher levels of androgens, would presumably also have a more male-typical sexuality, including a higher sex drive and a preference for promiscuous sex with multiple partners. 

Indeed, there is data in support of this conclusion from studies of women afflicted with a rare condition, congenital adrenal hyperplasia. Women with this condition are exposed to abnormally high levels of masculinizing androgens such as testosterone, both in the womb and sometimes in later life, and, as a consequence, exhibit a more male-typical psychology and sexuality than other females. 

Thus, Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here) reports:  

There is evidence that certain aspects of adult male sexuality result from the effects of prenatal and postpubertal androgens: before the discovery of cortisone therapy women with adrenogenital syndrome [AGS] were exposed to abnormally high levels of androgens throughout their lives, and clinical data on late-treated AGS women indicate clear-cut tendencies toward a male pattern of sexuality” (The Evolution of Human Sexuality: p290). 

Thus, citing the work of, among others, the much-demonized John Money, Symons reports that women suffering from adrenogenital syndrome: 

Tended to exhibit clitoral hypersensitivity and an autonomous, initiatory, appetitive sexuality which investigators have characterized as evidencing a high sex drive or libido” (The Evolution of Human Sexuality: p290). 

This suggests that females with a relatively more masculine appearance, having been subject, on average, to higher levels of masculinizing androgens, will also evidence a more male-typical sexuality, including greater promiscuity and hence presumably a greater proclivity towards infidelity, rather than a lesser tendency as theorized by Dutton. 

Good Looks, Politics and Religion 

Dutton also cites studies showing that conservative politicians, and voters, are more attractive than liberals (Peterson & Palmer 2017; Berggren et al 2017). 

By way of explanation for these findings, Dutton speculates that in ancestral environments: 

Populations… so low in ethnocentrism as to espouse Multiculturalism and reject religion would simply have died out… Therefore… the espousal of leftist dogmas would partly reflect mutant genes, just as the espousal of atheism does. This elevated mutational load… would be reflected in their bodies as well as their brains” (p76). 

However, this seems unlikely, since atheism and possibly socially liberal political views as well have usually been associated with higher intelligence, which is probably a marker for good genes.[23]

Moreover, although mutations might result in suboptimal levels of both ethnocentrism and religiosity, such suboptimal levels would presumably manifest not only as deficient but also as excessive levels of religiosity and ethnocentrism. 

This would suggest that religious fundamentalists and extreme xenophobes and racial supremacists would be just as mutated, and hence just as ugly, as atheists and extreme leftists supposedly are. 

Yet Dutton instead insists that religious fundamentalists, especially Mormons, tend to be highly attractive (Dutton et al 2017). However, he and his co-authors cite little evidence for this claim beyond the merely anecdotal.[24]

The authors of the original paper, Dutton reports, themselves suggested an alternative explanation for the greater attractiveness of conservative politicians, namely: 

Beautiful people earn more, which makes them less inclined to support redistribution” (p75). 

This, to me, seems both simpler and more plausible. However, in response, Dutton observes: 

There is far more to being… right-wing… than not supporting redistribution” (p75). 

Here, he is right. The correlation between socioeconomic status/income and political ideology and voting is actually quite modest (see What’s Your Bias). 

However, earnings do still correlate with voting patterns, and this correlation is perhaps enough to explain the modest association between physical attractiveness and political opinions. 

Nevertheless, other factors may also play a role. For example, a couple of studies have found, among men, an association between grip strength and support for policies that benefit oneself economically (Peterson et al 2013; Peterson & Laustsen 2018). 

Grip strength is associated with muscularity, which is generally considered attractive in males. 

Since leading politicians mostly come from middle-class, well-to-do, if not elite, backgrounds, economic self-interest would incline them towards right-wing policies, suggesting that conservative male politicians are likely to be, on average, more muscular, and hence more attractive, than liberal or leftist politicians.

Indeed, Noah Carl has even purported to observe, and presents evidence suggesting, a general, and widening, masculinity gap between the political left and right, and some studies have found evidence that more physically formidable males have more conservative and less egalitarian political views (Price et al 2017; Kerry & Murray 2018). 

Since masculinity in general (e.g. not just muscularity, but also square jaws etc.) is associated with attractiveness in males (see discussion here), this might explain at least part of the association between political views and physical attractiveness. 

On the other hand, among females, an opposite process may be at work. 

Among women, leftist politics seem to be strongly associated with feminist views. 

Since feminists reject traditional female sex roles, it is likely they would be relatively less ‘feminine’ than other women, perhaps having been, on average, subjected to relatively higher levels of androgens in the womb, masculinizing both their behaviour and appearance. 

Yet it is relatively more feminine women, with feminine, sexually-dimorphic traits such as large breasts, low waist to hip ratios, and neotenous facial features, who are perceived by men as more attractive.

It is therefore unsurprising that feminist women in particular tend to be less attractive than women who are attracted to traditional sex roles.[25]

Developmental Disorders and MPAs

One study cited by Dutton found that observers are able to estimate a male’s IQ from a facial photograph alone at better than chance levels (Kleisner et al 2014). To explain this, Dutton speculates: 

Having a small nose is associated with Downs [sic] Syndrome and Foetal Alcohol Syndrome and this would have contributed to our assuming that those with smaller noses were less intelligent” (p51). 

Thus, he explains: 

“[Whereas] Downs [sic] Syndrome and Foetal Alcohol Syndrome are major disruptions of developmental pathways and they lead to very low intelligence and a very small nose… even minor disruptions would lead to slightly reduced intelligence and a slightly smaller nose” (p51-2). 

Indeed, foetal alcohol syndrome itself seems to exist on a continuum and is hence a matter of degree. 

Going further than Dutton, I would agree with publisher/blogger Chip Smith, who observes in his blog: 

Dutton only mention[s] trisomy 21 (Down syndrome) in passing, but I think that’s a pretty solid place to start if you want to establish the baseline premise that at least some mental traits can be accurately inferred from external appearances.” 

Thus, the specific ‘look’ associated with Down Syndrome is a useful counterexample to cite to anyone who dismisses a priori the idea of physiognomy, and the existence of any association between looks and ability or behaviour. 

Indeed, other developmental disorders and chromosomal abnormalities, not mentioned by Dutton, are also associated with a specific ‘look’ – for example, Williams Syndrome, the distinctive appearance, and personality, associated with which have even been posited as the basis for the elf figure in folklore.[26]

Less obviously, it has even been suggested that there are also subtle facial features that distinguish autistic children from neurotypical children, and which also distinguish boys with relatively more severe forms of autism from those who are likely to be diagnosed as higher functioning (Aldridge et al 2011; Ozgen et al 2011). 

However, Dutton neglects to mention that there is in fact a sizable literature regarding the association between so-called minor physical anomalies (aka MPAs) and several psychiatric conditions including autism (Ozgen et al 2008), schizophrenia (Weinberg et al 2007; Xu et al 2011) and paedophilia (Dyshniku et al 2015). 

MPAs have also been identified in several studies as a correlate of criminal behaviour (Kandel et al 1989; see also Criminology: A Global Perspective: p70-1). 

Yet these MPAs are often the very same traits – the single transverse palmar crease, sandal toe gap and fissured tongue – that are also used to diagnose Down Syndrome in neonates.

The Morality of Making Judgements

But is it not superficial to judge a book by its cover? And, likewise, by extension, isn’t it morally wrong to judge people by their appearance? 

Indeed, worse still, isn’t it also racist? 

After all, skin colour is obviously a part of our appearance, and did not our Lord and Saviour, Dr Martin Luther King, himself advocate for a world in which people would “not be judged by the color of their skin but by the content of their character”? 

Here, Dutton turns from science to morality, and convincingly contends that, at least in certain circumstances, it is indeed morally acceptable to judge people by appearances. 

It is true, he acknowledges, that most of the correlations that he has uncovered or reported are modest in magnitude. However, he is at pains to emphasize, the same is true of almost all correlations that are found throughout psychology and the social sciences. Thus, he exhorts: 

Let us be consistent. It is very common in psychology to find a correlation between, for example, a certain behaviour and accidents (or health) of 0.15 or 0.2 and thus argue that action should be taken based on the results. These sizes are considered large enough to be meaningful and even for policy to be changed” (p82). 

However, Dutton also includes a few sensible precautions and caveats to be borne in mind by those readers who might be tempted to apply some of his ideas overenthusiastically. 

First, he warns against making inferences regarding “people from a racial group with which you have relatively limited contact”, where the same cues used with respect to your own group may be inapplicable, or must be applied relative to the averages for that group, something we may not be adept at doing (p82-3). 

Thus, to give an obvious example, among Caucasians, epicanthic folds (i.e. so-called ‘slanted’ eyes) may be indicative of a developmental disorder such as Down syndrome. However, among East Asians, Southeast Asians and some other racial groups (notably the Khoisan of Southern Africa), such folds are entirely normal and not indicative of any pathology. 

He also cautions regarding people’s ability to disguise their appearance, whether by makeup or plastic surgery. However, he also notes that the tendency to wear excessive makeup, or undergo cosmetic surgery, is itself indicative of a certain personality type, and indeed often, Dutton asserts, of psychopathology (p84-5). 

Using physical appearance to make assessments is particularly useful, Dutton observes, “in extreme situations when a quick decision must be made” (p80). 

Thus, to take a deliberately extreme reductio ad absurdum, if we see someone stabbing another person, and this first person then approaches us in an aggressive manner brandishing the knife, then, if we take evasive action, we are, strictly speaking, judging by appearances. The person appears as if they are going to stab us, so we assume they are and act accordingly. However, no one would judge us morally wrong for so doing. 

However, in circumstances where we have access to greater individualizing information, the importance of appearances becomes correspondingly smaller. Here, a Bayesian approach is useful: appearance supplies, at most, a prior, which should be progressively updated, and largely superseded, as more individualizing information about the person becomes available. 

In 2013, evolutionary psychologist Geoffrey Miller caused predictable outrage and hysteria when he tweeted

Dear obese PhD applicants: if you didn’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation #truth.” 

According to Dutton, as we have seen above, willpower is indeed likely correlated with obesity, because, as Miller argues, people lacking in willpower also likely lack the willpower to diet. 

However, a PhD supervisor surely has access to far more reliable information regarding a person’s personality and intelligence, including their conscientiousness and willpower, in the form of their application and CV, than is obtainable from their physique alone. 

Thus, the outrage that this tweet provoked, though indeed excessive and a reflection of the intolerant climate of so-called ‘cancel culture’ and public shaming in the contemporary west, was not entirely unwarranted. 

Similarly, if geneticist James Watson did indeed say, as he was rather hilariously reported as having said, that “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them”, he was indeed being prejudiced, because, again, an employer has access to more reliable information regarding applicants than their physique, namely, again, their application and CV. 

Obesity may often—perhaps even usually—be indicative of low levels of conscientiousness, willpower and intelligence. But it is not always so indicative. It may, as Dutton himself points out, reflect only high extraversion, or indeed an unusual medical condition. 

However, even at job interviews, employers do still, in practice, judge people partly by their appearance. Moreover, we often regard them as well within their rights to do so. 

This is, of course, why we advise applicants to dress smartly for their interviews.


[1] If ‘How to Judge People by What They Look Like’ is indeed a very short book, then it must be conceded that this is, by comparison, a rather long and detailed book review. While, as will become clear in the remainder of this review, I have many points of disagreement with Dutton (as well as many points of agreement) and there are many areas where I feel he is mistaken, nevertheless the length of this book review is, in itself, testament to the amount of thinking that Dutton’s short pamphlet has inspired in this reader. 

[2] In addition, I suspect few of the researchers whose work Dutton cites ever even regarded themselves as working within, or somehow reviving, the field of physiognomy. On the contrary, despite researching and indeed demonstrating robust associations between morphology and behavior, this idea may never even have occurred to them.
Thus, for example, I was already familiar with some of this literature even before reading Dutton’s book, but it never occurred to me that what I was reading was a burgeoning literature in a revived science of physiognomy. Indeed, despite being familiar with much of this literature, I suspect that, if questioned directly on the matter, I may well have agreed with the general consensus that physiognomy was a discredited pseudoscience.
Thus, one of the chief accomplishments of Dutton’s book is simply to establish that this body of research does indeed represent a revived science of physiognomy, and should be recognized and described as such, even if the researchers themselves rarely if ever use the term.

[3] Instead, it would surely uncover mostly papers in the field of ‘history of science’, documenting the history of physiognomy as a supposedly discredited pseudoscience, along with such other real and supposed pseudosciences as phrenology and eugenics.

[4] The studies mentioned in the two paragraphs that precede this endnote are simply a few that I happen to have stumbled across that are relevant to Dutton’s theme and which I happen to have been able to recall. No doubt, any list of relevant studies that I could compile would be just as incomplete as Dutton’s, and my own list would be longer than his only because I have the advantage of having read his book beforehand.

[5] Thus, a young person dressed as a hippy in the 60s and 70s was more likely to subscribe to certain (usually rather silly and half-baked) political beliefs, and also more likely to engage in recreational drug-use and live on a commune, while a young man dressed as a teddy boy in Britain in the 1950s, a skinhead in the 1970s and 80s, a football casual in the 1990s, or indeed a chav today, may be perceived as more likely to be involved in violent crime and thuggery. The goth subculture also seems to be associated with a certain personality type, and also with self-harm and suicide.

[6] The association between IQ and socioeconomic status is reviewed in The Bell Curve: Intelligence and Class Structure in American Life (which I have reviewed here). The association between conscientiousness and socioeconomic status is weaker, probably because personality tests are a less reliable measure of conscientiousness than IQ tests are of IQ, since the former rely on self-report. This is the equivalent of an IQ test that, instead of asking test-takers to solve logical puzzles, simply asked them how good they perceived themselves to be at solving logical puzzles. Nevertheless, conscientiousness, as measured in personality tests, does indeed correlate with earnings and career advancement, albeit less strongly than does IQ (Spurk & Abele 2011; Wiersma & Kappe 2016).

[7] If some fat people are low in conscientiousness and intelligence, and others merely high in extraversion, there may, I suspect, also be a third category of people who do have self-control and self-discipline, but simply do not much care about whether they are fat or thin. However, given both the social stigma and health implications of obesity, this group is, I suspect, small. It is also likely young, since the health dangers of obesity increase with age, and male, since both the social stigma of fatness, and especially its negative impact on mate value and attractiveness, seem to be greater for females. 

[8] Actually, whether ‘roid rage’ is a real thing is a matter of some dispute. Although users of anabolic steroids do indeed have higher rates of violent crime, it has been suggested that this may be at least in part because the type of people who choose to use steroids are precisely those already prone to violence. In other words, there is a problem of self-selection bias.
Moreover, the association between testosterone and aggressive behaviours is more complex than this simple analysis assumes. One leading researcher in the field, Allan Mazur, argues that testosterone is not associated with aggression or violence per se, but only with dominance behaviours, which only sometimes manifest themselves through violent aggression. Thus, for example, a leading politician, business tycoon or chief executive of a large company may have high testosterone and be able to exercise dominance without resort to violence. However, a prisoner, being of low status in the legitimate world, is likely only able to assert dominance through violence (see Mazur & Booth 1998; Mazur 2009).

[9] Here, however, it is important to distinguish between the so-called organizing and ‘activating’ effects of testosterone. The latter can be equated with levels of circulating testosterone at any given time. The former, however, involves androgen levels at certain key points during development, especially in utero (i.e. in the womb) and during puberty, which thenceforth have long-term effects on both morphology and behaviour (and a person’s degree of susceptibility to circulating androgens).
Facial bone structure is presumably largely an effect of the ‘organizing’ effects of testosterone during development, though jaw shape is also affected by the size of the jaw muscles, which can be increased, it has been claimed, by regularly chewing gum. Bodily muscularity, on the other hand, is affected both by levels of circulating testosterone (hence the effects of anabolic steroids on muscle growth) and by levels of testosterone during development, not least because high levels of androgens during development increase the number and sensitivity of androgen receptors, which affect the potential for muscular growth.

[10] In this section, I have somewhat conflated spatial ability, mathematical ability and autism traits. However, these are themselves, of course, not the same, though each is probably associated with the others, albeit again not necessarily in a linear relationship.

[11] I have been unable to discover any evidence for this supposed association between lack of balding and impotence in men. On the contrary, googling the terms ‘male pattern baldness’ and ‘impotence’ finds only a few results, mostly people speculating whether there is a positive correlation between balding and impotence in males, if only on the very unpersuasive ground that the two conditions tend to have a similar age of onset (i.e. around middle-age).

[12] In contrast, the shaven-head skinhead-look, or close-cropped military-style induction cut, buzz cut or high and tight is, of course, perceived as a quintessentially masculine, and even thuggish, hairstyle. This is perhaps because, in addition to contrasting with the long hair typically favoured by females, it also, by reducing the size of the upper part of the head, makes the lower part of the face (e.g. the jaw) and the body appear comparatively larger, and large jaws are a masculine trait. Thus, Nancy Etcoff observes:

The absence of hair on the head serves to exaggerate signals of strength. The smaller the head the bigger the look of the neck and body. Bodybuilders often shave or crop their hair, the size contrast between the head and neck and shoulders emphasizing the massiveness of the chest” (Survival of the Prettiest: p126).

[13] The source that Dutton cites for this claim is (Nieschlag & Behr 2013).

[14] In America, it has been suggested, especially tall boys are not treated with testosterone to prevent their growing any taller. Instead, they are encouraged to attempt to make a successful career in professional basketball.

[15] On the other hand, one Swedish study investigating the association between height and violent crime found that the shortest men in Sweden had almost double the rate of convictions for violent crime compared to the tallest men. However, after controlling for potential confounds (e.g. socioeconomic status and intelligence, both of which positively correlate with height), the association was reversed, with taller men having a somewhat higher likelihood of being convicted of a violent crime (Beckley et al 2014). 

[16] According to Dutton, the correlation between height and IQ is only about r = 0.1. This is a modest correlation even by psychology and social science standards.

[17] In other words, although modest in magnitude, the association between height and IQ has been replicated in so many studies with sufficiently large and representative sample sizes that we can be certain that it represents a real association in the population at large, not an artifact of small, unrepresentative or biased sampling in just one or a few studies. 

[18] An alternative explanation for the absence of a within-family correlation between height and intelligence is that some factor that differs as between families causes both increased height and increased intelligence. An obvious candidate would be malnutrition. However, in modern western economies where there is a superabundance of food, starvation is almost unknown and obesity is far more common than undernourishment even among the ostensible poor (indeed, as noted by Dutton, especially among the ostensible poor), it is doubtful that undernourishment is a significant factor in explaining either small stature or low IQs, especially since height is mostly heritable, at least by the time a person reaches adulthood.

[19] The conventional wisdom is that beards went out of fashion during the twentieth century precisely because their role in spreading germs came to be more widely known. Thus, Nancy Etcoff writes:

Facial hair has been less abundant in this century than in centuries past (except in the 1960s) partly because medical opinion turned against them. As people became increasingly aware of the role of germs in spreading diseases, beards came to be seen as repositories of germs. Previously, they had been advised by doctors as a means to protect the throat and filter air to the lungs” (Survival of the Prettiest: p156-7). 

Of course, this is not at all inconsistent with the notion that beards are perceived as attractive by women precisely because they represent a potential vector of infection and hence advertise the health and robustness of the male whom they adorn, as contended by Dutton. On the contrary, the fact that beards are indeed associated with infection is consistent with and supportive of Dutton’s theory. 

[20] It would be interesting to discover whether these findings generalize to other, non-western cultures, especially those where beards are universal or the norm (e.g. among Muslims in the Middle East). It would also be interesting to discover whether women’s perceptions regarding the attractiveness of men with beards have changed as beards have gone in and out of fashion. 

[21] Perhaps this is because, although age is still associated with status, it is no longer as socially acceptable for older men to marry, or enter sexual relationships with, much younger women or girls as it was in the past, and such relationships are now less common. Indeed, in the last few years, this has become especially socially unacceptable. Therefore, given that most men are maximally attracted to females in this younger age category, they prefer to be thought of as younger so that it is more acceptable for them to seek relationships with younger, more attractive females.
Actually, while older men tend to have higher status on average, I suspect that, after controlling for status, it is younger men who would be perceived as more attractive. Certainly, a young multi-millionaire would surely be considered a more eligible bachelor than an older homeless man. Therefore, age per se is not attractive; only high status is attractive, which happens to correlate with age.

[22] This idea is again based on Philippe Rushton’s Differential K theory, which I have reviewed here and here.

[23] Dutton is apparently aware of this objection. He acknowledges, albeit in a different book, that “Intelligence, in general, is associated with health” (Why Islam Makes You Stupid: p174). However, in this same book, he also claims that: 

Intelligence has been shown to be only weakly associated with mutational load” (Why Islam Makes You Stupid: p169). 

Interestingly, Dutton also claims in this book: 

Very high intelligence predicts autism” (Why Islam Makes You Stupid: p175). 

This claim, namely that exceptionally high intelligence is associated with autism, seems anecdotally plausible. Certainly, autism seems to have a complex and interesting relationship with intelligence.
Unfortunately, however, Dutton does not cite a source for the claim that exceptionally high intelligence is associated with autism. Nevertheless, according to data cited here, there is indeed greater variance in the IQs of autistic people, with greater proportions of autistic people at both tail-ends of the bell curve; the author even refers to an inverted bell curve for intelligence among autistic people, though, even according to her own cited data, this appears to be an exaggeration. However, this is not a scholarly source, but rather appears to be the website of a not entirely disinterested advocacy group, and it is not entirely clear from where the data derives, the piece referring only to data from the Netherlands collected by the Dutch Autism Register (NAR). 

[24] Admittedly, Dutton does cite one study showing that subjects can identify Mormons from facial photographs alone, and that the two groups differed in skin quality (Rule et al 2010). However, this might reflect merely the health advantages resulting from the religiously imposed abstention from the consumption of alcohol, tobacco, tea and coffee.
For what it’s worth, my own subjective and entirely anecdotal impression is almost the opposite of Dutton’s, at least here in secular modern Britain, where anyone who identifies as Christian, let alone a fundamentalist, unless perhaps s/he is elderly, tends to be regarded as a bit odd.
An interesting four-part critique of this theory, along very different lines from my own, is provided by Scott A McGreal at the Psychology Today website, see here, here, here, and here. Dutton responds with a two-part rejoinder here and here.

[25] However, when it comes to actual politicians, I suspect this difference may be attenuated, or even nonexistent, since pursuing a career in politics is, by its very nature, a very untraditional, and unfeminine, career choice, most likely because, in Darwinian terms, political power has a greater reproductive payoff for men than for women. Thus, it is hardly surprising that leading female politicians, even those who theoretically champion traditional sex roles, tend themselves to be quite butch and masculine in appearance and often as unattractive as their leftist opponents (e.g. Ann Widdecombe). Indeed, even Ann Coulter, a relatively attractive woman, at least by the standards of female political figures, has been mocked for her supposedly mannish appearance and pronounced Adam’s apple.
Moreover, most leading politicians are at least middle-aged, and female attractiveness peaks very young, in the mid- to late-teens into the early-twenties.

[26] Another medical condition associated with a specific look, as well as with mental disability, is cretinism, though, due to medical advances, most people with the condition in western societies develop normally and no longer manifest either the distinctive appearance or the mental disability. 


Aldridge et al (2011) Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Molecular Autism 14;2(1):15. 
Apicella et al (2015) Hadza Hunter-Gatherer Men do not Have More Masculine Digit Ratios (2D:4D) American Journal of Physical Anthropology 159(2):223-32. 
Bagenjuk et al (2019) Personality Traits and Obesity, International Journal of Environmental Research and Public Health 16(15): 2675. 
Bailey & Hurd (2005) Finger length ratio (2D:4D) correlates with physical aggression in men but not in women. Biological Psychology 68(3):215-22. 
Batrinos (2014) The endocrinology of baldness. Hormones 13(2): 197–212. 
Beckley et al (2014) Association of height and violent criminality: results from a Swedish total population study. International Journal of Epidemiology 43(3):835-42 
Benderlioglu & Nelson (2005) Digit length ratios predict reactive aggression in women, but not in men Hormones and Behavior 46(5):558-64. 
Berggren et al (2017) The right look: Conservative politicians look better and voters reward it Journal of Public Economics 146:  79-86. 
Blanchard & Lyons (2010) An investigation into the relationship between digit length ratio (2D: 4D) and psychopathy, British Journal of Forensic Practice 12(2):23-31. 
Buru et al (2017) Evaluation of the hand anthropometric measurement in ADHD children and the possible clinical significance of the 2D:4D ratio, Eastern Journal of Medicine 22(4):137-142. 
Case & Paxson (2008) Stature and status: Height, ability, and labor market outcomes, Journal of Political Economy 116(3): 499–532. 
Cash (1990) Losing Hair, Losing Points?: The Effects of Male Pattern Baldness on Social Impression Formation. Journal of Applied Social Psychology 20(2):154-167. 
De Waal et al (1995) High dose testosterone therapy for reduction of final height in constitutionally tall boys: Does it influence testicular function in adulthood? Clinical Endocrinology 43(1):87-95. 
Dixson & Vasey (2012) Beards augment perceptions of men’s age, social status, and aggressiveness, but not attractiveness, Behavioral Ecology 23(3): 481–490. 
Dutton et al (2017) The Mutant Says in His Heart, “There Is No God”: the Rejection of Collective Religiosity Centred Around the Worship of Moral Gods Is Associated with High Mutational Load Evolutionary Psychological Science 4:233–244. 
Dysniku et al (2015) Minor Physical Anomalies as a Window into the Prenatal Origins of Pedophilia, Archives of Sexual Behavior 44:2151–2159. 
Elias et al (2012) Obesity, Cognitive Functioning and Dementia: Back to the Future, Journal of Alzheimer’s Disease 30(s2): S113-S125. 
Ellis & Hoskin (2015) Criminality and the 2D:4D Ratio: Testing the Prenatal Androgen Hypothesis, International Journal of Offender Therapy and Comparative Criminology 59(3):295-312 
Fossen et al (2022) 2D:4D and Self-Employment: A Preregistered Replication Study in a Large General Population Sample Entrepreneurship Theory and Practice 46(1):21-43. 
Gouchie & Kimura (1991) The relationship between testosterone levels and cognitive ability patterns Psychoneuroendocrinology 16(4): 323-334. 
Griffin et al (2012) Varsity athletes have lower 2D:4D ratios than other university students, Journal of Sports Sciences 30(2):135-8. 
Hilgard et al (2019) Null Effects of Game Violence, Game Difficulty, and 2D:4D Digit Ratio on Aggressive Behavior, Psychological Science 30(1):095679761982968 
Hollier et al (2015) Adult digit ratio (2D:4D) is not related to umbilical cord androgen or estrogen concentrations, their ratios or net bioactivity, Early Human Development 91(2):111-7 
Hönekopp & Urban (2010) A meta-analysis on 2D:4D and athletic prowess: Substantial relationships but neither hand out-predicts the other, Personality and Individual Differences 48(1):4-10. 
Hoskin & Ellis (2014) Fetal testosterone and criminality: Test of evolutionary neuroandrogenic theory, Criminology 53(1):54-73. 
Ishikawa et al (2001) Increased height and bulk in antisocial personality disorder and its subtypes. Psychiatry Research 105(3):211-219. 
Işık et al (2020) The Relationship between Second-to-Fourth Digit Ratios, Attention-Deficit/Hyperactivity Disorder Symptoms, Aggression, and Intelligence Levels in Boys with Attention-Deficit/Hyperactivity Disorder, Psychiatry Investigation 17(6):596–602. 
Janowski et al (1994) Testosterone influences spatial cognition in older men. Behavioral Neuroscience 108(2):325-32. 
Jokela et al (2012) Association of personality with the development and persistence of obesity: a meta-analysis based on individual–participant data, Etiology and Pathophysiology 14(4): 315-323. 
Kanazawa (2014) Intelligence and obesity: Which way does the causal direction go? Current Opinion in Endocrinology, Diabetes and Obesity (5):339-44. 
Kandel et al (1989) Minor physical anomalies and recidivistic adult violent criminal behavior, Acta Psychiatrica Scandinavica 79(1) 103-107. 
Kangassalo et al (2011) Prenatal Influences on Sexual Orientation: Digit Ratio (2D:4D) and Number of Older Siblings, Evolutionary Psychology 9(4):496-508 
Kerry & Murray (2019) Is Formidability Associated with Political Conservatism?  Evolutionary Psychological Science 5(2): 220–230. 
Keshavarz et al (2017) The Second to Fourth Digit Ratio in Elite and Non-Elite Greco-Roman Wrestlers, Journal of Human Kinetics 60: 145–151. 
Kleisner et al (2014) Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women. PLoS ONE 9(3): e81237. 
Kordsmeyer et al (2018) The relative importance of intra- and intersexual selection on human male sexually dimorphic traits, Evolution and Human Behavior 39(4): 424-436. 
Kosinski & Wang (2018) Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114(2):246-257. 
Kyselicová et al (2021) Autism spectrum disorder and new perspectives on the reliability of second to fourth digit ratio Developmental Psychobiology 63(6). 
Li et al (2016) The relationship between digit ratio and sexual orientation in a Chinese Yunnan Han population, Personality and Individual Differences 101:26-29. 
Lippa (2003) Are 2D:4D finger-length ratios related to sexual orientation? Yes for men, no for women, Journal of Personality & Social Psychology 85(1):179-8 
Lolli et al (2017) A comprehensive allometric analysis of 2nd digit length to 4th digit length in humans, Proceedings of the Royal Society B: Biological Sciences 284(1857):20170356 
Malamed (1992) Personality correlates of physical height. Personality and Individual Differences 13(12):1349-1350. 
Manning & Taylor (2001) Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans, Evolution & Human Behavior 22(1):61-69. 
Manning et al (2001) The 2nd to 4th digit ratio and autism, Developmental Medicine & Child Neurology 43(3):160-164. 
Martel et al (2008) Masculinized Finger-Length Ratios of Boys, but Not Girls, Are Associated With Attention-Deficit/Hyperactivity Disorder, Behavioral Neuroscience 122(2):273-81. 
Martin et al (2015) Associations between obesity and cognition in the pre-school years, Obesity 24(1) 207-214 
Mazur & Booth (1998) Testosterone and dominance in men. Behavioral and Brain Sciences, 21(3), 353–397. 
Mazur (2009) Testosterone and violence among young men. In Walsh & Beaver (eds) Biosocial Criminology: New Directions in theory and Research. New York: Routledge. 
Moffat & Hampson (1996) A curvilinear relationship between testosterone and spatial cognition in humans: Possible influence of hand preference. Psychoneuroendocrinology. 21(3):323-37. 
Murray et al (2014) How are conscientiousness and cognitive ability related to one another? A re-examination of the intelligence compensation hypothesis, Personality and Individual Differences, 70, 17–22. 
Nieschlag & Behr (2013) Testosterone Therapy. In Nieschlag & Behr (eds) Andrology: Male Reproductive Health and Dysfunction. New York: Springer. 
Ozgen et al (2010) Minor physical anomalies in autism: a meta-analysis. Molecular Psychiatry 15(3):300–7. 
Ozgen et al (2011) Morphological features in children with autism spectrum disorders: a matched case-control study. Journal of Autism and Developmental Disorders 41(1):23-31. 
Peterson & Palmer (2017) Effects of physical attractiveness on political beliefs. Politics and the Life Sciences 36(02):3-16 
Persico et al (2004) The Effect of Adolescent Experience on Labor Market Outcomes: The Case of Height, Journal of Political Economy 112(5): 1019-1053. 
Pratt et al (2016) Revisiting the criminological consequences of exposure to fetal testosterone: a meta-analysis of the 2d:4d digit ratio, Criminology 54(4):587-620. 
Price et al (2017). Is sociopolitical egalitarianism related to bodily and facial formidability in men? Evolution and Human Behavior, 38, 626-634. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans, Evolution and Human Behavior 31(3):157-175. 
Rammstedt et al (2016) The association between personality and cognitive ability: Going beyond simple effects, Journal of Research in Personality 62: 39-44. 
Rosenberg & Kagan (1987) Iris pigmentation and behavioral inhibition Developmental Psychobiology 20(4):377-92. 
Rosenberg & Kagan (1989) Physical and physiological correlates of behavioral inhibition Developmental Psychobiology 22(8):753-70. 
Rule et al (2010) On the perception of religious group membership from faces. PLoS ONE 5(12):e14241. 
Salas-Wright & Vaughn (2016) Size Matters: Are Physically Large People More Likely to be Violent? Journal of Interpersonal Violence 31(7):1274-92. 
Spurk & Abele (2011) Who Earns More and Why? A Multiple Mediation Model from Personality to Salary, Journal of Business and Psychology 26: 87–103. 
Sutin et al (2011) Personality and Obesity across the Adult Lifespan Journal of Personality and Social Psychology 101(3): 579–592. 
Valla et al (2011). The accuracy of inferences about criminality based on facial appearance. Journal of Social, Evolutionary, and Cultural Psychology, 5(1), 66-91. 
Voracek et al (2011) Digit ratio (2D:4D) and sex-role orientation: Further evidence and meta-analysis, Personality and Individual Differences 51(4): 417-422. 
Weinberg et al (2007) Minor physical anomalies in schizophrenia: A meta-analysis, Schizophrenia Research 89: 72–85. 
Wiersma & Kappe (2015) Selecting for extroversion but rewarding for conscientiousness, European Journal of Work and Organizational Psychology 26(2): 314-323. 
Williams et al (2000) Finger-Length Ratios and Sexual Orientation, Nature 404(6777):455-456. 
Xu et al (2011) Minor physical anomalies in patients with schizophrenia, unaffected first-degree relatives, and healthy controls: a meta-analysis, PLoS One 6(9):e24129. 
Xu & Zheng (2016) The Relationship Between Digit Ratio (2D:4D) and Sexual Orientation in Men from China, Archives of Sexual Behavior 45(3):735-41. 

Desmond Morris’s ‘The Naked Ape’: A Pre-Sociobiological Work of Human Ethology 

Desmond Morris, The Naked Ape: A Zoologist’s Study of the Human Animal (New York: McGraw-Hill Book Company, 1967)

First published in 1967, ‘The Naked Ape’, a popular science classic authored by the already famous British zoologist and TV presenter Desmond Morris, belongs to the pre-sociobiological tradition of human ethology.

In the most general sense, the approach adopted by the human ethologists, who included, not only Morris, but also playwright Robert Ardrey, anthropologists Lionel Tiger and Robin Fox and the brilliant Nobel-prize winning ethologist, naturalist, zoologist, pioneering evolutionary epistemologist and part-time Nazi sympathizer Konrad Lorenz, was correct. 

They sought to study the human species from the perspective of zoology. In other words, they sought to adopt the disinterested perspective, and detachment, of, as Edward O Wilson was later to put it, “zoologists from another planet” (Sociobiology: The New Synthesis: p547). 

Thus, Morris proposed cultivating: 

An attitude of humility that is becoming to proper scientific investigation… by deliberately and rather coyly approaching the human being as if he were another species, a strange form of life on the dissecting table” (p14-5).  

In short, Morris proposed to study humans just as a zoologist would any other species of non-human animal. 

Such an approach was an obvious affront to anthropocentric notions of human exceptionalism – and also a direct challenge to the rather less scientific approach of most sociologists, psychologists, social and cultural anthropologists and other such ‘professional damned fools’, who, at that time, almost all studied human behavior in isolation from, and largely ignorance of, biology, zoology, and the scientific study of the behavior of all animals other than humans. 

As a result, such books inevitably attracted controversy and criticism. Such criticism, however, invariably missed the point. 

The real problem was not that the ethologists sought to study human behavior in just the same way a zoologist would study the behavior of any nonhuman animal, but rather that the study of the behavior of nonhuman animals itself remained, at this time, very much in its infancy. 

Thus, the field of animal behavior was to be revolutionized just a decade or so after the publication of ‘The Naked Ape’ by the approach that came to be known as, first, sociobiology, now more often as behavioral ecology, or, when applied to humans, evolutionary psychology

These approaches sought to understand behavior in terms of fitness maximization – in other words, on the basis of the recognition that organisms have evolved to engage in behaviors which tended to maximize their reproductive success in ancestral environments. 

Mathematical models, often drawn from economics and game theory, were increasingly employed. In short, behavioral biology was becoming a mature science. 

In contrast, the earlier ethological tradition was, even at its best, very much a soft science. 

Indeed, much such work, for example Jane Goodall’s rightly-celebrated studies of the chimpanzees of Gombe, was almost pre-scientific in its approach, involving observation, recording and description of behaviors, but rarely the actual testing or falsification of hypotheses. 

Such research was obviously important. Indeed, Goodall’s was positively groundbreaking. 

After all, the observation of the behavior of an organism is almost a prerequisite for the framing of hypotheses about the behavior of that organism, since hypotheses are, in practice, rarely generated in an informational vacuum from pure abstract theory. 

However, such research was hardly characteristic of a mature and rigorous science. 

When hypotheses regarding the evolutionary significance of behavior patterns were formulated by early ethologists, this was done on a rather casual ad hoc basis, involving a kind of ‘armchair adaptationism’, which could perhaps legitimately be dismissed as the spinning of, in Stephen Jay Gould’s famous phrase, just so stories

Thus, a crude group selectionism went largely unchallenged. Yet, as George C Williams was to show, and Richard Dawkins later to forcefully reiterate in The Selfish Gene (reviewed here), behaviors are unlikely to evolve that benefit the group or species if they involve a cost to the inclusive fitness of the individual engaging in the behavior. 

Robert Wright picks out a good example of this crude group selectionism from ‘The Naked Ape’ itself, quoting Morris’s claim that, over the course of human evolution: 

To begin with, the males had to be sure that their females were going to be faithful to them when they left them alone to go hunting. So the females had to develop a pairing tendency” (p64). 

To anyone schooled in the rudiments of Dawkinsian selfish gene theory, the fallacy should be obvious. But, just in case we didn’t spot it, Wright has picked it out for us: 

Stop right there. It was in the reproductive interests of the males for the females to develop a tendency toward fidelity? So natural selection obliged the males by making the necessary changes in the females? Morris never got around to explaining how, exactly, natural selection would perform this generous feat” (The Moral Animal: p56). 

In reality, couples have a conflict of interest here, and the onus is clearly on the male to evolve some mechanism of mate-guarding, though a female might conceivably evolve some way to advertise her fidelity if, by so doing, she secured increased male parental investment and provisioning, hence increasing her own reproductive success.[1]

In short, mating is Machiavellian. A more realistic view of human sexuality, rooted in selfish gene theory, is provided by Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here). 

Unsuccessful Societies? 

The problems with ‘The Naked Ape’ begin in the very first chapter, where Morris announces, rather oddly, that, in studying the human animal, he is largely uninterested in the behavior of contemporary foraging groups or other so-called ‘primitive’ peoples. Thus, he bemoans: 

“The earlier anthropologists rushed off to all kinds of unlikely corners of the world… scattering to remote cultural backwaters so atypical and unsuccessful that they are nearly extinct. They then returned with startling facts about the bizarre mating customs, strange kinship systems, or weird ritual procedures of these tribes, and used this material as though it were of central importance to the behaviour of our species as a whole. The work done by these investigators… did not tell us anything about the typical behaviour of typical naked apes. This can only be done by examining the common behaviour patterns that are shared by all the ordinary, successful members of the major cultures – the mainstream specimens who together represent the vast majority. Biologically, this is the only sound approach” (p10).[2]

Today, political correctness has wholly banished the word ‘primitive’ from the anthropological lexicon. It is, modern anthropologists insist, demeaning and pejorative.

Indeed, post-Boasian cultural anthropologists in America typically reject the very notion that some societies are more advanced than others, championing instead a radical cultural relativism and insisting we have much to learn from the lifestyle and traditions of hunter-gatherers, foragers, savage cannibals and other such ‘indigenous peoples’. 

Morris also rejects the term ‘primitive’ as a useful descriptor for hunter-gatherer and other technologically-backward peoples, but for diametrically opposite reasons. 

Thus, for Morris, to describe foraging groups as ‘primitive’ is rather to give them altogether too much credit:

“The simple tribal groups that are living today are not primitive, they are stultified. Truly primitive tribes have not existed for thousands of years. The naked ape is essentially an exploratory species and any society that has failed to advance has in some sense failed, ‘gone wrong’. Something has happened to it to hold it back, something that is working against the natural tendencies of the species to explore and investigate the world around it” (p10).

Instead, Morris proposes to focus on contemporary western societies, declaring: 

“North America… is biologically a very large and successful culture and can, without undue fear of distortion, be taken as representative of the modern naked ape” (p51).

It is indeed true that, with the diffusion of American media and consumer goods, American culture is fast becoming ubiquitous. However, this is a very recent development in historical terms, let alone on the evolutionary timescale of most interest to biologists. 

Indeed, viewed historically and cross-culturally, it is we westerners who are the odd, aberrant ones. 

Thus, we have even been termed, in a memorable backronym, WEIRD (Western, Educated, Industrialized, Rich and Democratic), and hence quite aberrant, not only in terms of our lifestyle and prosperity, but also in terms of our psychology and modes of thinking.

Moreover, while foraging groups, and other pre-modern peoples, may now indeed be tottering on the brink of extinction, this again is a very recent development.

Indeed, far from being aberrant, this was the lifestyle adopted by all humans throughout most of the time we have existed as a species, including during the period when most of our unique physical and behavioural adaptations evolved.

In short, although we may inhabit western cities today, this is not the environment where we evolved, nor that to which our brains and bodies are primarily adapted.[3]

Therefore, given that they represent the lifestyle of our ancestors during the period when most of our behavioral and bodily adaptations evolved, primitive peoples must necessarily have a special place in any evolutionary theory of human behaviour.[4]

Indeed, Morris himself admits as much just a few pages later, where he acknowledges that:

“The fundamental patterns of behavior laid down in our early days as hunting apes still shine through all our affairs, no matter how lofty they may be” (p40).

Indeed, a major theme of ‘The Naked Ape’ is the extent to which the behaviour even of wealthy white westerners is nevertheless fundamentally shaped and dictated by patterns laid down in our ancient hunter-gatherer past.

This, of course, anticipates the concept of the environment of evolutionary adaptedness (or EEA) in modern evolutionary psychology.

Thus, Morris suggests that the pattern of men going out to work to financially provision wives and mothers who stay home with dependent offspring reflects the ancient role of men as hunters provisioning their wives and children: 

“Behind the façade of modern city life there is the same old naked ape. Only the names have been changed: for ‘hunting’ read ‘working’, for ‘hunting grounds’ read ‘place of business’, for ‘home base’ read ‘house’, for ‘pair-bond’ read ‘marriage’, for ‘mate’ read ‘wife’, and so on” (p84).[5]

In short, while we must explain the behaviors of contemporary westerners, no less than those of primitive foragers, in the light of Darwinian evolution, all such behaviors must ultimately be explained in terms of adaptations that evolved over previous generations under very different conditions.

Indeed, in the sequel to ‘The Naked Ape’, The Human Zoo, Morris develops this very point further, arguing that modern cities, in particular, are unnatural environments for humans, and rejecting the then-familiar description of cities as concrete jungles on the grounds that, whereas jungles are the “natural habitat” of animals, modern cities are very much an unnatural habitat for humans.

Instead, he argues, the better analogy for modern cities is a ‘Human Zoo’.

“The comparison we must make is not between the city dweller and the wild animal but between the city dweller and the captive animal. The city dweller is no longer living in conditions natural for his species. Trapped, not by a zoo collector, but by his own brainy brilliance, he has set himself up in a huge restless menagerie where he is in constant danger of cracking under the strain” (The Human Zoo: pvii).


Morris adopts what he calls a zoological approach. Thus, unlike modern evolutionary psychologists, he focuses as much on explaining our physiology as our behavior and psychology. Indeed, it is in explaining the peculiarities of human anatomy that Morris’s book is at its best.[6]

This begins, appropriately enough, with the trait that gives him his preferred name for our species, and also furnishes his book with its title – namely our apparent nakedness or hairlessness. 

Having justified calling us ‘The Naked Ape’ on zoological grounds, namely that this is the first thing a naturalist would notice upon observing our species, Morris then comes close to contradicting himself, admitting that we actually have more hairs on our bodies than do chimpanzees.[7]

However, Morris summarily dispatches this objection: 

“It is like saying that because a blind man has a pair of eyes, he is not blind. Functionally, we are stark naked and our skin is fully exposed” (p42).

Why then are we so strangely hairless? Neoteny, Morris proposes, provides part of the answer. 

This refers to the tendency of humans to retain into maturity traits that are, in other primates, restricted to juveniles, nakedness among them. 

Neoteny is a major theme in Morris’s book – and indeed in human evolution.

Besides our hairlessness, other human anatomical features that have been explained either partly or wholly in terms of neoteny, whether by Morris or by other evolutionists, include our brain size, growth patterns, inventiveness, upright posture, spinal curvature, smaller jaws and teeth, forward-facing vaginas, lack of a penis bone, the length of our limbs and the retention of the hymen into sexual maturity (see below). Indeed, many of these traits are explicitly discussed by Morris himself as resulting from neoteny.

However, while neoteny may supply the means by which our relative hairlessness evolved, it is not a sufficient explanation for why this development occurred, because, as Morris points out: 

“The process of neoteny is one of the differential retarding of developmental processes” (p43).

In other words, humans are neotenous in respect of only some of our characters, not all of them. After all, an ape that remained infantile in all respects would never evolve, for the simple reason that it would never reach sexual maturity and hence remain unable to reproduce. 

Instead, only certain specific juvenile or infantile traits are retained into adulthood, and the question then becomes why these specific traits were the ones chosen by natural selection to be retained. 

Thus, Morris concludes: 

“It is hardly likely… that an infantile trait as potentially dangerous as nakedness was going to be allowed to persist simply because other changes were slowing down unless it had some special value to the new species” (p43).

As to what this “special value” (i.e. selective advantage) might have been, Morris considers, in turn, various candidates.  

One theory considered by Morris relates to our susceptibility to insect parasites.

Because humans, unlike many other primates, return to a home base to sleep most nights, we are, Morris reports, afflicted with fleas as well as lice (p28-9). Yet fur, Morris observes, is a good breeding ground for such parasites (p38-9). 

Perhaps, then, Morris imagines, we might have evolved hairlessness in order to minimize the problems posed by such parasites. 

However, Morris rejects this as an adequate explanation, since, he observes: 

“Few other den dwelling mammals… have taken this step” (p43).

An alternative explanation implicates sexual selection in the evolution of human hairlessness.  

Substantial sex differences in hairiness, as well as the retention of pubic hairs around the genitalia, suggests that sexual selection may indeed have played a role in the evolution of our relative hairlessness as compared to other mammals. 

Morris, however, rejects this explanation on the grounds that: 

“The loss of bodily insulation would be a high price to pay for a sexy appearance alone” (p46).

But other species often pay a high price for sexually selected bodily adornments. For example, the peacock sports a huge, brightly coloured and elaborate tail, thought to have evolved through sexual selection (female choice), which is costly to grow and maintain, impedes his mobility and renders him conspicuous to predators.

Indeed, according to Amotz Zahavi’s handicap principle, it is precisely the high cost of such sexually-selected adornments that made them reliable fitness indicators and hence attractive to potential mates, because only a highly ‘fit’ male can afford to grow such a costly, inconvenient and otherwise useless appendage. 

Morris also gives unusually respectful consideration to the highly-controversial aquatic ape theory as an explanation for human hairlessness. 

Thus, if humans did indeed pass through an aquatic, or at least amphibious, stage during our evolution, then, Morris agrees, this may indeed explain our hairlessness, since it is indeed true that other aquatic or semiaquatic mammals, such as whales, dolphins and seals, also seem to have jettisoned most of their fur over the course of their evolution. 

This is presumably because fur increases frictional drag while in the water and hence impedes swimming ability, and is among the reasons that elite swimmers also remove their body-hair before competition. 

Indeed, our loss of body hair is among the human anatomical peculiarities that are most often cited by champions of aquatic ape theory in favor of the theory that humans did indeed pass through an aquatic phase during our evolution. 

However, aquatic ape theory is highly controversial, and is rejected by almost all mainstream evolutionists and biological anthropologists.  

As I have said, Morris, for his part, gives respectful consideration to the theory, and, unlike many other anthropologists and evolutionists, does not dismiss it out of hand as entirely preposterous and unworthy even of further consideration.[8]

On the contrary, Morris credits the theory as “ingenious”, acknowledging that, if true, it might explain many otherwise odd features of human anatomy, including not just our relative hairlessness, but also the retention of hairs on our head, the direction of the hairs on our backs, our upright posture, ‘streamlined’ bodies, dexterity of our hands and the thick extra layer of sub-cutaneous fat beneath our skin that is lacking in other primates. 

However, while acknowledging that the theory explains many curious anomalies of human physiology, Morris ultimately rejects ‘aquatic ape theory’ as altogether too speculative given the complete lack of fossil evidence in support of the theory – the same reason that most other evolutionists also reject the theory. 

Thus, he concludes: 

“It demands… the acceptance of a hypothetical major evolutionary phase for which there is no direct evidence” (p45-6).

Morris also rejects the theory that was, according to Morris himself, the most widely accepted explanation for our hairlessness among other evolutionists at the time he was writing – namely the theory that our hairlessness evolved as a cooling mechanism when our ancestors left the shaded forests for the open African savannah

The problem with this theory, as Morris explains it, is that:  

“Exposure of the naked skin to the air certainly increases the chances of heat loss, but it also increases heat gain at the same time and risks damage from the sun’s rays” (p47).

Thus, it is not at all clear that moving into the open savannah would indeed select for hairlessness. Otherwise, as Morris points out, we might expect other carnivorous, predatory mammals such as lions and jackals, who also inhabit the savannah, to have similarly jettisoned most of their fur. 

Ultimately, however, Morris accepts instead a variant on this idea – namely that hairlessness evolved to prevent overheating while chasing prey when hunting. 

However, this fails to explain why it is men’s bodies that are generally much hairier than those of women, even though, cross-culturally, in most foraging societies, it is men who do most, if not all, of the hunting. 

It also raises the question as to why other mammalian carnivores that inhabit the African savannah and similar environments, such as lions and jackals, have not similarly shed their body hair. This is all the more puzzling given that such predators rely more heavily on speed to catch their prey, whereas humans, armed with arrows and javelins as well as hunting dogs, do not always have to outrun a prey animal in order to kill it.

I would tentatively venture an alternative theory, one which evidently did not occur to Morris – namely, perhaps our hairlessness evolved in concert with our invention and use of clothing (e.g. animal hides) – i.e. a case of gene-culture coevolution

Clothing would provide an alternative means of protection from sun and cold alike, but one that has the advantage that, unlike bodily fur, it can be discarded (and put back on) on demand.

This explanation suggests that, paradoxically, we became naked apes at the same time, and indeed precisely because, we had also become clothed apes. 

The Sexiest Primate? 

One factor said to have contributed to the book’s commercial success was the extent to which its thesis chimed with the prevailing spirit of the age during which it was first published, namely the 1960s. 

Thus, as already alluded to, it presented, in many ways, an idealized and romantic version of human nature, with its crude group-selectionism and emphasis on cooperation within groups without a concomitant emphasis on conflict between groups, and its depiction of humans as a naturally monogamous pair-bonding species, without a concomitant emphasis on the prevalence of infidelity, desertion, polygamy, Machiavellian mating strategies and even rape.  

Another element that jibed with the zeitgeist of the sixties was Morris’s emphasis on human sexuality, with Morris famously declaring: 

“The naked ape is the sexiest primate alive” (p64).

Are humans indeed the ‘sexiest’ of primates? How can we assess this claim? It depends, of course, on precisely how we define ‘sexiness’. 

Obviously, if beauty is in the eye of the beholder, then sexiness is located in a rather different part of the male anatomy, but equally subjective in nature. 

Thus, humans like ourselves find other humans sexier than other primates because we have evolved to do so. A male chimpanzee, however, would likely disagree, and regard a female chimpanzee as sexier.

However, Morris presumably has something else in mind when he describes humans as the “sexiest” of primates. 

What he seems to mean is that sexuality and sexual behavior permeates the life of humans to a greater degree than for other primates. Thus, for example, he cites as evidence the extended or continuous sexual receptivity of human females, writing: 

“There is much more intense sexual activity in our own species than in any other primates” (p56).

However, this claim is difficult to maintain once one has studied the behavior of some of our primate cousins. Thus, for example, both chimpanzees and especially bonobos, our closest relatives among extant non-human primates, are far more promiscuous than all but the sluttiest of humans.

Indeed, one might cynically suggest that what Morris had most in mind when he described humans as “the sexiest primate alive” was simply a catchy marketing soundbite that very much tapped into the zeitgeist of the era (i.e. the 1960s) and might help boost sales for his book. 

Penis Size

As further evidence for our species’ alleged “sexiness”, Morris also cites the supposedly unusually large size of the human penis, reporting:

“The [human] male has the largest penis of any primate. It is not only extremely long when fully erect, but also very thick when compared with the penises of other species” (p80).

This claim, namely that the human male has an unusually large penis, may originate with Morris, and has certainly since enjoyed wide currency in subsequent decades. 

Thus, competing theories have been formulated to account for the (supposedly) unusual size of our penes.

One idea is that our large penes evolved through sexual selection, more specifically female choice, with females preferring either the appearance, or the internal ‘feel’ of a large penis during coitus, and hence selecting for increased penis size among men (e.g. Mautz et al 2013; The Mating Mind: p234-6).

This idea dovetails neatly with Richard Dawkins’ tentative suggestion in an endnote appended to later editions of The Selfish Gene (reviewed here) that the capacity to maintain an erection (presumably especially a large erection) without a penis bone may function as an honest signal of health in accordance with Zahavi’s handicap principle, an idea I have previously discussed here (The Selfish Gene: p307-8).

Another suggestion implicates sperm competition. On this view, human penes are designed to remove sperm deposited by rival males in the female reproductive tract (Human Sperm Competition: p170-171; Gallup & Burch 2004; Gallup et al 2004; Goetz et al 2005; Goetz et al 2007). 

Yet, according to Alan F Dixson, the human penis is, in fact, not unusually long by primate standards, being roughly the same length as that of the chimpanzee (Sexual Selection and the Origins of Human Mating Systems: p64). 

Instead, Dixson reports: 

“The erect human penis is comparable in length to those of other primates, in relation to body size. Only its circumference is unusual when compared to the penes of other hominids” (Sexual Selection and the Origins of Human Mating Systems: p65).

The human penis is unusual, then, only in its width or girth. 

As to why our penes are so wide, the answer is quite straightforward, and has little to do with the alleged ‘sexiness’ of the human species, whatever that means. 

Instead, it is a simple, if indirect, reflection of our increased brain-size.

Increased brain-size first selected for changes in the size and shape of female reproductive anatomy. This, in turn, led to changes in male reproductive anatomy. Thus, Bowman suggests:

“As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008).

Similarly, Robin Baker and Mark Bellis write: 

“The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: Copulation, Masturbation and Infidelity: p171).

In turn, larger vaginas select for larger penises in order to fill this larger vagina (Bowman 2008).  

Interestingly, this theory directly contradicts the alleged claim of infamous race scientist Philippe Rushton (whose work I have reviewed here and here) that there is an inverse correlation between brain-size and penis-size, a relationship that supposedly explains race differences in brain and genital size. Thus, Rushton was infamously quoted as observing:

“It’s a trade off, more brains or more penis. You can’t have everything.”[9]

On the contrary, this analysis suggests that, at least as between species (and presumably as between sub-species, i.e. races, as well), there is a positive correlation between brain-size and penis-size.[10]

According to Baker and Bellis, one reason male penis size tracks female vagina size (both being relatively large, and especially wide, in humans) is that the penis functions as, in their words, a “suction piston” during intercourse, the repeated thrusting serving to remove any sperm previously deposited by rival males – a form of sperm competition.

Thus, they report:

“In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171).

Interestingly, this theory – namely that the human penis functions as a sperm displacement device – although seemingly fanciful, actually explains some otherwise puzzling aspects of human coitus, such as its relatively extended duration, the male refractory period and the related Coolidge effect – i.e. why a male cannot recommence intercourse immediately after orgasm, unless perhaps with a new female (though this exception has yet to be experimentally demonstrated in humans), since to do so would maladaptively remove his own sperm from the female reproductive tract.

Though seemingly fanciful, this theory even has some empirical support (Gallup & Burch 2004; Goetz et al 2005; Goetz et al 2007), including some delightful experiments involving sex toys of various shapes and sizes (Gallup et al 2004). 

Morris writes:

“[Man] is proud that he has the biggest brain of all the primates, but attempts to conceal the fact that he also has the biggest penis, preferring to accord this honor falsely to the mighty gorilla” (p9). 

Actually, the gorilla, mighty though he indeed may be, has relatively small genitalia. This is on account of his polygynous, but non-polyandrous, mating system, which involves minimal sperm competition.[11]

Moreover, the largeness of our brains, in which, according to Morris, we take such pride, may actually be the cause of the largeness of our penes, for which, according to Morris, we have such shame (here, he speaks for few men). 

Thus, large brains required larger heads which, in turn, required larger vaginas in order to successfully birth larger-headed babies. This in turn selected for larger penises to fill the larger vagina. 

In short, the large size, or rather large girth/width, of our penes has less to do with our being the “sexiest primate” and more to do with our being the brainiest

Female Breasts

In addition to his discussion of human penis size, Morris also argues that various other features of human anatomy not usually associated with sex nevertheless evolved, in part, due to their role in sexual signaling. These include our earlobes (p66-7), everted lips (p68-70) and, tentatively and rather bizarrely, perhaps even our large fleshy noses (p67).

He makes the most developed and persuasive case, however, in respect of another physiological peculiarity of the human species, and of human females in particular, namely the female breasts.

Thus, Morris argues: 

“For our species, breast design is primarily sexual rather than maternal in function” (p106).

“The evolution of protruding breasts of a characteristic shape appears to be yet another example of sexual signalling” (p70).

As evidence, he cites the differences in shape between women’s breasts and both the breasts of other primates and the design of baby bottles (p93). In short, the shape of human breasts does not seem ideally conducive to nursing alone.

The notion that breasts have a secondary function as sexual advertisements is indeed compelling. In most other mammals, large breasts develop only during pregnancy, but human breasts are permanent, developing at puberty, and, except during pregnancy and lactation, composed predominantly of fat, not milk (see Møller et al 1995; Manning et al 1997; Havlíček et al 2016).

On the other hand, it is difficult to envisage how breasts ever first became co-opted as a sexually-selected ornament. 

After all, the presence of developed breasts on a female would originally, as among other primates, have indicated that the female in question was pregnant, and hence infertile. There would therefore initially have been strong selection pressure among males against ever finding breasts sexually attractive, since it would lead to their pursuing infertile women whom they could not possibly impregnate. 

How then did breasts ever make the switch to a sexually attractive, sexually-selected ornament? This is what George Francis, at his blog, ‘Anglo Reaction’, terms the ‘breast paradox’.[12]

Morris does not address this not insignificant problem. However, he does suggest that two other human traits unique among primates may have facilitated the process. 

Our so-called ‘nakedness’ (i.e. relative hairlessness), the trait that furnished Morris’s book with its title, and Morris himself with his preferred name for our species, is the first of these traits. 

“Swollen breast-patches in a shaggy-coated female would be far less conspicuous as signalling devices, but once the hair has vanished they would stand out clearly” (p70-1).

Secondly, Morris argues that our bipedalism (i.e. the fact we walk on two legs), and the resulting vertical posture, necessarily puts the female reproductive organs out of sight beneath a woman when she adopts a standing position, and hence generally out of sight of potential mates. There was therefore, Morris suggests, a need for some frontal sexual-signaling.

This, he argues, was further necessitated by what he argues is our species’ natural preference for ventro-ventral (i.e. missionary position) intercourse. 

In particular, Morris argues that human female breasts evolved in order to mimic the appearance of the female buttocks, a form of what he terms ‘self-mimicry’. 

“The protuberant, hemispherical breasts of the female must surely be copies of the fleshy buttocks” (p76).

Everted Lips 

Interestingly, he makes a similar argument in respect of another trait of humans not shared by other extant primates – namely, our everted lips.

The word ‘everted’ refers to the fact that our lips are turned outwards, as is easily perceived by comparing human lips with the much thinner lips of our closest non-human relatives.

These everted lips, he argues, evolved to mimic the appearance of the female labia.

Again, this seems intuitively plausible, since, like female breasts, lips do indeed seem to be a much-sexualized part of the human anatomy, at least in western societies, and in at least some non-western cultures as well, if erotic art is to be taken as evidence.[13]

As with Morris’s idea that female breasts evolved to mimic the appearance of female buttocks, the idea that our lips, and women’s use of lipstick, is designed to imitate the appearance of the female sexual organs has been much mocked.[14]

However, the similarity in appearance of the labia and human lips can hardly be doubted. After all, it is even attested to in the very etymology of the word.

Of course, everted lips reach their most extreme form, among extant sub-species of hominid, in black Africans. This, Morris argues, is because:

“If climatic conditions demand a darker skin, then this will work against the visual signalling capacity of the lips by reducing their colour contrast. If they really are important as visual signals, then some kind of compensating development might be expected, and this is precisely what seems to have occurred, the negroid lips maintaining their conspicuousness by becoming larger and more protuberant. What they have lost in colour contrast, they have made up for in size and shape” (p69-70).[15]

Thus, rejecting the politically-incorrect notion that black Africans are, as a race, somehow more ‘primitive’ than other humans, Morris instead emphasizes the fact that, in respect of this trait (i.e. everted lips), they are actually the most differentiated from non-human primates.

Thus, all humans, compared to non-human primates, have everted lips, but black African lips are the most everted. Therefore, Morris concludes, they are anything but ‘primitive’, even in the special phylogenetic sense:

“Anatomically, these negroid characters do not appear to be primitive, but rather represent a positive advance in the specialization of the lip region” (p70).

In other words, whereas whites and Asians may be more advanced than blacks when it comes to intelligence, brain-size, science, technology and building civilizations, when it comes to everted lips, black Africans have us all beaten! 

Female Orgasm

Morris also discusses the function of the female orgasm, a topic which has subsequently been the subject of much speculation and no little controversy among evolutionists.  

Again, Morris suggests that humans’ unusual vertical posture, brought on by our bipedal means of locomotion, may have been central to the evolution of this trait. 

Thus, if a female were to walk off immediately after sexual intercourse had occurred, then: 

“Under the simple influence of gravity the seminal fluid would flow back down the vaginal tract and much of it would be lost” (p79).

This obviously makes successful impregnation less likely. As a result, Morris concludes: 

“There is therefore a great advantage in any reaction that tends to keep the female horizontal when the male ejaculates and stops copulating” (p79).

The chief adaptive function of the female orgasm therefore, according to Morris, is the tiredness, and perhaps post-coital tristesse, that immediately follows orgasm, and motivates the female experiencing these emotions to remain in a horizontal position even after intercourse has ended, and hence retain the male ejaculate within her reproductive tract. 

The violent response of female orgasm, leaving the female sexually satiated and exhausted has precisely this effect” (p79).[16]

However, the main problem with Morris’s theory is that it predicts that female orgasm should be confined to humans, since, at least among extant primates, we represent the only bipedal ape.  

Morris does indeed argue that the female orgasm is, like our nakedness, bipedal locomotion and large brains, an exclusively human trait, describing how, among most, if not all, non-human primates: 

At the end of a copulation, when the male ejaculates and dismounts, the female monkey shows little sign of emotional upheaval and usually wanders off as if nothing had happened” (p79). 

Unfortunately for Morris’s theory, however, evidence has subsequently accumulated that some non-human (and non-bipedal) female primates do indeed sometimes experience responses seemingly akin to orgasm during copulation. 

Thus, Alan Dixson reports: 

Female orgasm is not confined to Homo sapiens. Putatively homologous responses [have] been reported in a number of non-human primates, including stump-tail and Japanese Macaques, rhesus monkeys and chimpanzees… Pre-human ancestors of Homo sapiens, such as the australopithecines, probably possessed a capacity to exhibit female orgasm, as do various extant ape and monkey species. The best documented example concerns the stump tailed macaque (Macaca arctoides), in which orgasmic uterine contractions have been recorded during female-female mounts… as well as during copulation… De Waal… estimates that female stump-tails show their distinctive ‘climax face’ (which correlates with the occurrence of uterine contractions) once in every six copulations. Vaginal spasms were noted in two female rhesus monkeys as a result of extended periods of stimulation (using an artificial penis) by an experimenter… Likewise, a female chimpanzee exhibited rhythmical vaginal contractions, clitoral erection, limb spasms, and body tension in response to manual stimulation of its genitalia… Masturbatory behaviour, accompanied by behavioural and physiological responses indicative of orgasm, has also been noted in Japanese macaques… and chimpanzees” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Thus, Dixson concludes that Morris’s theory lacks “comparative depth” because: 

Monkey and apes exhibit female orgasm in association with dorso-ventral copulatory postures and an absence of post-mating rest periods” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Certainly, female orgasm, unlike male orgasm, is hardly a prerequisite for successful impregnation. 

Thus, the American physician Robert Latou Dickinson, in his book, Human Sex Anatomy (1933), reports that, in a study of a thousand women who attended his medical practice afflicted with so-called ‘frigidity’ (i.e. incapable of orgasmic response during intercourse): 

The frigid were not notably infertile, having the expected quota of living children, and somewhat less than the average incidence of sterility” (Human Sex Anatomy: p92). 

Thus, as argued by Donald Symons in his groundbreaking The Evolution of Human Sexuality (which I have reviewed here), the most parsimonious theory of the evolution of female orgasm is that it represents simply a non-adaptive byproduct of male orgasm, which is, of course, itself adaptive (see Sherman 1989; see also The Case of the Female Orgasm: Bias in the Science of Evolution).

It thus represents, if you like, the female equivalent of male nipples – only more fun.


Interestingly, Morris also hypothesizes regarding the evolutionary function of another peculiarity of human female reproductive anatomy which, in contrast to the controversy surrounding the evolutionary function, if any, of the female orgasm and clitoris (and of the female breasts), has received surprisingly scant attention from evolutionists – namely, the hymen. 

In most mammals, Morris reports, “it occurs as an embryonic stage in the development of the urogenital system” (p82). However, only in humans, he observes, is it, when not ruptured, retained into adulthood. 

As regards the means by which it evolved, the trait is, Morris concludes, like our large brains, upright posture and hairlessness, “part of the naked ape’s neoteny” (p82). 

However, as with our hairlessness, neoteny is only the means by which this trait came to be retained into adulthood among humans, not the evolutionary reason for its retention.  

In other words, he suggests, the hymen, like other traits retained into adulthood among humans, must serve some evolutionary function. 

What is this evolutionary function? 

Morris suggests that, by making first intercourse painful for females, it deters young women from engaging in intercourse too early, and hence risking pregnancy, without first entering a relationship (‘pair-bond’) of sufficient stability to ensure that male parental investment, and provisioning, will be forthcoming (p73). 

However, pain experienced during intercourse occurs rather too late to deter first intercourse, because, by the time this pain is experienced, intercourse has already occurred. 

Of course, given our species’ unique capacity for speech and communication, the pain experienced during first intercourse could be communicated to young virginal women through conversation with other non-virginal women who had already experienced first intercourse.  

However, this would be an unreliable method of inducing fear and avoidance regarding first intercourse, especially given the sort of taboos regarding discussion of sexual activities which are common in many cultures. 

At any rate, why would natural, or sexual, selection not instead simply directly select for fear and anxiety regarding first intercourse – i.e. a psychological, rather than a physiological, adaptation? After all, as evolutionary psychologists and sociobiologists have convincingly demonstrated, our psychology is no less subject to natural selection than is our physiology. 

Although, as already noted, the evolutionary function, if any, of the female hymen has received surprisingly little attention from evolutionists, I can think of at least three rival hypotheses regarding the evolutionary significance of the hymen. 

First, it may have evolved among humans as a means of advertising to prospective suitors a prospective bride’s chastity, and hence reassuring the suitor of the paternity of offspring that subsequently result and encouraging paternal investment in offspring. 

This would, in turn, increase the perceived attractiveness of the female in question, and help secure her a better match with a higher-status male, and hence increase her own reproductive success. 

Thus, it is notable that, in many cultures, prospective brides are inspected for virginity, in a so-called virginity test, sometimes by the prospective mother-in-law or another older woman, before being considered marriageable. 

Alternatively, and more prosaically, the hymen may simply function to protect against infection, by preventing dirt and germs from entering a woman’s body by this route. 

This, of course, raises the question of why, at least according to Morris, the trait is retained into sexual maturity only among humans.  

Actually, however, as with his claim that the female orgasm is unique to humans, Morris’s claim that only humans retain the hymen into sexual maturity is disputed by other sources. Thus, for example, Catherine Blackledge reports: 

Hymens, or vaginal closure membranes or vaginal constrictions, as they are often referred to, are found in a number of mammals, including llamas, guinea-pigs, elephants, rats, toothed whales, seals, dugongs, and some primates, including some species of galagos, or bushbabys, and the ruffed lemur” (The story of V: p145). 

Finally, even more prosaically, the hymen may simply represent a nonadaptive vestige of the developmental process, or a nonadaptive by-product of our species’ neoteny

This would be consistent with the apparent variation with which the trait presents itself, suggesting that it has not been subject to strong selection pressure that has weeded out suboptimal variations. 

This then would appear to be the most parsimonious explanation. 

Zoological Nomenclature 

The works on human ethology of both Robert Ardrey and Konrad Lorenz attracted much attention and no little controversy in their day. Indeed, they perhaps attracted even more controversy than Morris’s own ‘The Naked Ape’, not least because they tended to place greater emphasis on humankind’s capacity, and alleged innate proclivity, towards violence. 

In contrast, Morris’s own work, placing less emphasis on violence, and more on sex, perhaps jibed better with the zeitgeist of the era, namely the 1960s, with its hippy exhortations to ‘make love not war’. 

Yet, although all these works were first published at around the same time, the mid- to late-sixties (though Ardrey continued publishing books on this subject into the 1970s), Morris’s ‘The Naked Ape’ seems to be the only one of these books that remains widely read, widely known and still in print to this day. 

Partly, I suspect, this reflects its brilliant and provocative title, which works on several levels, scientific and literary.  

Morris, as we have seen, justifies referring to humans by this perhaps unflattering moniker on zoological grounds.  

Certainly, he acknowledges that humans possess many other exceptional traits that distinguish us from all other extant apes, and indeed all other extant mammals. 

Thus, we walk on two legs, use and make tools, have large brains and communicate via a spoken language. The zoologist could therefore refer to us by any number of descriptors – “the vertical ape, the tool-making ape, the brainy ape” are a few of Morris’s own suggestions (p41).  

But, he continues, adopting the disinterested detachment of the proverbial alien zoologist: 

These were not the first things we noticed. Regarded simply as a zoological specimen in a museum, it is the nakedness that has the immediate impact” (p41). 

This name has, Morris observes, several advantages, including “bringing [humans] into line with other zoological studies”, emphasizing the zoological approach, and hence challenging human vanity. 

Thus, he cautions: 

The naked ape is in danger of being dazzled by [his own achievements] and forgetting that beneath the surface gloss he is still very much a primate. (‘An ape’s an ape, a varlet’s a varlet, though they be clad in silk or scarlet’). Even a space ape must urinate” (p23). 

Thus, the title also works on another, metaphorical level, which contributes further to its power.  

The title ‘Naked Ape’ promises to reveal, if you like, the ‘naked’ truth about humanity – to strip our species down in order to reveal what lies beneath the façade and finery. 

Morris’s title reduces us to a zoological specimen, stripped naked on the laboratory table for the purposes of classification and dissection. 

Interestingly, humans have historically liked to regard ourselves as superior to other animals in part precisely because we alone clothe ourselves. 

Thus, besides Adam and Eve, it was only primitive tropical savages who went around in nothing but a loincloth, and they were disparaged as uncivilized precisely on this account. 

Yet even tropical savages wore loincloths. Indeed, clothing, in some form, is sometimes claimed to be a human universal. 

Animals, on the other hand, go completely unclothed – or so we formerly believed. 

But Morris turns this reasoning on its head. In the zoological sense, it is humans who are the naked ones, being largely bereft of hair sufficient to cover our bodies. 

Stripping humanity down in this way, Morris reveals the naked truth that, beneath the finery and façade of civilization, we are indeed an animal, an ape, and a naked one at that. 

The power of Morris’s chosen title ensures that, even if, like all science, his book has quickly dated, the title alone has stood the test of time. It will, I suspect, be remembered, and employed as a descriptor of the human species, long after Morris himself, and the books he authored, are forgotten and cease to be read. 


[1] In fact, as I discuss in a later section of this review, it is possible that the female hymen evolved through just such a process, namely as a means of advertising female virginity and premarital chastity (and perhaps implying post-marital fidelity), and hence as a paternity assurance mechanism, which benefited the female by helping secure male parental investment, provisioning and hypergamy.

[2] Morris is certainly right that anthropologists have overemphasized the exotic and unfamiliar (“bizarre mating customs, strange kinship systems, or weird ritual procedures”, as Morris puts it). Partly, this is simply because, when first encountering an alien culture, it is the unfamiliar differences that invariably stand out, whereas the similarities are often the very things which we tend to take for granted.
Thus, for example, on arriving in a foreign country, we are often struck by the fact that everyone speaks a foreign unintelligible language. However, we often take for granted the more remarkable fact that all cultures around the world do indeed have a spoken language, and also that all languages supposedly even share in common a universal grammar.
However, anthropologists have also emphasized the alien and bizarre for other reasons, not least to support theories of radical cultural malleability, sometimes almost to the verge of outright fabrication (e.g. Margaret Mead’s studies in Samoa).

[3] It is true that there has been some significant human evolution since the dawn of agriculture, notably the evolution of lactase persistence in populations with a history of dairy agriculture. Indeed, as Cochran and Harpending emphasize in their book The 10,000 Year Explosion, far from evolution having stopped at the dawn of agriculture or the rise of ‘civilization’, it has in fact sped up, as a natural reflection of the rapid change in environmental conditions that resulted. Thus, as Nicholas Wade concludes in A Troublesome Inheritance, much human evolution has been “recent, copious and regional”, leading to substantial differentiation between populations (i.e. race differences), including in psychological traits such as intelligence. Nevertheless, despite such tinkering, the core adaptations that identify us as a species were undoubtedly molded in ancient prehistory, and are universal across the human species.

[4] However, it is indeed important to recognize that the lifestyle of our own ancestors was not necessarily identical to that of those few extant hunter-gatherer groups that have survived into modern times, not least because the latter tend to be concentrated in marginal and arid environments (e.g. the San people of the Kalahari Desert, the Eskimos of the Arctic region, the Aboriginals of the Australian outback), with those formerly inhabiting more favorable environments having either themselves transitioned to agriculture or else been displaced or absorbed by more advanced invading agriculturalists with higher population densities and superior weapons and other technologies.

[5] This passage is, of course, sure to annoy feminists (always a good thing), and is likely to be disavowed even by many modern evolutionary psychologists since it relies on a rather crude analogy. However, Morris acknowledges that, since “‘hunting’… has now been replaced by ‘working’”: 

The males who set off on their daily working trips are liable to find themselves in heterosexual groups instead of the old all-male parties. All too often it [the pair bond] collapses under the strain” (p81). 

This factor, Morris suggests, explains the prevalence of marital infidelity. It may also explain the recent hysteria, and accompanying witch-hunts, regarding so-called ‘sexual harassment’ in the workplace.
Relatedly, and also likely to annoy feminists, Morris champions the then-popular man the hunter theory of hominid evolution, which posited that the key development in human evolution, and the development of human intelligence in particular, was the switch from a largely, if not wholly, herbivorous diet and lifestyle, to one based largely on hunting and the consumption of meat. On this view, it was the cognitive demands that hunting placed on humans that selected for increased intelligence among humans, and also the nutritional value of meat that made possible increases in  highly metabolically expensive brain tissue.
This theory has since fallen into disfavor, primarily, it seems, because it gives the starring role in human evolution to men, since men do most of the hunting, and relegates women to a mere supporting role. It hence runs counter to the prevailing feminist zeitgeist.
The main substantive argument given against the ‘man the hunter theory’ is that other carnivorous mammals (e.g. lions, wolves) adapted to carnivory without any similar increase in brain-size or intelligence. Yet Morris actually has an answer to this objection.
Our ancestors, fresh from the forests, were relative latecomers to carnivory. Therefore, Morris contends, had we sought to compete with tigers and wolves by mimicking them (i.e. growing our fangs and claws instead of our brains) we would inevitably have been playing a losing game of evolutionary catch-up. 

Instead, an entirely new approach was made, using artificial weapons instead of natural ones, and it worked” (p22).

However, this theory fails to explain how female intelligence evolved. One possibility is that increases in female intelligence are an epiphenomenal byproduct of selection for male intelligence, rather like the female equivalent of male nipples.
On this view, men would be expected to have higher intelligence than women, just as male nipples are smaller than female nipples, and the male penis is bigger than the female clitoris. That adult men have greater intelligence than adult women is indeed the conclusion of a recent controversial theory, though the difference is very modest (Lynn 1999). There is also evidence that this sexual division of labour between hunting and gathering led to sex differences in spatio-visual intelligence (Eals & Silverman 1994).

[6] Another difference from modern evolutionary psychologists derives from Morris’s ethological approach, which involves a focus on human-typical behaviour patterns. For example, he discusses the significance of body language and facial expressions, such as smiling, which is supposedly homologous with an appeasement gesture (baring clenched teeth, aka a ‘fear grin’) common to many primates, and staring, which represents a form of threat across many species.

[7] Interestingly, however, he acknowledges that this statement does not apply to all human races. Thus, he observes: 

Negroes have undergone a real as well as an apparent hair loss” (p42). 

Thus, it seems blacks, unlike Caucasians, have fewer hairs on their body than do chimpanzees. This fact is further evidence that, contrary to the politically correct orthodoxy, race differences are real and important, though this fact is, of course, played down by Morris and other popular science writers.

[8] Edward O Wilson, for example, in Sociobiology: The New Synthesis (which I have reviewed here) dismisses aquatic ape theory, as then championed by Elaine Morgan in The Descent of Woman, as feminist-inspired pop-science “contain[ing] numerous errors” and as being “far less critical in its handling of the evidence than the earlier popular books”, including, incidentally, that of Morris, who is mentioned by name in the same paragraph (Sociobiology: The New Synthesis: p29).

[9] Actually, I suspect this infamous quotation may be apocryphal, or at best a misconstrued joke. Certainly, while I think Rushton’s theory of race differences (which he calls ‘differential K theory’) is flawed, as I explain in my review of his work, there is nothing in it to suggest a direct trade-off between penis-size and brain-size. Indeed, one problem with Rushton’s theory, or at least his presentation of it, is that he never directly explains how traits such as penis-size actually relate to r/K selection in the first place.
The quotation is usually traced to a hit piece in Rolling Stone, a leftist hippie rag with a reputation for low editorial standards and fake news. However, Jon Entine, in his book on race differences in athletic ability, instead traces it to a supposed interview between Rushton and Geraldo Rivera broadcast on the ‘Geraldo’ show in 1989 (Taboo: Why Black Athletes Dominate Sports: p74).
Interestingly, one study has indeed reported that there is a “demonstrated negative evolutionary relationship”, not between brain-size and penis-size, but rather between brain-size and testicle size, if only on account of the fact that each contains “metabolically expensive tissues” (Pitnick et al 2006).

[10] Interestingly, Baker and Bellis attribute race differences in penis-size, not to race differences in brain-size, but rather to race differences in birth weight. Thus, they conclude:

Racial differences in size of penis (Mongoloid < Caucasoid < Negroid…) reflects racial differences in birth weight… and hence presumably, racial differences in size of vagina” (Human Sperm Competition: p171). 

[11] In other words, a male silverback gorilla may mate with the multiple females in his harem, but each of the females in his harem likely has sex with only one male, namely that silverback. This means that sperm from rival males are rarely simultaneously present in the same female’s oviduct, resulting in minimal levels of sperm competition, which is known to select for larger testicles in particular, and often for more elaborate penes as well.

[12] An alternative theory for the evolution of permanent fatty breasts in women is that they function analogously to camel humps, i.e. as a storehouse of nutrients to guard against, and provide reserves in the event of, future scarcity or famine. On this view, the sexually dimorphic presentation (i.e. the fact that fatty breasts are largely restricted to women) might reflect the caloric demands of pregnancy. Indeed, this might explain why women have higher levels of fat throughout their bodies. (For a recent review of rival theories of human breast evolution see Pawłowski & Żelaźniewicz 2021.)

[13] However, to be pedantic, this phraseology is perhaps problematic, since to say that breasts and lips are ‘sexualized’ in western, and at least some non-western, cultures implicitly presupposes that they are not already inherently sexual parts of our anatomy by virtue of biology, which is, of course, precisely the opposite of what Morris is arguing. 

[14] For example, if I recall correctly, extremely annoying, left-wing 1980s-era British comedian Ben Elton once commented in one of his stand-up routines that the male anthropologist (i.e. Morris) who came up with this idea (namely, that lips and lipstick mimicked the appearance of the labia) had obviously never seen a vagina in his life. He also, if I recall correctly, attributed this theory to the supposed male-dominated, androcentric nature of the field of anthropology – an odd notion given that Morris is not an anthropologist by training, and cultural anthropology is, in fact, one of the most leftist-dominated, feminist-infested, politically correct fields in the whole of academia, this side of ‘gender studies’, which, in the present, politically-correct world of academia, is saying a great deal.

[15] To test this theory, we might look at other relatively dark-skinned, but non-Negroid, populations. Here, the theory receives, at best, only partial support. Thus, Australian Aboriginals, another dark-skinned but unrelated group, do indeed tend to have quite large lips. However, these lips are not especially everted. 
On the other hand, the dark-skinned Dravidian populations of Southern India are not generally especially large-lipped, but are rather quite Caucasoid in facial morphology, and indeed, like the generally lighter-complexioned, Indo-European speaking, ‘Aryan’ populations of northern India, were generally classified as ‘Caucasoid’ by most early-twentieth-century racial anthropologists.

[16] This theory is rather simpler, and has hence always struck me as more plausible, than the more elaborate, but also more widely championed so-called ‘upsuck hypothesis’, whereby female orgasm is envisaged as somehow functioning to suck semen deeper into the cervix. This idea is largely based on a single study involving two experiments on a single subject (Fox et al 1970). However, two other studies failed to produce any empirical support for the theory (Grafenberg 1950; Masters & Johnson 1966). Baker and Bellis’s methodologically problematic work on what they call ‘flowback’ provides, at best, ambivalent evidence (Baker & Bellis 1993). For detailed critique, see Dixson’s Sexual Selection and the Origins of Human Mating Systems: p74-6.


Baker & Bellis (1993) Human sperm competition: ejaculate manipulation by females and a function for the female orgasm. Animal Behaviour 46:887–909. 
Bowman EA (2008) Why the human penis is larger than in the great apes. Archives of Sexual Behavior 37(3): 361. 
Eals & Silverman (1994) The Hunter-Gatherer theory of spatial sex differences: Proximate factors mediating the female advantage in recall of object arrays. Ethology and Sociobiology 15(2): 95-105.
Fox et al (1970) Measurement of intra-vaginal and intra-uterine pressures during human coitus by radio-telemetry. Journal of Reproduction and Fertility 22:243–251. 
Gallup et al (2004). The human penis as a semen displacement device. Evolution and Human Behavior, 24, 277–289 
Gallup & Burch (2004). Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology 2:12-23. 
Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: A preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences 38:749-763 
Goetz et al (2007) Sperm Competition in Humans: Implications for Male Sexual Psychology, Physiology, Anatomy, and Behavior. Annual Review of Sex Research 18:1. 
Grafenberg (1950) The role of urethra in female orgasm. International Journal of Sexology 3:145–148. 
Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures, Evolution and Human Behavior 38(2): 217–226. 
Lynn (1999) Sex differences in intelligence and brain size: A developmental theory. Intelligence 27(1):1-12.
Manning et al (1997) Breast asymmetry and phenotypic quality in women, Ethology and Sociobiology 18(4): 223–236. 
Masters & Johnson (1966) Human Sexual Response (Boston: Little, Brown, 1966).
Mautz et al (2013) Penis size interacts with body shape and height to influence male attractiveness, Proceedings of the National Academy of Sciences 110(17): 6925–30.
Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success, Ethology and Sociobiology 16(3): 207-219. 
Pawłowski & Żelaźniewicz (2021) The evolution of perennially enlarged breasts in women: a critical review and a novel hypothesis. Biological reviews of the Cambridge Philosophical Society 96(6): 2794-2809. 
Pitnick et al (2006) Mating system and brain size in bats. Proceedings of the Royal Society B: Biological Sciences 273(1587): 719-24. 

Pierre van den Berghe’s ‘The Ethnic Phenomenon’: Ethnocentrism and Racism as Nepotism Among Extended Kin

Pierre van den Berghe, The Ethnic Phenomenon (Westport: Praeger 1987) 

Ethnocentrism is a pan-human universal. Thus, a tendency to prefer one’s own ethnic group over and above other ethnic groups is, ironically, one thing that all ethnic groups share in common. 

In ‘The Ethnic Phenomenon’, pioneering sociologist-turned-sociobiologist Pierre van den Berghe attempts to explain this universal phenomenon. 

In the process, he not only provides a persuasive ultimate evolutionary explanation for the universality of ethnocentrism, but also produces a remarkable synthesis of scholarship that succeeds in incorporating virtually every aspect of ethnic relations as they have manifested themselves throughout history and across the world, from colonialism, caste and slavery to integration and assimilation, within this theoretical and explanatory framework. 

Ethnocentrism as Nepotism? 

At the core of Pierre van den Berghe’s theory of ethnocentrism and ethnic conflict is the sociobiological theory of kin selection. According to van den Berghe, racism, xenophobia, nationalism and other forms of ethnocentrism can ultimately be understood as kin-selected nepotism, in accordance with biologist William D Hamilton’s theory of inclusive fitness (Hamilton 1964a; 1964b). 

According to inclusive fitness theory (also known as kin selection), organisms evolved to behave altruistically towards their close biological kin, even at a cost to themselves, because close biological kin share genes in common with one another by virtue of their kinship, and altruism towards close biological kin therefore promotes the survival and spread of these genes. 

Van den Berghe extends this idea, arguing that humans have evolved to behave altruistically towards not only their close biological relatives, but sometimes also their more distant biological relatives – namely, members of the same ethnic group as themselves. 

Thus, van den Berghe contends: 

Racial and ethnic sentiments are an extension of kinship sentiments [and] ethnocentrism and racism are… extended forms of nepotism” (p18). 

Ethnic Groups as Kin Groups?

Before reading van den Berghe’s book, I was skeptical regarding whether the degree of kinship shared among co-ethnics would ever be sufficient to satisfy Hamilton’s rule, whereby, for altruism to evolve, the cost of the altruistic act to the altruist, measured in terms of reproductive success, must be outweighed by the benefit to the recipient, also measured in terms of reproductive success, multiplied by the degree of relatedness of the two parties (Brigandt 2001; cf. Salter 2008; see also On Genetic Interests). 
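Hamilton’s rule, invoked above, can be stated compactly. The following is a standard textbook formulation, not van den Berghe’s own notation:

```latex
% Hamilton's rule: altruism is favoured by selection when
%
%     r b > c
%
% where
%   r = coefficient of relatedness between altruist and recipient
%       (1/2 for full siblings, 1/8 for first cousins, and only a
%        tiny fraction for a randomly chosen co-ethnic)
%   b = benefit to the recipient, measured in reproductive success
%   c = cost to the altruist, in the same units
\[
  rb > c
\]
```

Stated this way, the source of the skepticism is easy to see: for altruism towards an arbitrary co-ethnic to satisfy the inequality, the vanishingly small r must be offset by an enormous ratio of benefit to cost.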

Thus, Brigandt (2001) takes van den Berghe to task for his formulation of what the latter catchily christens “the biological golden rule”, namely: 

Give unto others as they are related unto you” (p20).[1]

However, contrary to both critics of his theory (e.g. Brigandt 2001) and others developing similar ideas (e.g. Rushton 2005; Salter 2000), van den Berghe is actually agnostic on the question of whether ethnocentrism is ever actually adaptive in modern societies, where the shared kinship of large nations or ethnic groups is, as van den Berghe himself readily acknowledges, “extremely tenuous at best” (p243). Thus, he concedes: 

Clearly, for 50 million Frenchmen or 100 million Japanese, any common kinship that they may share is highly diluted … [and] when 25 million African-Americans call each other ‘brothers’ and ‘sisters’, they know that they are greatly extending the meaning of these terms” (p27).[2]

Instead, van den Berghe suggests that nationalism and racism may reflect the misfiring of a mechanism that evolved when our ancestors still lived in small kin-based groups of hunter-gatherers that represented little more than extended families (p35; see also Tooby and Cosmides 1989; Johnson 1986). 

Thus, van den Berghe explains: 

Until the last few thousand years, hominids interacted in relatively small groups of a few score to a couple of hundred individuals who tended to mate with each other and, therefore, to form rather tightly knit groups of close and distant kin” (p35). 

Therefore, in what evolutionary psychologists now call the environment of evolutionary adaptedness or EEA:

The natural ethny [i.e. ethnic group] in which hominids evolved for several thousand millennia probably did not exceed a couple of hundred individuals at most” (p24). 

Thus, van den Berghe concludes: 

The primordial ethny is thus an extended family: indeed, the ethny represents the outer limits of that inbred group of near or distant kinsmen whom one knows as intimates and whom therefore one can trust” (p25). 

On this view, ethnocentrism was adaptive when we still resided in such groups, where members of our own clan or tribe were indeed closely biologically related to us, but is often maladaptive in contemporary environments, where our ethnic group may include literally millions of people. 

Another not dissimilar theory has it that racism in particular might reflect the misfiring of an adaptation that uses phenotype matching, in particular physical resemblance, as a form of kin recognition. 

Thus, Richard Dawkins in his seminal The Selfish Gene (which I have reviewed here), cautiously and tentatively speculates: 

Conceivably, racial prejudice could be interpreted as an irrational generalization of a kin-selected tendency to identify with individuals physically resembling oneself, and to be nasty to individuals different in appearance” (The Selfish Gene: p100). 

Certainly, van den Berghe takes pains to emphasize that ethnic sentiments are vulnerable to manipulation – not least by exploitative elites who co-opt kinship terms such as ‘motherland’, ‘fatherland’ and ‘brothers-in-arms’ to encourage self-sacrifice, especially during wartime (p35; see also Johnson 1987; Johnson et al 1987; Salmon 1998). 

However, van den Berghe cautions, “Kinship can be manipulated but not manufactured [emphasis in original]” (p27). Thus, he observes how: 

Queen Victoria could cut a motherly figure in England; she even managed to proclaim her son the Prince of Wales; but she could never hope to become anything except a foreign ruler of India; [while] the fiction that the Emperor of Japan is the head of the most senior lineage descended from the common ancestor of all Japanese might convince the Japanese peasant that the Emperor is an exalted cousin of his, but the myth lacks credibility in Korea or Taiwan” (p62-3). 

This suggests that the European Union, while it may prove successful as a customs union, single market and even an economic union, and while integration in other non-economic spheres may also prove a success, will likely never command the sort of loyalty and allegiance that a nation-state holds over its people, including, sometimes, the willingness of men to fight and lay down their lives for its sake. This is because its members come from many different cultures and ethnicities, and indeed speak many different languages. 

For van den Berghe, national identity cannot be rooted in anything other than a perception of shared ancestry or kinship. Thus, he observes: 

Many attempts to adopt universalistic criteria of ethnicity based on legal citizenship or acquisition of educational qualifications… failed. Such was the French assimilation policy in her colonies. No amount of proclamation of Algérie française could make it so” (p27). 

Thus, so-called civic nationalism, whereby national identity is based, not on ethnicity, but rather, supposedly, on a shared commitment to certain common values and ideals (democracy, the ‘rule of law’ etc.), as encapsulated by the notion of America as a ‘proposition nation’, is, for van den Berghe, a complete non-starter. 

Yet this is today regarded as the sole basis for national identity and patriotic feeling that is recognised as legitimate, not only in the USA, but also in all other contemporary western polities, where any assertion of racial nationalism or a racially-based or ethnically-based national identity is, at least for white people, anathema and beyond the pale. 

Moreover, due to the immigration policies of previous generations of western political leaders, policies that largely continue today, all contemporary western polities are now heavily multi-ethnic and multi-racial, such that any sense of national identity that was based on race or ethnicity is arguably untenable as it would necessarily exclude a large proportion of their populations.

On the other hand, however, van den Berghe’s reasoning also suggests that the efforts of some white nationalists to construct a pan-white, or pan-European, ethnic identity are also, like the earlier efforts of Japanese imperialist propagandists to create a pan-Asian identity, and of Marcus Garvey’s UNIA to construct a pan-African identity, likely to end in failure.[3]

Racism vs Ethnocentrism 

Whereas ethnocentrism is therefore universal, adaptive and natural, van den Berghe denies that the same can be said for racism: 

There is no evidence that racism is inborn, but there is considerable evidence that ethnocentrism is” (p240). 

Thus, van den Berghe concludes: 

The genetic propensity is to favor kin, not those who look alike” (p240).[4]

As evidence, he cites:

The ease with which parental feelings take precedence over racial feeling in cases of racial admixture” (p240). 

In other words, fathers who sire mixed-race offspring with women of other races, and the women of other races with whom they father such offspring, often seemingly love and care for the resulting offspring just as intensely as do parents whose offspring is of the same race as themselves.[5]

Thus, cultural, rather than racial, markers are typically adopted to distinguish ethnic groups (p35). These include: 

  • Clothing (e.g. hijabs, turbans, skullcaps);
  • Bodily modification (e.g. tattoos, circumcision); and 
  • Behavioural criteria, especially language and dialect (p33).

Bodily modification and language represent particularly useful markers because they are difficult to fake: bodily modification because it is permanent and hence represents a costly commitment to the group (in accordance with Zahavi’s handicap principle), and language/dialect because it is usually acquirable only during a critical period in childhood, after which it is generally not possible to achieve fluency in a second language without retaining a noticeable accent. 

In contrast, the use of racial criteria as a basis for group affiliation is, van den Berghe reports, actually quite rare: 

Racism is the exception rather than the rule in intergroup relations” (p33). 

Racism is also a decidedly modern phenomenon. 

This is because, prior to recent technological advances in transportation (e.g. ocean-going ships, aeroplanes), members of different races (i.e. groups distinguishable on the basis of biologically inherited physiological traits such as skin colour, nose shape, hair texture etc.) were largely separated from one another by the very geographic barriers (e.g. deserts, oceans, mountain ranges) that reproductively isolated them from one another and hence permitted their evolution into distinguishable races in the first place. 

Moreover, when different races did make contact, then, in the absence of strict barriers to exogamy and miscegenation (e.g. the Indian caste system), racial groups typically interbred with one another and hence became phenotypically indistinguishable from one another within just a few generations. 

This, van den Berghe explains, is because: 

Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109). 

Therefore, in the ancestral environment for which our psychological adaptations are designed (i.e. before the development of ships, aeroplanes and other methods of long-distance intercontinental transportation), different races did not generally coexist in the same locale. As a result, van den Berghe concludes: 

We have not been genetically selected to use phenotype as an ethnic marker, because, until quite recently, such a test would have been an extremely inaccurate one” (p 240). 

Humans, then, have simply not had sufficient time to have evolved a domain-specific ‘racism module’ as suggested by some researchers.[6]

Racism is therefore, unlike ethnocentrism, not an innate instinct, but rather “a cultural invention” (p240). 

However, van den Berghe rejects the fashionable, politically correct notion that racism is “a western, much less a capitalist monopoly” (p32). 

On the contrary, racism, while not innate, is not a unique western invention, but rather a recurrent reinvention, which almost invariably arises wherever phenotypically distinguishable groups come into contact with one another, if only because: 

Genetically inherited phenotypes are the easiest, most visible and most reliable predictors of group membership” (p32).

For example, van den Berghe describes the relations between the Tutsi, Hutu and Pygmy Twa of Rwanda and neighbouring regions as “a genuine brand of indigenous racism” which, according to van den Berghe, developed quite independently of any western colonial influence (p73).[7]

Moreover, where racial differences are the basis for ethnic identity, the result is, van den Berghe claims, ethnic hierarchies that are particularly rigid, intransigent and impermeable.

For van den Berghe, this then explains the failure of African-Americans to wholly assimilate into the US melting pot in stark contrast to successive waves of more recently-arrived European immigrants. 

Thus, van den Berghe observes: 

Blacks who have been English-speaking for several generations have been much less readily assimilated in both England… and the United States than European immigrants who spoke no English on arrival” (p219). 

Indeed, language barriers often break down within a generation. 

As Judith Harris emphasizes in support of peer group socialization theory, the children of immigrants whose parents are not at all conversant in the language of their host culture nevertheless typically grow up to speak the language of their host culture rather better than they do the first language of their parents, even though the latter was the cradle tongue to which they were first exposed, and first learnt to speak, inside the family home (see The Nurture Assumption: which I have reviewed here). 

As van den Berghe observes: 

It has been the distressing experience of millions of immigrant parents that, as soon as their children enter school in the host country, the children begin to resist speaking their mother tongue” (p258). 

While displeasing to those parents who wish to pass on their language, culture and traditions to their offspring, this response is wholly adaptive from the perspective of the offspring themselves:  

Children quickly discover that their home language is a restricted medium that [is] not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (p258).[8]

However, whereas one can learn a new language, it is not usually possible to change one’s race – the efforts of Rachel Dolezal, Elizabeth Warren, Jessica Krug and Michael Jackson notwithstanding. That said, due to the one-drop rule and the history of miscegenation in America, passing is sometimes possible (see below). 

Instead, phenotypic (i.e. racial) differences can only be eradicated after many generations of miscegenation, and sometimes, as in the cases of countries like the USA and Brazil, not even then. 

Meanwhile, van den Berghe observes, often the last aspect of immigrant culture to resist assimilation is culinary differences. However, he observes, increasingly even this becomes only a ‘ceremonial’ difference reserved for family gatherings (p260). 

Thus, van den Berghe surmises, Italian-Americans probably eat hamburgers as often as Americans of any other ethnic background, but at family gatherings they still revert to pasta and other traditional Italian cuisine. 

Yet even culinary differences eventually disappear. Thus, in both Britain and America, sausage has almost completely ceased to be thought of as a distinctively German dish (as have hamburgers, originally thought to have been named in reference to the city of Hamburg) and now pizza is perhaps on the verge of losing any residual association with Italians. 

Is Racism Always Worse than Ethnocentrism? 

Yet if racially-based ethnic hierarchies are particularly intransigent and impermeable, they are also, van den Berghe claims, “peculiarly conflict-ridden and unstable” (p33). 

Thus, van den Berghe seems to believe that racial prejudice and animosity tends to be more extreme and malevolent in nature than mere ethnocentrism as exists between different ethnic groups of the same race (i.e. not distinguishable from one another on the basis of inherited phenotypic traits such as skin colour). 

For example, van den Berghe claims that, during World War Two: 

There was a blatant difference in the level of ferociousness of American soldiers in the Pacific and European theaters… The Germans were misguided relatives (however distant), while the ‘Japs’ or the ‘Nips’ were an entirely different breed of inscrutable, treacherous, ‘yellow little bastards.’ This was reflected in differential behavior in such things as the taking (versus killing) of prisoners, the rhetoric of war propaganda (President Roosevelt in his wartime speeches repeatedly referred to his enemies as ‘the Nazis, the Fascists, and the Japanese’), the internment in ‘relocation camps’ of American citizens of Japanese extraction, and in the use of atomic weapons” (p57).[9]

Similarly, in his chapter on ‘Colonial Empires’, by which he means “imperialism over distant peoples who usually live in noncontiguous territories and who therefore look quite different from their conquerors, speak unrelated languages, and are so culturally alien to their colonial masters as to provide little basis for mutual understanding”, van den Berghe writes: 

Colonialism is… imperialism without the restraints of common bonds of history, culture, religion, marriage and blood that often exist when conquest takes place between neighbors” (p85). 

Thus, he claims: 

What makes for the special character of the colonial situation is the perception by the conqueror that he is dealing with totally unrelated, alien and, therefore, inferior people. Colonials are treated as people totally beyond the pale of kin selection” (p85). 

However, I am unpersuaded by van den Berghe’s claim that conflict between more distantly related ethnic groups is always, or even typically, more brutal than that among biologically and culturally more closely related groups. 

After all, even conquests of neighbouring peoples, identical in race, if not always in culture, to the conquering group, are often highly brutal, for example the British in Ireland or the Japanese in Korea and China in the first half of the twentieth century. 

Indeed, many of the most intense and intractable ethnic conflicts are those between neighbours and ethnic kin, who are racially (and culturally) very similar to one another. 

Thus, for example, Catholics and Protestants in Northern Ireland, Greeks and Turks in Cyprus, and Bosnians, Croats, Serbs and Albanians in the Balkans, and even Jews and Palestinians in the Middle East, are all racially and genetically quite similar to one another, and also share many aspects of their culture with one another too. (The same is true, to give a topical example at the time of writing, of Ukrainians and Russians.) However, this has not noticeably ameliorated the nasty, intransigent and bloody conflicts that have been, and continue to be, waged among them.  

Of course, the main reason that most ethnic conflict occurs between close neighbours is because neighbouring groups are much more likely to come into contact, and hence into conflict, with one another, especially over competing claims to land.[10]

Yet these same neighbouring groups are also likely to be related to one another, both culturally and genetically, because of both shared origins and the inevitable history of illicit intermarriage or miscegenation, and cultural borrowings, that inevitably occur even among the most hostile of neighbours.[11]

Nevertheless, the continuation of intense ethnic animosity between ethnic groups who are genetically close to one another seems to pose a theoretical problem, not only for van den Berghe’s theory, but also, to an even greater degree, for Philippe Rushton’s so-called genetic similarity theory (which I have written about here), which argues that conflict between different ethnic groups is related to their relative degree of genetic differentiation from one another (Rushton 1998a; 1998b; 2005). 

It also poses a problem for the argument of political scientist Frank K Salter, who argues that populations should resist immigration by alien immigrants proportionally to the degree to which the alien immigrants are genetically distant from themselves (On Genetic Interests; see also Salter 2002). 

Assimilation, Acculturation and the American Melting Pot 

Since racially-based hierarchies result in ethnic boundaries that are both “peculiarly conflict-ridden and unstable” and also peculiarly rigid and impermeable, van den Berghe controversially concludes: 

There has never been a successful multiracial democracy” (p189).[12]

Of course, in assessing this claim, we must recognize that ‘success’ is not only a matter of degree, but can also be measured on several different dimensions. 

Thus, many people would regard the USA as the quintessential “successful… democracy”, even though the US has been multiracial, to some degree, for the entirety of its existence as a nation. 

Certainly, the USA has been successful economically, and indeed militarily.

However, the US has also long been plagued by interethnic conflict, and, for all its economic and military success, it has yet to find a way to manage this conflict, especially that between blacks and whites.

The USA is also afflicted with a relatively high rate of homicide and gun crime as compared to other developed economies, as well as low levels of literacy, numeracy and educational attainment. Although it is politically incorrect to acknowledge as much, these problems also likely reflect the USA’s ethnic diversity, in particular its large black underclass.

Indeed, as van den Berghe acknowledges, even societies divided by mere ethnicity rather than race seem highly conflict-prone (p186). 

Thus, assimilation, when it does occur, occurs only gradually, and only under certain conditions, namely when the group which is to be assimilated is “similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (p219). 

Thus, van den Berghe observes: 

People tend to assimilate and acculturate when their ethny [i.e. ethnic group] is geographically dispersed (often through migration), when they constitute a numerical minority living among strangers, when they are in a subordinate position and when they are allowed to assimilate by the dominant group” (p185). 

Moreover, van den Berghe is careful to distinguish what he calls assimilation from mere acculturation.  

The latter, acculturation, involves a subordinate group gradually adopting the norms, values, language, cultural traditions and folkways of the dominant culture into whom they aspire to assimilate. It is therefore largely a unilateral process.[13]

In contrast, however, assimilation goes beyond this and involves members of the dominant host culture also actually welcoming, or at least accepting, the acculturated newcomers as a part of their own community.  

Thus, van den Berghe argues that host populations sometimes resist the assimilation of even wholly acculturated and hence culturally indistinguishable out-groups. Examples of groups excluded in this way include pariah castes, such as the untouchable dalits of the Indian subcontinent, the Burakumin of Japan and, at least according to van den Berghe, blacks in the USA.[14]

In other words, assimilation, unlike acculturation, is very much a two-way street. Just as it ‘takes two to tango’, so, as van den Berghe puts it: 

It takes two to assimilate” (p217).  

On the one hand, minority groups may sometimes themselves resist assimilation, or even acculturation, if they perceive themselves as better off maintaining their distinct identity. This is especially true of groups who perceive themselves as being, in some respects, better-off than the host outgroup into whom they refuse to be absorbed. 

Thus, middleman minorities, or market-dominant minorities, such as Jews in the West, the overseas Chinese in contemporary South-East Asia, the Lebanese in West Africa and South Asians in East Africa, being, on average, much wealthier than the bulk of the host populations among whom they live, often perceive no social or economic advantage to either assimilation or acculturation and hence resist the process, instead stubbornly maintaining their own language and traditions and marrying only among themselves. 

The same is also true, more obviously, of alien ruling elites, such as the colonial administrators, and settlers, in European colonial empires in Africa, India and elsewhere, for whom assimilation into native populations would have been anathema.

Passing’, ‘Pretendians’ and ‘Blackfishing’ 

Interestingly, just as market-dominant minorities, middleman minorities, and European colonial rulers usually felt no need to assimilate into the host society in whose midst they lived, because to do so would have endangered their privileged position within this host society, so recent immigrants to America may no longer perceive any advantage to assimilation. 

On the contrary, there may now be an economic disincentive operating against assimilation, at least if assimilation means forgoing the right to benefit from affirmative action in employment and college admissions. 

Thus, in the nineteenth and early twentieth centuries, the phenomenon of passing, at least in America, typically involved non-whites, especially light-skinned mixed-race African-Americans, attempting to pass as white or, if this were not realistic, sometimes as Native American.  

Some non-whites, such as Bhagat Singh Thind and Takao Ozawa, even brought legal actions in order to be racially reclassified as ‘white’ in order to benefit from America’s then overtly racialist naturalization law.

Contemporary cases of passing, however, though rarely referred to by this term, typically involve whites themselves attempting to somehow pass themselves off as some variety of non-white (see Hannam 2021). 

Recent high-profile examples have included Rachel Dolezal, Elizabeth Warren and Jessica Krug. 

Interestingly, all three of these women were both employed in academia and involved in leftist politics – two spheres in which adopting a non-white identity is likely to be especially advantageous, given the widespread adoption of affirmative action in college admissions and appointments, and the rampant anti-white animus that infuses so much of academia and the cultural Marxist left.[15]

Indeed, the phenomenon is now so common that it even has its own associated set of neologisms, such as Pretendian, ‘blackfishing’ and, in Australia, box-ticker.[16]

Indeed, one remarkable recent survey purported to uncover that fully 34% of white college applicants in the United States admitted to lying about their ethnicity on their applications, in most cases either to improve their chances of admission or to qualify for financial aid

Although Rachel Dolezal, Elizabeth Warren and Jessica Krug were all women, this survey found that white male applicants were even more likely to lie about their ethnicity than were white female applicants, with only 16% of white female applicants admitting to lying, as compared to nearly half (48%) of white males.[17]

This is, of course, consistent with the fact that it is white males who are the primary victims of affirmative action and other forms of discrimination.  

This strongly suggests that, whereas there were formerly social (and legal) benefits associated with identifying as white, today the advantages instead accrue to those able to assume a non-white identity.  

For all the talk of so-called ‘white privilege’, when whites and mixed-race people, together with others of ambiguous racial identity, preferentially choose to pose as non-white in order to take advantage of the perceived benefits of assuming such an identity, they are voting with their feet and thereby demonstrating what economists call revealed preferences

This, of course, means that recent immigrants to America, such as Hispanics, will have rather less incentive to integrate into the American mainstream than did earlier waves of European immigrants, such as Irish, Poles, Jews and Italians, the latter having been, primarily, the victims of discrimination rather than its beneficiaries. 

After all, who would want to be just another boring, unhyphenated American when to do so would presumably mean relinquishing any right to benefit from affirmative action in job recruitment or college admissions, not to mention becoming a part of the hated white ‘oppressor’ class? 

In short, ‘white privilege’ isn’t all it’s cracked up to be. 

This perverse incentive against assimilation obviously ought to be worrying to anyone concerned with the future of America as a stable, unified polity. 

Ethnostates – or Consociationalism

Given the ubiquity of ethnic conflict, and the fact that assimilation occurs, if at all, only gradually and, even then, only under certain conditions, a pessimist (or indeed a racial separatist) might conclude that the only way to prevent ethnic conflict is for different ethnic groups to be given separate territories with complete independence and territorial sovereignty. 

This would involve the partition of the world into separate ethnically homogenous ethnostates, as advocated by racial separatists and many in the alt-right. 

Yet, quite apart from the practical difficulties such an arrangement would entail, not least the need for large-scale forcible displacements of populations, this ‘universal nationalism’, as championed by political scientist Frank K Salter among others, would arguably only shift the locus of ethnic conflict from within the borders of a single multi-ethnic state to between those of separate ethnostates – and conflict between states can be just as destructive as conflict within states, as countless wars between states throughout history have amply proven.  

In the absence of assimilation, then, perhaps the fairest and least conflictual solution is what van den Berghe terms consociationalism. This term refers to a form of ethnic power-sharing, whereby elites from both groups agree to share power, each usually retaining a veto power regarding major decisions, and there is proportionate representation for each group in all important positions of power. 

This seems to be roughly the basis of the power sharing agreement imposed on Northern Ireland in the Good Friday Agreement, which was largely successful in bringing an end to the ethnic conflict known as ‘the Troubles’.[18]

On the other hand, however, power-sharing was explicitly rejected by both the ANC and the international anti-apartheid movement as a solution in another ethnically-divided polity, namely South Africa, in favour of majority rule, even though the result has been a situation very similar to the situation in Northern Ireland which led to the Troubles, namely an effective one-party state, with a single party in power for successive decades and institutionalized discrimination against minorities.[19]

Consociationalism, or ethnic power-sharing, is also arguably the model towards which the USA and other western polities are increasingly moving, with quotas and so-called ‘affirmative action’ increasingly replacing the earlier ideals of appointment by merit, color blindness or freedom of association, and multiculturalism and cultural pluralism replacing the earlier ideal of assimilation. 

Perhaps the model consociationalist democracy is van den Berghe’s own native Belgium, where, he reports: 

All the linguistic, class, religious and party-political quarrels and street demonstrations have yet to produce a single fatality” (p199).[20]

Belgium is, however, very much the exception rather than the rule, and, at any rate, though peaceful, remains very much a divided society

Indeed, power-sharing institutions, in giving official, institutional recognition to the existing ethnic divide, serve only to reinforce and ossify that divide, making successful integration and assimilation almost impossible – and certainly even less likely to occur than in the absence of such institutional arrangements. 

Moreover, consociationalism can be maintained, van den Berghe emphasizes, only in a limited range of circumstances, the key criterion being that the groups in question are equal, or almost equal, to one another in status, and not organized into an ethnic hierarchy. 

However, even when the necessary conditions are met, it invariably involves a precarious balancing act. 

Just how precarious is illustrated by the fate of other formerly stable consociationalist states. Thus, van den Berghe notes the irony that earlier writers on the topic had cited Lebanon as “a model [consociationalist democracy] in the Third World” just a few years before the Lebanese Civil War broke out in the 1970s (p191). 

His point is, ironically, only strengthened by the fact that, in the three decades since his book was first published, two of his own examples of consociationalism, namely the USSR and Yugoslavia, have themselves since descended into civil war and fragmented along ethnic lines. 

Slavery and Other Recurrent Situations  

In the central section of the book, van den Berghe discusses such historically recurrent racial relationships as “slavery”, middleman minorities, “caste” and “colonialism”. 

In large part, his analyses of these institutions and phenomena do not depend on his sociobiological theory of ethnocentrism, and are worth reading even for readers unconvinced by this theory – or even by readers skeptical of sociobiology and evolutionary psychology altogether. 

Nevertheless, the sociobiological model continues to guide his analysis. 

Take, for example, his chapter on slavery. 

Although the overtly racial slavery of the New World was quite unique, slavery often has an ethnic dimension, since slaves are often captured during warfare from among enemy groups. 

Indeed, the very word slave is derived from the ethnonym, Slav, due to the frequency with which the latter were captured as slaves, both by Christians and Muslims.[21]

In particular, van den Berghe argues that: 

An essential feature of slave status is being torn out of one’s network of kin selection. This condition generally results from forcible removal of the slave from his home group by capture and purchase” (p120).

This then partly explains, for example, why European settlers were far less successful in enslaving the native inhabitants of the Americas than they were in exploiting the slave labour of African slaves who had been shipped across the Atlantic, far from their original kin groups, precisely for this purpose. 

Thus, for van den Berghe, the quintessential slave is: 

“Not only involuntarily among ethnic strangers in a strange land: he is there alone, without his support group of kinsmen and fellow ethnics” (p115).

Here van den Berghe seemingly anticipates the central insight of Jamaican sociologist Orlando Patterson, who, in his comparative study of slavery, Slavery and Social Death, terms this key characteristic of slavery natal alienation.[22]

This, however, is likely to be only a temporary condition since, if allowed to reproduce, slaves would gradually put down roots, produce new families, and indeed whole communities of slaves.[23]

When this occurs, however, slaves gradually, over generations, cease to be true slaves. The result is that: 

“Slavery can long endure as an institution in a given society, but the slave status of individuals is typically only semipermanent and nonhereditary… Unless a constantly renewed supply of slaves enters a society, slavery, as an institution, tends to disappear and transform itself into something else” (p120). 

This then explains the gradual transformation of slavery during the medieval period into serfdom in much of Europe, and perhaps also the emergence of some pariah castes such as the untouchables of India. 

Paradoxically, van den Berghe argues that racism became particularly virulent in the West precisely because of Western societies’ ostensible commitment to notions of liberty and the rights of man, notions obviously incompatible with slavery. 

Thus, whereas most civilizations simply took the institution of slavery for granted, feeling no especial need to justify its existence, western civilization, given its ostensible commitment to such lofty notions as individual liberty and the equality of man, was always on the defensive, feeling a constant need to justify and defend slavery. 

The main justification hit upon was racialism and theories of racial superiority:

“If it was immoral to enslave people, but if at the same time it was vastly profitable to do so, then a simple solution to the dilemma presented itself: slavery became acceptable if slaves could somehow be defined as somewhat less than fully human” (p115).  

This then explains much of the virulence of western racialism in much of the eighteenth, nineteenth and even early-twentieth centuries.[24]

Another important, and related, ideological justification for slavery was what van den Berghe refers to as ‘paternalism’. Thus, he observes that: 

“All chattel slave regimes developed a legitimating ideology of paternalism” (p131). 

Thus, in the American South, the “benevolent master” was portrayed as a protective “father figure”, while slaves were portrayed as childlike and incapable of living an independent existence, and hence as benefiting from their own enslavement (p131). 

This, of course, was a nonsense. As van den Berghe cynically observes: 

“Where the parentage was fictive, so, we may assume, is the benevolence” (p131). 

Thus, exploitation was, in sociobiological terms, disguised as kin-selected parental benevolence. 

However, despite the dehumanization of slaves, the imbalance of power between slave and master, together with men’s innate and evolved desire for promiscuity, made the sexual exploitation of female slaves by male masters all but inevitable.[25]

As van den Berghe observes: 

“Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109). 

Thus, he notes the hypocrisy whereby: 

“Dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women” (p33). 

The result was that the fictive ideology of ‘paternalism’ that served to justify slavery often gave way to literal paternity of the next generation of the slave population. 

This created two problems. First, it made the racial justification for slavery, namely the ostensible inferiority of black people, ring increasingly hollow, as ostensibly ‘black’ slaves acquired greater European ancestry, lighter skins and more Caucasoid features with each successive generation of miscegenation. 

Second, and more important, it also meant that the exploitation of this next generation of slaves by their owners potentially violated the logic of kin selection, because: 

“If slaves become kinsmen, you cannot exploit them without indirectly exploiting yourself” (p134).[26]

This, van den Berghe surmises, led many slave owners to free those among the offspring of slave women whom they themselves, or their male relatives, had fathered. As evidence, he observes:  

“In all [European colonial] slave regimes, there was a close association between manumission and European ancestry. In 1850 in the United States, for example, an estimated 37% of free ‘negroes’ had white ancestry, compared to about 10% of the slave population” (p132). 

This leads van den Berghe to conclude that many such free people of color – who were referred to as people of color precisely because their substantial degree of white ancestry precluded any simple identification as black or negro – had been freed by their owner precisely because their owner was now also their kinsman. Indeed, many may have been freed by the very slave-master who had fathered them. 

Thus, to give a famous example, Thomas Jefferson is thought to have fathered six offspring, four of whom survived to adulthood, with his slave, Sally Hemings – who was herself already three-quarters white, and indeed Jefferson’s wife’s own half-sister, on account of miscegenation in previous generations. 

Of these four surviving offspring, two were allowed to escape, probably with Jefferson’s tacit permission or at least acquiescence, while the remaining two were freed upon his death in his will.[27]

This seems to have been a common pattern. Thus, van den Berghe reports: 

“Only about one tenth of the ‘negro’ population of the United States was free in 1860. A greatly disproportionate number of them were mulattoes, and, thus, presumably often blood relatives of the master who emancipated them or their ancestors. The only other slaves who were regularly [freed] were old people past productive and reproductive age, so as to avoid the cost of feeding the aged and infirm” (p129). 

Yet this made the continuance of slavery almost impossible, because with each new generation more and more slaves would be freed.  

Other slave systems got around this problem by continually capturing or importing new slaves in order to replenish the slave population. However, this option was denied to American slaveholders by the abolition of the slave trade in 1807. 

Instead, the Americans were unique in attempting to ‘breed’ slaves. This leads van den Berghe to conclude that: 

“By making the slave woman widely available to her master…Western slavery thus literally contained the genetic seeds of its own destruction” (p134).[28]

Synthesising Marxism and Sociobiology 

Given the potential appeal of his theory to nationalists, and even to racialists, it is perhaps surprising that van den Berghe draws heavily on Marxist theory. Although Marxists were almost unanimously hostile to sociobiology, sociobiologists frequently emphasized the potential compatibility of Marxist theory and sociobiology (e.g. The Evolution of Human Sociality). 

However, van den Berghe remains, to my knowledge, the only figure (except myself) to have successfully synthesized sociobiology and Marxism in order to produce novel theory.  

Thus, for example, he argues that, in almost every society in existence, class exploitation is masked by an ideology (in the Marxist sense) that disguises exploitation as either: 

1) Kin-selected nepotistic altruism – e.g. the king or dictator is portrayed as benevolent ‘father’ of the nation; or
2) Mutually beneficial reciprocity – i.e. social contract theory or democracy (p60). 

However, contrary to orthodox Marxist theory, van den Berghe regards ethnic sentiments as more fundamental than class loyalty since, whereas the latter is “dependent on a commonality of interests”, the former is often “irrational” (p243). 

“Nationalist conflicts are among the most intractable and unamenable to reason and compromise… It seems a great many people care passionately whether they are ruled and exploited by members of their own ethny or foreigners” (p62). 

In short, van den Berghe concludes: 

“Blood runs thicker than money” (p243). 

Another difference is that, whereas Marxists view control over the so-called means of production (i.e. the means necessary to produce goods for sale) as the ultimate factor determining exploitation and conflict in human societies, Darwinians instead focus on conflict over access to what I have termed the means of reproduction – in other words, the means necessary to produce offspring (i.e. fertile females, their wombs and vaginas etc.). 

This is because, from a Darwinian perspective: 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (p165). 

Thus, for all his ostensible radicalism, Karl Marx, in his emphasis on economics rather than sex, was, unlike his contemporary Darwin, just another Victorian sexual prude.[29]

Mating, Miscegenation and Intermarriage 

Given that reproduction, not production, is the ultimate focus of individual and societal conflict and competition, van den Berghe argues that questions of equality, inequality and assimilation must ultimately also be determined by reproductive, not economic, criteria. 

Thus, he concludes, intermarriage is the ultimate measure of racial equality and assimilation – especially if it occurs not only frequently but also in both directions (i.e. involves males and females of both ethnicities marrying out, rather than always involving males of the dominant ethnic group taking females of the subordinate group as wives): 

“Marriage, especially if it happens in both directions, that is with both men and women of both groups marrying out, is probably the best measure of assimilation” (p218). 

In contrast, however, he also emphasizes that mere “concubinage is frequent [even] in the absence of assimilation” (p218). 

Moreover, such concubinage invariably involves males of the dominant-group taking females from the subordinate-group as concubines, whereas dominant-group females are invariably off-limits as sexual partners for subordinate group males. 

Thus, van den Berghe observes, although “dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women”, they nevertheless are jealously protective of their own women and enforce strict double-standards (p33). 

For example, historian Wynn Craig Wade, in his history of the Ku Klux Klan (which I have reviewed here), writes: 

“In [antebellum] Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation” (The Fiery Cross: p20). 

The result, van den Berghe reports, is that: 

“The subordinate group in an ethnic hierarchy invariably ‘loses’ more women to males of the dominant group than vice versa” (p75). 

Indeed, this same pattern is even apparent in the DNA of contemporary populations. Thus, geneticist James Watson reports that, whereas the mitochondrial DNA of contemporary Colombians, which is passed down the female line, shows a “range of Amerindian MtDNA types”, the Y-chromosomes of these same Colombians are 94% European. This leads him to conclude: 

“The virtual absence of Amerindian Y chromosome types reveals the tragic story of colonial genocide: indigenous men were eliminated while local women were sexually ‘assimilated’ by the conquistadors” (DNA: The Secret of Life: p257). 

As van den Berghe himself observes: 

“It is no accident that military conquest is so often accompanied by the killing, enslavement and castration of males, and the raping and capturing of females” (p75). 

This, of course, reflects the fact that, in Darwinian terms, the ultimate purpose of power is to maximize reproductive success. 

However, while the subordinate ethnic group as a whole inevitably suffers a diminution in its fitness, there is a decided gender imbalance in who bears the brunt of this loss. 

“The men of the subordinate group are always the losers and therefore always have a reproductive interest in overthrowing the system. The women of the subordinate group, however, frequently have the option of being reproductively successful with dominant-group males” (p27). 

Indeed, subordinate-group females are not only able, and sometimes forced, to mate with dominant-group males, but, in purely fitness terms, they may even benefit from such an arrangement.  

“Hypergamy (mating upward for women) is a fitness enhancing strategy for women, and, therefore, subordinate-group women do not always resist being ‘taken over’ by dominant-group men” (p75). 

This is because, by so doing, they thereby obtain access both to the greater resources that dominant-group males are able to provide in return for sexual access or as provisioning for their offspring, and to the ‘superior’ genes which facilitated the conquest in the first place. 

Thus, throughout history, women and girls have been altogether too willing to consort and intermarry with their conquerors. 

The result of this gender imbalance in the consequences of conquest and subjugation is a lack of solidarity between men and women of the subjugated group. 

“This sex asymmetry in fitness strategies in ethnically stratified societies often creates tension between the sexes within subordinate groups. The female option of fitness maximization through hypergamy is deeply resented by subordinate group males” (p76). 

Indeed, even captured females who were enslaved by their conquerors sometimes did surprisingly well out of this arrangement, at least if they were young and beautiful, and hence lucky enough to be recruited into the harem of a king, emperor or other powerful male.

One slave captured in Eastern Europe even went on to become effective queen of the Ottoman Empire at the height of its power. Hurrem Sultan, as she came to be known, was, of course, exceptional, but only in degree. Members of royal harems may have been secluded, but they also lived in some luxury.

Indeed, even in puritanical North America, where concubinage was very much frowned upon, van den Berghe reports that “slavery was much tougher on men than on women”, since: 

“Slavery drastically reduced the fitness of male slaves; it had little or no such adverse effect on the fitness of female slaves whose masters had a double interest – financial and genetic – in having them reproduce at maximum capacity” (p133). 

Van den Berghe even tentatively ventures: 

“It is perhaps not far-fetched to suggest that, even today, much of the ambivalence in relations between black men and women in America… has its roots in the highly asymmetrical mating system of the slave plantation” (p133).[30]

Miscegenation and Intermarriage in Modern America 

Yet, curiously, patterns of interracial dating in contemporary America are anomalous – at least if we believe the pervasive myth that America is a ‘systemically racist’ society where black people are still oppressed and discriminated against. 

On the one hand, genetic data confirms that, historically, matings between white men and black women were more frequent than the reverse, since African-American mitochondrial DNA, passed down the female line, is overwhelmingly African in origin, whereas their Y chromosomes, passed down the male line, are often European in origin (Lind et al 2007). 

However, recent census data suggests that this pattern is now reversed. Thus, black men are now about two and a half times as likely to marry white women as black women are to marry white men (Fryer 2007; see also Sailer 1997). 

This seemingly suggests that white American males are actually losing out in reproductive competition to black males. 

This observation led controversial behavioural geneticist Glayde Whitney to claim: 

“By many traditional anthropological criteria African-Americans are now one of the dominant social groups in America – at least they are dominant over whites. There is a tremendous and continuing transfer of property, land and women from the subordinate race to the dominant race” (Whitney 1999: p95). 

However, this conclusion is difficult to square with the continued disproportionate economic deprivation of much of black America. In short, African-Americans may be reproductively successful, and perhaps even, in some respects, socially privileged, but, despite benefiting from preferential treatment in employment and in admission to institutions of higher education, they are clearly also, on average, economically much worse-off than whites and Asians in modern America.  

Instead, perhaps the beginnings of an explanation for this paradox can be sought in van den Berghe’s own later collaboration with anthropologist, and HBD blogger, Peter Frost. 

Here, in a co-authored paper, van den Berghe and Frost argue that, across cultures, there is a general sexual preference for females with somewhat lighter complexion than the group average (van den Berghe and Frost 1986). 

However, as Frost explains in a more recent work, Fair Women, Dark Men: The Forgotten Roots of Racial Prejudice, preferences with regard to male complexion are more ambivalent (see also Feinman & Gill 1977). 

Thus, whereas, according to the title of a novel, two films and a hit Broadway musical, ‘Gentlemen Prefer Blondes’ (who also reputedly, and perhaps as a consequence, have more fun), the idealized male romantic partner is instead tall, dark and handsome. 

In subsequent work, Frost argues that ecological conditions in sub-Saharan Africa permitted high levels of polygyny, because women were economically self-supporting, and this increased the intensity of selection for traits (e.g. increased muscularity, masculinity, athleticism and perhaps outgoing, sexually-aggressive personalities) which enhance the ability of African-descended males to compete for mates and attract females (Frost 2008). 

In contrast, Frost argues that there was greater selection for female attractiveness (and perhaps female chastity) in areas such as Northern Europe and Northeast Asia, where, to successfully reproduce, women were required to attract a male willing to provision them during cold winters throughout their gestation, lactation and beyond (Frost 2008). 

This then suggests that African males have simply evolved to be, on average, more attractive to women, whereas European and Asian females have evolved to be more attractive to men. 

This speculation is supported by a couple of recent studies of facial attractiveness, which found that black male faces were rated as most attractive to members of the opposite sex, but that, for female faces, the pattern was reversed (Lewis 2011; Lewis 2012). 

These findings could also go some way towards explaining patterns of interracial dating in the contemporary west (Lewis 2012). 

“The Most Explosive Aspect of Interethnic Relations” 

However, such an explanation is likely to be popular neither with racialists, for whom miscegenation is anathema, nor with racial egalitarians, for whom, as a matter of sacrosanct dogma, all races must be equal in all things, even aesthetics and sex appeal.[31]

Thus, when evolutionary psychologist Satoshi Kanazawa made a similar claim in a 2011 blog post, outrage predictably ensued: the post was swiftly deleted, his then-blog dropped by its host, Psychology Today, and the author himself reprimanded by his employer, the London School of Economics, and forbidden from writing any blog or non-scholarly publications for a whole year. 

Yet all of this occurred within a year of the publication of the two papers cited above that largely corroborated Kanazawa’s finding (Lewis 2011; Lewis 2012). 

Yet such a reaction is, in fact, little surprise. As van den Berghe points out: 

“It is no accident that the most explosive aspect of interethnic relations is sexual contact across ethnic (or racial) lines” (p75). 

After all, from a sociobiological perspective, competition over reproductive access to fertile females is Darwinian conflict in its most direct and primordial form. 

Van den Berghe’s claim that interethnic sexual contact is “the most explosive aspect” of interethnic relations also has support from the history of racial conflict in the USA and elsewhere. 

The spectre of interracial sexual contact, real or imagined, has motivated several of the most notorious racially-motivated ‘hate-crimes’ of American history, from the torture-murder of Emmett Till for allegedly propositioning a white woman, to the various atrocities of the reconstruction-era Ku Klux Klan in defence of the ostensible virtue of ‘white womanhood’, to the recent Charleston church shooting, ostensibly committed in revenge for the allegedly disproportionate rate of rape of white women by black men.[32]

Meanwhile, interracial sexual relations are also implicated in some of American history’s most infamous alleged miscarriages of justice, from the Scottsboro Boys and Groveland Four cases, and the more recent Central Park jogger case, all of which involved allegations of interracial rape, to the comparatively trivial conduct alleged, but by no means trivial punishment imposed, in the so-called Monroe ‘kissing case’. 

Allegations of interracial rape also seem to be the most common precursor of full-blown race riots. 

Thus, in early-twentieth century America, the race riots in Springfield, Illinois in 1908, in Omaha, Nebraska in 1919, in Tulsa, Oklahoma in 1921 and in Rosewood, Florida in 1923 were all ignited, at least in part, by allegations of interracial rape or sexual assault. 

Meanwhile, on the other side of the Atlantic, multi-racial Britain’s first modern post-war race riot, the 1958 Notting Hill riot in London, began with a public argument between an interracial couple, when white passers-by joined in on the side of the white woman against her black Jamaican husband (and pimp) before turning on them both. 

Meanwhile, Britain’s most recent unambiguous race riot, the 2005 Birmingham riot, an entirely non-white affair, was ignited by the allegation that a black girl had been gang-raped by South Asians.

Meanwhile, at least in the west, whites no longer seem to participate in race riots, save as victims. An exception, however, was the 2005 Cronulla riots in Sydney, Australia, which were ignited by the allegation that Middle Eastern males were sexually harassing white Australian girls on Sydney beaches. 

Similarly, in Britain, though riots have yet to result, the spectre of so-called Muslim grooming gangs, preying on, and pimping out, underage white British girls in towns across the north of England, has arguably done more to ignite anti-Muslim sentiment among whites in the UK than a whole series of Jihadist terrorist attacks on British civilian targets. 

Thus, in Race: The Reality of Human Differences (which I have reviewed here, here and here), Sarich and Miele caution that miscegenation, often touted as the universal panacea for racism because, if practiced sufficiently widely, it would eventually eliminate all racial differences, or at least blur the lines between racial groups, may actually, at least in the short-term, incite racist attacks. 

This, they argue, is because: 

“Viewed from the racial solidarist perspective, intermarriage is an act of race war. Every ovum that is impregnated by the sperm of a member of a different race is one less of that precious commodity to be impregnated by a member of its own race and thereby ensure its survival” (Race: The Reality of Human Differences: p256). 

This “racial solidarist perspective” is, of course, a crudely group selectionist view of Darwinian competition, and it leads Sarich and Miele to hypothesize: 

“Paradoxically, intermarriage, particularly of females of the majority group with males of a minority group, is the factor most likely to cause some extremist terrorist group to feel the need to launch such an attack” (Race: The Reality of Human Differences: p255). 

In other words, in sociobiological terms, ‘Robert’, a character from one of Michel Houellebecq’s novels, has it right when he claims: 

“What is really at stake in racial struggles… is neither economic nor cultural, it is brutal and biological: It is competition for the cunts of young women” (Platform: p82). 


[1] Actually, however, contrary to Brigandt’s critique, it is clear that van den Berghe intended his “biological golden rule” only as a catchy and memorable aphorism, crudely summarizing Hamilton’s rule, rather than a quantitative scientific law akin to, or rivalling, Hamilton’s Rule itself. Therefore, this aspect of Brigandt’s critique is, in my view, misplaced. Indeed, it is difficult to see how this supposed rule could be applied as a quantitative scientific law, since relatedness, on the one hand, and altruism, on the other, are measured in different currencies. 
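For reference, Hamilton’s rule – the quantitative criterion with which van den Berghe’s aphorism is here contrasted – states that a gene for altruism will be favoured by selection whenever:

```latex
% Hamilton's rule: altruism is selected for when
\[ rb > c \]
% where:
%   r = coefficient of relatedness between actor and recipient (dimensionless)
%   b = fitness benefit conferred on the recipient
%   c = fitness cost incurred by the actor
```

Since b and c are both measured in units of reproductive fitness, while r is a dimensionless proportion, the inequality is quantitatively well-defined in a way that a general injunction to behave more altruistically toward closer kin is not – which is precisely the point made above regarding the different currencies in which relatedness and altruism are measured.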

[2] Thus, van den Berghe concedes that: 

“In many cases, the common descent ascribed to an ethny is fictive. In fact, in most cases, it is partly fictive” (p27). 

[3] The question of racial nationalism (i.e. encompassing all members of a given race, not just those of a single ethnicity or language group) is actually more complex. Certainly, members of the same race do indeed share some degree of kinship, in so far as they are indeed (almost by definition) on average more closely biologically related to one another than to members of other races – and indeed that relatedness is obviously apparent in their phenotypic resemblance to one another. This suggests that racial nationalist movements such as those of, say, the UNIA or the Japanese imperialists might have more potential as a viable form of nationalism than do attempts to unite racially disparate ethnicities, such as civic nationalism in the contemporary USA. The same may also be true of Oswald Mosley’s Europe a Nation campaign, at least while Europe remained primarily monoracial (i.e. white). However, any such racial nationalism would incorporate a far larger and more culturally, linguistically and genetically disparate group than any form of nationalism that has previously proven capable of mobilizing support.
Thus, Marcus Garvey’s attempt to create a kind of pan-African ethnic identity enjoyed little success and was largely restricted to North America, where African-Americans do indeed share a common language and culture in addition to their race. Similarly, the efforts of Japanese nationalists to mobilize a kind of pan-Asian nationalism in support of their imperial aspirations during the first half of the twentieth century were an unmitigated failure, though this was partly because of the brutality with which they conquered and suppressed the other Asian nationalities whose support for pan-Asianism they intermittently and half-heartedly sought to enlist.
On the other hand, it is sometimes suggested that, in the early twentieth century, a white supremacist ideology was largely taken for granted among whites. However, while to some extent true, this shared ideology of white supremacism did not prevent the untold devastation wrought by the European wars of the early twentieth century, namely World Wars I and II, which Patrick Buchanan has collectively termed The Great Civil War of the West.
Thus, European nationalisms usually defined themselves by opposition to other European peoples and powers. Just as Irish nationalism is defined largely by opposition to Britain, and Scottish nationalism by opposition to England, so English (and British) nationalism has itself traditionally been directed against rival European powers such as France and Germany (and formerly Spain), while French nationalism seems to have defined itself primarily in opposition to the Germans and the British, and German nationalism in opposition to the French and Slavs, etc.
It is true that, in the USA, a kind of