Catherine Hakim’s ‘Erotic Capital’: Too Much Feminism; Not Enough Evolutionary Psychology

Catherine Hakim, Honey Money: The Power of Erotic Capital (London: Allen Lane 2011)

Catherine Hakim, a British sociologist – proudly displaying her own ‘erotic capital’ in a photograph on the dust jacket of the hardcover edition of her book – introduces her concept of ‘erotic capital’ in this work, variously titled either ‘Honey Money: The Power of Erotic Capital’ or ‘Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom’.[1]

Although Hakim insists this concept of ‘erotic capital’ is original to her, in reality it appears to be little more than social science jargon for sex appeal – a new term invented for a familiar concept, introduced to disguise the lack of originality of the concept.[2]

Certainly, Hakim may be right that economists and sociologists have often failed to recognize and give sufficient weight to the importance of sexual attractiveness in human relations. However, this reflects only the prejudices, puritanism and prudery of economists and sociologists, not the originality of the concept.

In fact, the importance of sexual attractiveness in human affairs has been recognized by intelligent laypersons, poets and peasants from time immemorial. It is also, of course, a central focus of much research in evolutionary psychology.

Hakim maintains that her concept of ‘erotic capital’ is broader than mere sex appeal by suggesting that even heterosexual people tend to admire and enjoy the company of individuals of the same sex with high levels of erotic capital:

“Even if they are not lesbian, women often admire other women who are exceptionally beautiful, or well-dressed, and charming. Even if they are not gay, men admire other men with exceptionally well-toned, ‘cut’ bodies, handsome faces and elegant social manners” (p153).

There is perhaps some truth to this.

For example, I recall hearing that the audiences at (male) bodybuilding contests are, perhaps oddly, composed predominantly of heterosexual men. Similarly, since action movies are a genre that appeals primarily to male audiences, it was presumably heterosexual men and boys who represented the main audiences for Arnold Schwarzenegger action movies during his 1980s heyday, and they were surely not attracted by his acting ability. Indeed, I am reminded of this meme.[3]

Likewise, heterosexual women seem, in many respects, even more obsessed with female beauty than are heterosexual men. Indeed, this is arguably not very surprising, since female beauty is of far more importance to women than to men, since their own marital prospects, and hence socioeconomic status, depend substantially upon it.

Thus, just as pornographic magazines, which, until eclipsed in the internet age, attracted an overwhelmingly male audience, were filled with pictures of beautiful, sexy women in various states of undress, so fashion magazines, which attracted an audience as overwhelmingly female as porn’s was male, were likewise filled with pictures of beautiful, sexy women, albeit somewhat less explicit and wearing more clothes.

However, if men do indeed sometimes admire muscular men, and women do sometimes admire beautiful women, I nevertheless suspect people are just as often envious of and hence hostile towards same-sex rivals whom they perceive as more attractive than themselves.

Indeed, there is even some evidence for this.

In her book, Survival of the Prettiest (which I have reviewed here), Nancy Etcoff reviews many of the advantages associated with good looks, as does Catherine Hakim in Honey Money. However, Etcoff, for her part, also identifies at least one area where beautiful women are apparently at a disadvantage – namely, they tend to have difficulties holding down friendships with other women, presumably on account of jealousy:

“Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (Survival of the Prettiest: p50; citing Krebs & Adinolfi 1975).[4]

Interestingly, sexually insightful French novelist Michel Houellebecq, in his novel, Whatever, suggests that the same may be true for exceptionally handsome men. Thus, he writes:

“Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63).

A Sex Difference in Sexiness?

Besides introducing her supposedly novel concept of ‘erotic capital’, Hakim’s book purports to make two original discoveries, namely that:

  1. Women have greater erotic capital than men do; and
  2. Because men have a greater sex drive than women, “there is a systematic and apparently universal male sex deficit: men generally want a lot more sex than they get” (p39).

However, once one recognizes that ‘erotic capital’ essentially amounts to sex appeal, it is doubtful whether these two claims are really conceptually separate.

Rather, it is the very fact that men are not getting as much sex as they want that explains why women have greater sex appeal than men: men are always on the lookout for more sex. Or, to put the matter the other way around, it is women’s greater sex appeal (i.e. their ability to trigger the male sex drive) that explains why heterosexual men want more sex than they can get. After all, sex appeal drives the desire for sex, just as one person’s desire for sex is what invests its object with sex appeal.

Indeed, as Hakim herself acknowledges:

“It is impossible to separate women’s erotic capital, which provokes men’s desire… from male desire itself” (p97).

Evolutionary Psychology

Yet there is a curious omission in Hakim’s otherwise comprehensive review of the literature on this topic, one that largely deprives her exposition of its claims to originality.

Save for two passing references (at p88 and in an endnote at p320), she omits any mention of a theoretical approach in the human behavioural sciences which has, for at least thirty years prior to the publication of her book, not only focused on sexual attractiveness and recognized what Hakim refers to as the ‘universal male sex deficit’ (albeit not by this name), but also provided a compelling theoretical explanation for this phenomenon, something conspicuously absent from her own exposition – namely, evolutionary psychology and sociobiology.

According to evolutionary psychologists, men have evolved a greater desire for sex, especially commitment-free promiscuous sex, because it enabled them to increase their reproductive success at minimal cost, whereas the reproductive rate of women was more tightly constrained, burdened as they are with the costs of both pregnancy and lactation.

This insight, known as Bateman’s principle, dates from over sixty years ago (Bateman 1948); it was rediscovered, refined and formalized by Robert Trivers in the 1970s (Trivers 1972), and applied explicitly to humans from at least the late 1970s with the publication of Donald Symons’ seminal The Evolution of Human Sexuality (which I have reviewed here).

Therefore, Hakim is disingenuous in claiming:

“Only one social science theory [namely, Hakim’s own] accords erotic capital any role at all” (p156).

Yet, despite her otherwise comprehensive review of the literature on sexual attractiveness and its correlates, including citations of some studies conducted by evolutionary psychologists themselves to test explicitly sociobiological theories, one searches the index of her book in vain for any entry for ‘evolutionary psychology’, ‘sociobiology’ or ‘behavioural ecology’.[5]

Yet Hakim’s book often merely retreads ground that evolutionary psychologists covered decades previously.

For instance, Hakim treats male homosexual promiscuity as a window onto the nature of male sexuality when it is freed from the constraints imposed by women (p68-71; p95-6).

Thus, as evidence that men have a stronger sex drive than women, Hakim writes:

“Paradoxically, the most compelling evidence of this comes from homosexuals, who are relatively impervious to the brainwashing and socialization of the heterosexual majority. Lesbian couples enjoy sex less frequently than any other group. Gay male couples enjoy sex more frequently than any other group—and their promiscuous lifestyle makes them the envy of many heterosexual men. Gay men in long-term partnerships who have become sexually bored with each other maintain an active sex life through casual sex, hookups, and promiscuity. Even among people who step outside the heterosexual hegemony to carve out their own independent sexual cultures, men are much more sexually active than women, on average” (p95-6).

Here, Hakim echoes, but conspicuously fails to cite or acknowledge the work of evolutionary psychologist Donald Symons, who, in his seminal The Evolution of Human Sexuality (which I have reviewed here), first published in 1979, some three decades before Hakim’s own book, pioneered this exact same approach, in his ninth chapter, titled ‘Test Cases: Hormones and Homosexuals’. Thus, Symons writes:

“I have argued that male sexuality and female sexuality are fundamentally different, and that sexual relationships between men and women compromise these differences; if so, the sex lives of homosexual men and women—who need not compromise sexually with members of the opposite sex—should provide dramatic insight into male sexuality and female sexuality in their undiluted states. Homosexuals are the acid test for hypotheses about sex differences in sexuality” (The Evolution of Human Sexuality: p292).

To this end, Symons briefly surveys the rampant promiscuity of American gay culture in the pre-AIDS era when he was writing, including the then-prevalent practice of gay men meeting strangers for anonymous sex in public lavatories, gay bars and exclusively gay bathhouses (The Evolution of Human Sexuality: p293-4).

He then contrasts this hedonistic lifestyle with that of lesbians, whose romantic relationships typically mirror heterosexual relationships, being characterized by long-term pair bonds and monogamy.

This similarity between lesbian relationships and heterosexual coupling, and the stark contrast with rampant homosexual male promiscuity, suggests, Symons argues, that, contrary to feminist dogma, which asserts that it is men who both dictate and primarily benefit from the terms of heterosexual coupling, it is in fact women who dictate the terms of heterosexual coupling in accordance with their own interests and desires (The Evolution of Human Sexuality: p300).

Thus, as popular science writer Matt Ridley writes:

“Donald Symons… has argued that the reason male homosexuals on average have more sexual partners than male heterosexuals, and many more than female homosexuals, is that male homosexuals are acting out male tendencies or instincts unfettered by those of women” (The Red Queen: p176).

This is, of course, virtually exactly the same argument that Hakim is making, using exactly the same evidence, but Symons is nowhere cited in her book.

Hakim again echoes the work of Donald Symons in noting the absence of a market for pornography among women to mirror the extensive market for pornography produced for male consumers.

Thus, before the internet age, magazines featuring primarily nude pictures of women commanded sizable circulations despite the stigma attached to their purchase. In contrast, Hakim reports:

“The vast majority of male nude photography is produced by men for male viewers, often with a distinctly gay sensibility… Women should logically be the main audience for male nudes, but they display little interest. Most of the erotic magazines aimed at women in Europe have failed, and almost none of the photographers doing male nudes are women. The taste for erotica and pornography is typically a male interest, whether heterosexual or homosexual in character… The lack of female interest in male nudes (at least to the same level as men) demonstrates both lower female sexual interest and desire, and the higher erotic value of the female nude in almost all cultures – with a major exception being ancient Greece” (p71).

Yet here again Hakim directly echoes, but fails to cite, Donald Symons, who, in his seminal The Evolution of Human Sexuality, citing the Kinsey Reports, observed:

“Enormous numbers of photographs of nude females and magazines exhibiting nude or nearly nude females are produced for heterosexual men; photographs and magazines depicting nude males are produced for homosexual men, not for women” (The Evolution of Human Sexuality: p174).

This Symons calls “the natural experiment of commercial periodical publishing” (The Evolution of Human Sexuality: p182).

Similarly, just as Hakim notes that “the vast majority of male nude photography is produced by men for male viewers, often with a distinctly gay sensibility” (p71), so Symons three decades earlier concluded:

“That homosexual men are at least as likely as heterosexual men to be interested in pornography, cosmetic qualities and youth seems to me to imply that these interests are no more the result of advertising than adultery and alcohol consumption are the result of country and western music” (The Evolution of Human Sexuality: p304).

However, Symons’s pioneering book on the evolutionary psychology of human sexuality is not cited anywhere in Hakim’s book, and neither is it listed in her otherwise quite extensive bibliography.

Sex Surveys

Another odd omission from Hakim’s book is that, while she extensively cites the findings of numerous ‘sex surveys’ replicating the robust finding that men report more sexual partners over any given timespan than women do, she never grapples with, and only once in passing alludes to, the obvious problem that (homosexual encounters aside) every sexual encounter must involve both a male and a female, such that, given the approximately equal numbers of males and females in the population as a whole (i.e. an equal sex ratio), men and women must, on average, have roughly the same number of sex partners over their lifetimes.[6]
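The arithmetic behind this constraint is simple double-entry bookkeeping: every heterosexual partnership adds one partner to some man’s tally and one to some woman’s, so the two totals must be identical, and, with equal numbers of men and women, so must the two averages. A minimal simulation sketch illustrates the point (the population and encounter numbers here are arbitrary illustrative assumptions):

```python
import random

random.seed(0)

# Equal numbers of men and women (an equal sex ratio).
N = 1000
men = [set() for _ in range(N)]    # each man's set of female partners
women = [set() for _ in range(N)]  # each woman's set of male partners

# Simulate heterosexual pairings: every encounter links one man and one woman.
for _ in range(5000):
    m, w = random.randrange(N), random.randrange(N)
    men[m].add(w)
    women[w].add(m)

# Every distinct partnership is counted exactly once on each side,
# so the two totals are necessarily identical...
assert sum(len(p) for p in men) == sum(len(p) for p in women)

# ...and with equal population sizes the *means* must be identical too.
mean_men = sum(len(p) for p in men) / N
mean_women = sum(len(p) for p in women) / N
print(mean_men == mean_women)  # True
```

However unevenly the partnerships are distributed within each sex – a few highly promiscuous individuals versus many abstinent ones – the two averages cannot diverge; only the medians, and the shapes of the two distributions, can differ.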

Two explanations have been offered for this anomalous finding. Firstly, there may be a small number of highly promiscuous women – i.e. prostitutes – whom surveys generally fail to adequately sample (Brewer et al 2000).

Alternatively, it is suggested, not unreasonably, that respondents may be dishonest even in ostensibly anonymous surveys, especially when they deal with sensitive subjects such as a person’s sexual experience and behaviours.

Popular stereotype has it that it is men who lie in sex surveys in order to portray themselves as more promiscuous and hence ‘successful with women’ than they really are.

However, while this claim seems to be mostly conjecture, there is actual data showing that women are also dishonest in sex surveys, lying about their number of sex partners for precisely the opposite reason – namely to appear more innocent and chaste, or at least less rampantly slutty, than they really are, given the widespread demonization of promiscuity among women.

Thus, one interesting study found that women report relatively more sexual partners when they believe their answers are anonymous than when they believe the experimenter may view them, and more still when they believe they are hooked up to a polygraph machine designed to detect dishonest answers. Indeed, in the fake lie-detector condition, female respondents actually reported more sexual partners than did male respondents (Alexander and Fisher 2003).

A further factor may be that men and women define ‘sex’ differently, at least for the purposes of giving answers to sex surveys, perhaps exploiting the same sort of semantic ambiguities that Bill Clinton sought to exploit to evade perjury charges in relation to his claim not to have had ‘sexual relations’ with Monica Lewinsky.

Paternity Certainty, Mate Guarding and the Suppression of Female Sexuality

Hakim claims men have suppressed women’s exploitation of their erotic capital because they are jealous of the fact that women have more of it and wish to stop women taking advantage of their superior levels of ‘erotic capital’. Thus, she claims:

“Men have taken steps to prevent women exploiting their one major advantage over men, starting with the idea erotic capital is worthless anyway. Women who openly deploy their beauty or sex appeal are belittled as stupid, lacking in intellect and other ‘meaningful’ social attributes” (p75).

In particular, Hakim views so-called ‘sexual double-standards’ and the puritanical attitudes expressed by many religions (especially Christianity and Islam) as mechanisms by which men suppress female sexuality and thereby prevent women taking advantage of their greater levels of ‘erotic capital’ or sex appeal as compared to men.

Citing the work of female historian Gerda Lerner, Hakim claims that men established patriarchy and sought to control the sexuality of women so as to assure themselves of the paternity of their offspring:

“Patriarchal systems of control and authority were developed by men who wanted to be sure that their land and property, whatever they were, would be passed on to their own biological children” (p77).

However, she fails to explain the ultimate evolutionary reason why men would ever even be interested in, or care about, the paternity of the offspring who inherit their property.

Here, of course, evolutionary psychology provides a ready and compelling explanation.

Evolutionary psychologists contend that human males’ interest in the paternity of their putative offspring ultimately reflects the sociobiological imperative of maximizing their reproductive success by securing the passage of their genes into subsequent generations, and their concern that their parental investment not be maladaptively misdirected towards offspring fathered, not by themselves, but by a rival male.

Yet Hakim is evidently unaware of, or at least does not cite, the substantial scientific literature in evolutionary psychology on male sexual jealousy and mate guarding (e.g. Wilson & Daly 1992; Buss et al 1992).

Had Hakim familiarized herself with this literature, and with the literature on mate guarding among non-human animals, she might have spared herself her next error. For on the very next page, citing another female historian, one Julia Stonehouse, Hakim purports to trace men’s efforts to control women’s sexuality back to the supposed discovery of the role of sex – and of men – in reproduction in 3000 BC (p78-9).

“At the beginning of civilization, from around 20000 BC to 8000 BC, there were no gods, only goddesses who had the magical power to give birth to new life quite independently… Men were seen to have no role at all in reproduction up to around 3000 BC… Theories of reproduction changed around 3000 BC – man was suddenly presented as sowing the ‘seed’ that was incubated by women to deliver the man’s child… Control of women’s sexuality started only when men believed they planted the unique seed that produces a baby” (p78-9).[7]

This would seem a very odd claim to anyone with a background in biology, especially in sociobiology, behavioural ecology and animal behaviour.

Hakim is apparently unaware that naturalists have long observed analogous patterns of what biologists call mate guarding among non-human species, who are, of course, surely not consciously (or even subconsciously) aware of the relationship between sexual intercourse and reproduction, but who have nevertheless been programmed by natural selection to behave in such a way as to maximise their reproductive success by engaging in such mate-guarding behaviours, even without any conscious awareness of the ultimate evolutionary function of such behaviour.

For example, analogous behaviours are observed among our closest extant nonhuman relatives, namely chimpanzees. Thus, Jane Goodall, in her seminal study of chimpanzee behaviour in the wild, describes how the dominant ‘alpha male’ within a troop of chimpanzees will attempt to prevent any males other than him from mating with a fertile estrus female, though she acknowledges:

“The best that even a powerful alpha male can, realistically, hope to do is to ensure that most of the copulations around the time of ovulation are his” (The Chimpanzees of Gombe: p473).

In addition, she reports how even subordinate males sometimes successfully sequester fertile females into consortships, whereby they seclude fertile females, often forcibly, leading them to a peripheral part of the group’s home range so as to monopolize sexual access to the female in question, until her period of maximum fertility and sexual receptivity has passed (The Chimpanzees of Gombe: p453-465).

Such chimpanzee consortships sometimes involve force and coercion, but at other times seem to be largely consensual. We might therefore characterize them as the rough chimpanzee equivalent of something in between either:

  1. Taking your wife or girlfriend away for a romantic weekend in Paris; or
  2. Kidnapping a teenage girl and keeping her locked in the basement as a sex slave.

Clearly, then, although chimpanzees are almost certainly unaware of the role of sexual intercourse, and of males, in reproduction, they nevertheless engage in mate-guarding behaviours simply because such behaviours tended to maximize their reproductive success in ancestral environments.

Indeed, more controversially, Goodall herself even tentatively proposes an analogy with human sexual jealousy, noting that:

“[Some] aggressive interventions [among chimpanzees] appear to be caused by feelings of sexual and social competitiveness which, if we were describing human behavior, we should label jealousy” (The Chimpanzees of Gombe: p326).

Thus, if our closest relatives among extant primates, along with humans themselves, evince something akin to sexual jealousy and male sexual proprietariness, then it is a fair bet that our common ancestor with chimpanzees did too, and hence that mate-guarding was also practised by our prehuman ancestors, and certainly predates 3000 BC, the oddly specific date posited by Hakim and Stonehouse.

Certainly, mate-guarding does not require, or presuppose, any conscious (or indeed subconscious) awareness of the role of sexual intercourse – or even of males – in reproduction.[8]

Who Is Responsible for the Stigmatization of Promiscuity?

As for Hakim’s claim that men have suppressed women’s exploitation of their erotic capital because they are jealous of the fact that women have more of it and wish to stop women taking advantage of their superior levels of ‘erotic capital’, this also seems very dubious.

Take, for example, the stigmatization of sex workers such as prostitutes, a topic to which Hakim herself devotes considerable attention. Hakim argues that this stigma results from men’s envy of women’s greater levels of erotic capital and their desire to prevent women from exploiting this advantage to the full.

Thus, she writes:

“The most powerful and effective weapon deployed by men to curtail women’s use of erotic capital is the stigmatization of women who sell sexual services” (p75).

Unfortunately, however, this theory is plainly contradicted by the observation that women are actually generally more censorious of promiscuity and prostitution than are men (Baumeister and Twenge 2002).

In contrast, men, for obvious reasons, rather enjoy the company of prostitutes and other promiscuous women – although it is true that, due to concerns regarding paternity certainty, they may not wish to marry them.

Hakim, for her part, acknowledges that:

“The stigma attached to selling sexual services in the Puritan Christian world… is so complete that women are just as likely as men to condemn prostitution and prostitutes. Sometimes women are even more hostile, and demand the eradication (or regulation) of the industry more fiercely than men, a pattern now encouraged by many feminists” (p76).

In an associated endnote, going further, she even concedes:

“In Sweden, the 1996 sex survey showed women objected to prostitutes twice as often as men: two fifths of women versus one fifth of men thought that both buyers and sellers should be treated as criminals” (p282).

Yet this pattern is by no means limited to Sweden, but rather appears to be universal. Thus, Baumeister and Twenge report:

“Women seem consistently more opposed than men to prostitution and pornography. Klassen, Williams, and Levitt (1989) reported the results of a survey asking whether prostitution is ‘always wrong’. A majority (69%) of women, but only a minority (45%) of men, were willing to condemn prostitution in such categorical terms. At the opposite extreme, about three times as many men (17%) as women (6%) responded that prostitution is not wrong at all” (Baumeister and Twenge 2002).

Indeed, men appear to be more liberal, permissive and tolerant, and women more censorious, in respect of virtually all aspects of sexual morality. Thus, women are much more likely than men to disapprove of pornography, promiscuity, prostitution, premarital sex, sex with robots and household appliances and other such fun and healthy recreational activities (see Baumeister and Twenge 2002).[9]

Faced with this overwhelming evidence, Hakim is forced to acknowledge:

“If women in Northern Europe object to the commercial sex industry more strongly than men, this seems to destroy my argument that the stigmatization and criminalization of prostitution is promoted by patriarchal men” (p76).

However, Hakim has a ready, if not entirely convincing, response, maintaining that:

“Over time women have come to accept and actively support ideologies that constrain them” (p77).

And also that:

“Women have generally had the main responsibility for enforcing constraints but did not invent them” (p273).

However, this effectively reduces women to mindless puppets without agency of their own.

It also fails to explain why women are actually more puritanical than are men themselves.

Perhaps evil, devious, villainous, patriarchal men could somehow have manipulated women, against their own better interests, into being somewhat puritanical, or perhaps even as puritanical as men themselves. However, they are unlikely to have succeeded in manipulating women into becoming even more puritanical than those evil male geniuses supposedly doing the manipulating and persuading.

Hakim’s Mythical ‘Male Sex Right’

Hakim suggests that sexual morality reflects what she calls a “male sex right” (p82).

Thus, she argues that the moral opprobrium attaching to gold-diggers and prostitutes reflects the supposed patriarchal assumption that:

“Men should get what they want for free, especially sex” (p79).

“Men should not have to pay women for sexual favours or erotic entertainments [and] men should get what they want for free” (p98).

However, this theory is plainly contradicted by three incontestable facts.

First, promiscuous sex is stigmatized even where it does not involve payment. Thus, if prostitutes are indeed stigmatized, so are ‘sluts’ who engage in sex promiscuously but without any demand for payment.

Secondly, marriage is not condemned by moralists but rather held up as a moral ideal despite the fact that, as Hakim herself acknowledges, it usually involves a trade of sexual access in return for financial support – i.e. disguised (and overpriced) prostitution.

Third, far from advocating, as suggested by Hakim, that men should ‘get sex for free’, Christian moralists traditionally promoted abstinence and celibacy, especially before marriage, outside of marriage, and, for those held in highest regard by the church (i.e. nuns, monks and priests), permanently.[10]

In short, what is condemned by moralists seems to be the promiscuity itself, not the demand for payment.

After all, if there really were a “male sex right”, as contended by Hakim, then rape would presumably be, not a crime, but rather a basic, universal and inalienable human right!

Puritanism and Prudery as Price-fixing Among Prostitutes

A more plausible theory of the stigmatization of sex work might be sought, not in the absurd fallacies of feminism, but in the ‘dismal science’ of economics.

On this view, what is stigmatized is not the sale of sex itself, but rather its availability at too low a price.

Sex available at too low a price risks undercutting other women and driving down the prices that the latter can themselves hope to demand for sexual services.

On this view, if men can get bargain basement blowjobs outside of marriage or similar ‘committed’ relationships, then they will have no need to pursue such relationships and women will lose the economic security with which these relationships provide them.

Hakim claims that sexual morality reflects the assumption that:

“Men should get what they want for free, especially sex” (p79).

My own view is almost the opposite. Sexual morality reflects the assumption, not that men should be able to get sex for free, but rather that they should be obliged to pay a hefty price (e.g. the ultimate price – marriage), and certainly a lot more than is typically demanded by prostitutes.

Aside from myself, this view has been most comprehensively developed by psychologist Roy Baumeister and colleagues. Baumeister and Vohs (2006: p358) write:

“The so-called ‘cheap’ woman (the common use of this economic term does not strike us as accidental), who dispenses sexual favors more freely than the going rate, undermines the bargaining position of all other women in the community, and they become faced with the dilemma of either lowering their own expectations of what men will give them in exchange for sex or running the risk that their male suitors will abandon them in favor of other women who offer a better deal” (Baumeister and Vohs 2006: p358).

On this view, women’s efforts to prevent other women from capitalizing on their sex appeal are, as Baumeister and Vohs put it, analogous to:

“Other rational economic strategies, such as OPEC‘s efforts to drive up the world price of oil by inducing member nations to restrict their production” (Baumeister and Vohs 2006: p357).

Interestingly, an identical analogy – between the supply of oil and of sex – had earlier been adopted by Warren Farrell in his excellent The Myth of Male Power (which I have reviewed here), where he wrote:

“In the Middle East, female sex and beauty are to Middle Eastern men what oil and gas are to Americans: the shorter the supply the higher the price. The more women ‘gave’ away sex for free, or for a small price, the more the value of every woman’s prize would be undermined, which is why anger toward prostitution, purdah violation (removing the veil), and pornography runs so deep, especially among women. It is also why parents told daughters, ‘Don’t be cheap.’ ‘Cheap’ sex floods the market” (The Myth of Male Power: p77).

This then explains why women are generally more puritanical and censorious of promiscuity, prostitution and pornography than are men.

It might also explain why feminism and puritanical anti-sex attitudes tend to go together.

Hakim herself insists that feminist campaigners against prostitution, pornography and other such fun and healthy recreational activities are the unwitting dupes of their patriarchal oppressors, having inadvertently internalized ‘patriarchal’ norms that demonize sex work and women’s legitimate exploitation of their erotic capital for financial gain.

In fact, however, the feminists are probably acting in their own selfish best interests by opposing such activities. As Donald Symons explains in his excellent The Evolution of Human Sexuality (which I have reviewed here):

“The gain in power to control heterosexual interaction that accompanies the reduction of sexual pleasure is probably one reason… that feminism and antisexuality often go together… As with more recent feminist movements the militant suffrage movement in England before World War I ‘never made sexual freedom a goal, and indeed the tone of its pronouncements was more likely to be puritanical and censorious on sexual matters than permissive: ‘Votes for women and chastity for men’ was one of Mrs Pankhurst’s slogans’… Much recent feminist writing about female sexuality… emphasize[s] masturbation and, not infrequently, lesbianism, which in some respects are politically equivalent to antisexuality” (The Evolution of Human Sexuality: p262).

However, if feminist prudery is rational in reflecting the interests of feminist prudes, it does not reflect the interests of women in general. Indeed, to represent the interests of women as a whole (as feminists typically purport to do) is almost impossible, because the interests of different women conflict, not least since women are in reproductive competition primarily with one another. Thus, Symons observes:

Feminist prostitutes and many nonprostitute, heterosexual feminists are in direct competition, and it should be no surprise that they are often to be found at one another’s throats” (The Evolution of Human Sexuality: p260).

This, he explains, is because:

To the extent that heterosexual men purchase the services of prostitutes and pornographic masturbation aids, the market for the sexual services of nonprostitute women is diminished and their bargaining position vis-à-vis men is weakened… The implicit belief of heterosexual feminists such as Brownmiller that, in the absence of prostitution and pornography, men will come to want the same kinds of heterosexual relationships that women want may be an attempt to underpin morally a political program whose primary goal is to improve the feminists’ own bargaining position”  (The Evolution of Human Sexuality: p260).

Hakim does not really address this alternative and, in my view, far more plausible theory of the origins of, and rationale behind, sexual prudery and puritanism. Indeed, she does not even mention this alternative explanation for the stigmatization and criminalization of sex work anywhere in the main body of her text, instead only acknowledging its existence in two endnotes (p273 & p283).

In both endnotes, she gives the theory little consideration, instead rejecting it summarily and rather dismissively. On the first occasion, she gives no real reason for rejecting it, merely commenting that, in her opinion, Baumeister and Twenge (2002), who champion this theory:

Confuse distal and proximate causes, policy-making and policy implementation. Women generally have the main responsibility for enforcing constraints but do not invent them” (p273, note 20).

On the second occasion, she simply claims, in a single throwaway sentence:

The trouble with this argument is of course that marital relationships are not comparable with casual relationships” (p283, note 8).

However, although this sentence includes the words “of course”, its conclusion is by no means self-evident, and Hakim provides no evidence in support of this conclusion in the endnote.

Admittedly, she does briefly expand upon the same idea at a different point in her text, where she similarly contends:

The dividing line between the two markets [i.e. mating markets involving short-term relationships and long-term relationships] is sufficiently important for there to be little or no competition between the two markets” (p235).

This, however, seems doubtful. From a male perspective, both long-term and short-term relationships may serve identical ends – namely access to regular sex.[11]

Therefore, paying a prostitute may represent an alternative, and often cheaper, substitute for the time and expense of conventional courtship.

As Donald Symons puts it:

The payment of money and the payment of commitment are not psychologically equivalent, but they may be economically equivalent in the heterosexual marketplace” (The Evolution of Human Sexuality: p260).

Indeed, conventional courtship often, indeed almost invariably, involves the payment of monies by the male partner (e.g. for dates).

Thus, as I have written previously:

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

Thus, according to Baumeister and Twenge:

Just as any monopoly tends to oppose the appearance of low-priced substitutes that could undermine its market control, women will oppose various alternative outlets for male sexual gratification” (Baumeister and Twenge 2002: p172).

As explained by R. B. Tobias and Mary Marcy in their forgotten early twentieth-century Marxist-masculist masterpiece, Women As Sex Vendors (which I have reviewed here and here), street prostitutes, especially those supporting a pimp, are stigmatized simply because:

These women are selling below market or scabbing on the job” (Women As Sex Vendors: p29).

What’s That Got to Do with the Price of Prostitutes?

Particularly naïve, if not borderline economically illiterate, are Hakim’s conclusions regarding the likely effect of the decriminalization of prostitution on the prices prostitutes are able to demand for their services. Thus, she writes:

The only realistic solution to the male sex deficit is the complete decriminalization of the sex industry. It should be allowed to flourish like other leisure industries. The imbalance in sexual interest would be resolved by the laws of supply and demand, as it is in other entertainments. Men would probably find they have to pay more than they are used to” (p98).

In fact, far from men “find[ing] they have to pay more than they are used to”, the usual consequence of the decriminalization of the sale of a commodity is a fall in the value of this commodity, not a rise.

This is because criminalization imposes additional costs on suppliers, not least the risk of prosecution. These costs are almost invariably more than enough to offset both the absence of regulation and taxation and any reduction in demand attendant to criminalization, a reduction that is generally modest because consumers typically face a lesser risk of prosecution than do suppliers.[12]

Thus, when the Volstead Act came into force in 1920, banning the manufacture and sale of alcoholic beverages throughout the USA, the price of alcohol is said to have roughly tripled or even quadrupled.

Similarly, the legalization of marijuana in many US states seems to have been associated with a drop in its price, albeit not as great a fall as some opponents (and no few advocates!) of legalization apparently anticipated.

Indeed, later in her book, rather contradicting herself, Hakim admits:

In countries where the [sex] trade is criminalized, such as the United States and Sweden, the local price of sexual services can be pushed higher, due to higher risks” (p165).

And also that:

In countries where prostitution is criminalized, fees can sometimes be higher than in countries where it is legal, due to scarcity and higher risks” (p87).

In short, all the evidence suggests that, if prostitution were entirely decriminalized, or, better still, destigmatized as well, then, far from men “find[ing] they have to pay more than they are used to”, in fact the price of prostitutes would drop considerably.

Hakim writes:

Women offering sexual services can earn anywhere between double and fifty times more than they could earn in ordinary jobs, especially jobs at a comparable level of education. This world of greater opportunity is something that men would prefer women not know about. This is the principal reason why providing sexual services is stigmatized… to ensure women never learn anything about it” (p229).

In reality, however, far from this being something that “men would prefer women not know about”, men would benefit if more women were aware of, and took advantage of, the high earnings available to them in the sex industry – because then more women would presumably enter this line of work and hence prices would be driven down by increased competition.

In addition, if more women worked in the sex industry, fewer would be competing for jobs with men in other industries.

In contrast, the main losers would be existing sex workers, who would have to drop their prices in order to cope with increased competition from other service providers – and perhaps also women in pursuit of husbands, who would find that, with bargain-basement blowjobs available from prossies, more and more men would have little need to subject themselves to the inequities and indignities of marriage and conventional courtship, which, of course, offer huge economic benefits to women precisely because they are, compared to purchasing the services of prostitutes, such a bad deal for men.

Sexual Double-Standards Cut Both Ways

Arguing that the stigmatization of sex work is “the most powerful and effective weapon deployed by men to curtail women’s use of erotic capital”, Hakim points to the fact that this “stigma… never affects men who sell sex quite so much” as evidence that this stigma was invented by, and hence serves the interests of, evil male oppressors.

Thus, she contends:

The patriarchal nature of… [negative] stereotypes [about sex workers] is exposed by quite different perceptions of men who sell sex: attitudes here are ambivalent, conflicted, unsure” (p76).

I would contend that there is a more convincing economic explanation as to why males providing sexual services are relatively less stigmatized – namely, that gigolos and rent-boys, in offering services to women and homosexual men, do not threaten to undercut the prices demanded by non-prostitute women on the hunt for husbands.

Indeed, the proof that there is nothing whatever patriarchal about these differing perceptions is provided by the fact that, in respect of long-term relationships, these ‘double-standards’ are reversed.

Thus, whereas ‘homemaker’ or ‘housewife’ is a respectable occupation for a woman, attitudes towards ‘househusbands’ who are financially dependent on their wives are – to adopt Hakim’s own phraseology – ‘ambivalent, conflicted, unsure’.

Meanwhile, men who are financially dependent on their partners and whose partners happen to work in the sex industry – i.e. pimps – are actually criminalized for their purportedly exploitative lifestyle.

However, the lifestyle of a pimp is actually directly analogous to that of a housewife/homemaker – both are economically dependent on their sexual partners and both are notorious for spending an exorbitant proportion of their sexual partner’s earnings on items such as clothing and jewellery.

Women’s Sexual Power – Innate or Justly Earned?

Hakim argues that exploitation of sex appeal for financial gain – e.g. working in the sex industry, marrying for money or flirting with the boss for promotions – ought to be regarded as a perfectly legitimate means of social, occupational and economic advancement.

In defending this proposition, she resorts to ad hominem, asserting (without citing data) that disapproval of the exploitation of erotic capital “almost invariably comes from people who are remarkably unattractive and socially clumsy” (p246).

I will not stoop to respond to this schoolyard-tier substitution of personal abuse for rational debate (roughly, ‘if you disagree with me it’s only because you’re ugly!’), save to comment that the important question is not whether such people are ugly – but rather whether they are right.

Defending women’s exploitation of the male sexual drive, Hakim protests:

Apparently it is fine for men to exploit any advantage they have in wealth or status, but rules are invented to prevent women exploiting their advantage in erotic capital” (p149).

However, this ignores the fact that, whereas men’s greater earnings are a consequence of the fact that they work longer hours, for a greater proportion of their adult lives, in more dangerous and unpleasant working conditions, women’s greater level of sex appeal merely reflects their good fortune in being born female.

Yet Hakim denies erotic capital is “entirely inherited”, instead insisting:

All aspects of erotic capital can be developed, just like intelligence”.[13]

However, no amount of make-up, howsoever skillfully applied, can disguise excessively irregular features, and even expensive plastic surgery and silicone enhancements are recognized as inferior to the real thing.

Moreover, even Hakim would presumably be hard-pressed to deny that the huge advantages attendant on being born female are indeed “entirely inherited”. Indeed, even men who undergo costly gender reassignment surgery are rarely as attractive as the average woman.

However, Hakim insists that:

Women generally have higher erotic capital than men because they work harder at it” (p244).

Here, I suspect Hakim has her causation precisely backwards. In fact, women work harder at being attractive (e.g. applying makeup, spending copious amounts of money on clothes, jewelry etc.) precisely because they rightly realize that good looks have bigger pay-offs for women than for men.

Indeed, Hakim herself admits:

Even if men and women had identical levels of erotic capital, the male sex deficit automatically gives women the upper-hand in private relationships” (p244).[14]

A Darwinian perspective suggests that both women’s greater erotic capital and the male sex deficit result ultimately from the fact that females biologically make a greater investment in offspring and therefore represent the limiting factor in mammalian reproduction.

In short, no amount of hard work will grant to men the sexual power conferred upon women simply by virtue of their fortune in being born as a member of the privileged sex.

Disadvantage, Discrimination and Double-Standards

Given that she believes erotic capital can be enhanced through the investment of time and effort, Hakim denies that the advantages accruing to attractive people are in any way unfair or discriminatory. Similarly, she does not regard the advantages accruing to women on account of their greater erotic capital – such as their greater ability to ‘marry up’ (hypergamy) or earn lucrative salaries in the sex industry – as unfair.

However, oddly, Hakim is all too ready to invoke the malign spectre of ‘discrimination’ on those rare occasions where inequality of outcome seemingly benefits men over women.

Thus, Hakim gripes that:

The entertainment industry… currently recognizes and rewards erotic capital more than any other industry. However, here too there is an unfair bias against women that leads to lower rewards for higher levels of erotic capital than are observed for men. In Hollywood, male stars earn more than female stars, even though female stars do the same work, but going ‘backwards and in high heels’” (p231).

Oddly, however, Hakim neglects to observe that, in Hollywood’s next-door neighbour, the pornographic industry, female performers earn more than male performers, and here the disparity is much greater and affects all performers, not just A-list stars.

This is despite the fact that, in the very same paragraph quoted above, she acknowledges in parentheses that the “entertainment industry… includes the commercial sex industry” (p231).

Neither does Hakim note that, as discussed by Warren Farrell in Why Men Earn More (reviewed here):

Top women models earn about five times more, that is, about 400% more, than their male ‘equivalent’. Put another way, men models earn about 20% of the pay for the same work” (Why Men Earn More: p97-8).

Hakim rightly decries the fact that:

The concept of discrimination is too readily applied in situations where there is differential treatment or outcomes. In many cases, there are simple explanations for such outcomes that do not involve unfair favoritism or intentional bias” (p131-2).

Yet, oddly, despite this wise counsel, Hakim fails to follow her own advice, being all too ready to invoke discrimination as an explanation, especially malign patriarchal discrimination, wherever she finds women at a seeming disadvantage.

For example, many studies find that more physically attractive people earn somewhat higher salaries, on average, than do relatively less attractive people (e.g. Scholz & Sicinski 2015).

However, perhaps surprisingly, the wage premium associated with good looks is generally found to be somewhat greater for males than for females (e.g. Frieze, Olson & Russell 1991).[15]

This is, for Hakim, a form of “hidden sex discrimination” (p194). Thus, she protests:

Attractive men receive a larger beauty premium than do women. This is clear evidence of sex discrimination, especially as all studies show women score higher than men on attractiveness scales” (p246).

At first glance, it may indeed seem anomalous that the wage premium associated with physical attractiveness is rather greater for men than for women. However, rather than rushing to invoke the malign spectre of sexual discrimination, a simpler explanation is readily at hand.

Perhaps relatively more attractive women simply reduce their efforts in the workplace because other means of social advancement are opened up to them by virtue of their physical attractiveness – not least marriage.

After all, as Hakim herself emphasizes elsewhere in her book:

The marriage market remains an avenue for upward social mobility long after the equal opportunities revolution opened up the labor market to women. All the evidence suggests that both routes can be equally important paths to social status and wealth for women in modern societies” (p142).

Therefore, rather than expending effort to advance herself through her career, a young woman, especially an attractive one, may instead focus her attention on marriage as a form of advancement. As the redoubtable HL Mencken put it in his book In Defense of Women:

The time is too short and the incentive too feeble. Before the woman employee of twenty-one can master a tenth of the idiotic ‘knowledge’ in the head of the male clerk of thirty, or even convince herself that it is worth mastering, she has married the head of the establishment or maybe the clerk himself, and so abandons the business” (In Defense of Women: p70).

Or, as Matthew Fitzgerald puts it in his delightfully subtitled Sex-ploytation: How Women Use Their Bodies to Extort Money From Men:

It takes far less effort to warm the bed of a millionaire than to earn a million dollars yourself” (Sex-ploytation: p10).

In short, why work for money when you have the easier option of marrying it instead?

Moreover, evidence suggests that relatively more physically attractive women are indeed able to marry men with higher levels of income and accumulated capital than are relatively less physically attractive women (Elder 1969; Hamermesh and Biddle 1994; Udry & Eckland 1984).

Indeed, some of the same studies that show the lesser benefits of attractiveness for women in terms of earnings and occupational advancement also show greater benefits for women in terms of marriage prospects (e.g. Elder 1969; Udry & Eckland 1984).

Thus, psychologist Nancy Etcoff writes, in her book Survival of the Prettiest (which I have reviewed here):

“The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (Survival of the Prettiest: p65).

Yet, in stark contrast, as even Hakim herself acknowledges, ‘marrying up’ is not an option for even the handsomest of males simply because:

Even highly educated women with good salaries seek affluent and successful partners and refuse to contemplate marrying down to a lower-income man (unlike men)… Even today, most women admit that their goal was always to marry a higher-earning man, and most achieve their goal” (p141).[16]

In short, it seems that Hakim regards any advantage accruing to women on account of their greater erotic capital as natural and legitimate, not to mention fair game for women to exploit to the full and at the expense of men.

However, in those rare instances where sexual attractiveness seemingly benefits men more than it does women, this advantage is then necessarily attributed by Hakim to a “hidden sex discrimination” and hence viewed as inherently malign.

Are Women Wealthier Than Men?

Hakim claims that the importance of what she calls erotic capital has been ignored or overlooked due to what she claims is “the patriarchal bias in social science” (p75).

As anyone remotely familiar with the current state of the social sciences will be all too aware, there is little evidence of any “patriarchal bias in social science”. On the contrary, for over half a century at least, the social sciences have been heavily infested with feminism.

My own view is almost the opposite of Hakim’s – namely, that it is not “patriarchal bias”, but rather feminist bias, that has led social scientists to ignore the importance of sexual attractiveness in social and economic relations – because feminists, in their efforts to portray women as a disadvantaged and oppressed group, have felt the need to ignore or downplay women’s sexual power over men.

In fact, although Hakim accuses them of being unwitting agents of patriarchy, feminists have probably been wise to play down women’s sexual power over men – because once this power is admitted, the fundamental underlying premise of feminism, namely that women represent an oppressed group, is exposed as fallacious.

Indeed, much of the data reviewed by Hakim herself inadvertently proves precisely this.

For example, Hakim observes that:

The marriage market remains an avenue for upward social mobility long after the equal opportunities revolution opened up the labour market to women. All the evidence suggests that both routes can be equally important paths to wealth for women in modern societies” (p142).

As a consequence, Hakim observes that:

There are more female than male millionaires in a modern country such as Britain. Normally, men can only make their fortune through their jobs and businesses. Women achieve the same wealthy lifestyle and social advantages through marriage as well as through career success” (p24).

There are more female than male millionaires in Britain. Some women get rich through their own efforts, while others are wealthy widows and divorcées who married well” (p142).

Here, though, I suspect Hakim actually downplays the extent of the gender differential. Certainly, she is right in observing that “normally, men can only make their fortune through their jobs and businesses” and hence that:

Handsome men who marry into money are still rare compared to the numbers of beautiful women who do this” (p24).

However, while she is right that “some women get rich through their own efforts, while others are wealthy widows and divorcées who married well”, I suspect she is exaggerating when she claims “both routes can be equally important paths to wealth for women in modern societies”.

In fact, while many women become rich through marriage or inheritance, self-made millionaires seem to be overwhelmingly male.

Thus, most self-made millionaires make their fortunes through business and investment. However, as Warren Farrell observes in his excellent Why Men Earn More (reviewed here and here), whereas feminists blame the lower average earnings of women as compared to men on discrimination by employers, in fact, among the self-employed and business owners, where discrimination by employers is not a factor, the disparity in earnings between men and women is even greater than among employees.

Thus, Farrell reports:

When there was no boss to ‘hold women back’, women who owned their own businesses netted, at the time (1970s through 1990s) between 29% and 35% of what men netted; today, women who own their own businesses net only 49% of their male counterparts’ net earnings” (Why Men Earn More: pxx).

On the other hand, focussing on the ultra-rich, in the latest 2023 Forbes 400 list of the richest Americans, there are only sixty women, just fifteen percent of the total, of whom only twelve (i.e. just twenty percent) are, Forbes magazine reports, ‘self-made’, in contrast to fully seventy percent of the men in the list.

None of the six richest women on the list seem to have played any part in accumulating their own wealth, each either inheriting it from a deceased father or husband, or expropriating it from their husbands in the divorce courts.[17]

As Ernest Belfort Bax wrote over a century ago in The Legal Subjection of Men (reviewed here):

The bulk of women’s property, in 99 out of every 100 cases, is not earned by them at all. It arises from gift or inheritance from parents, relatives, or even the despised husband. Whenever there is any earning in the matter it is notoriously earning by some mere man or other. Nevertheless, under the operation of the law, property is steadily being concentrated into women’s hands” (The Legal Subjection of Men: p9).

This, of course, suggests that it is men rather than women who should be campaigning for ‘equal opportunity’, because, whereas most traditionally male careers are now open to both sexes, the opportunity to advance oneself through marriage remains almost the exclusive preserve of women, since, as Hakim herself acknowledges:

Even highly educated women with good salaries seek affluent and successful partners and refuse to contemplate marrying down to a lower-income man (unlike men)” (p141).

Women also have other career opportunities available to them that are largely closed to men, or at least to heterosexual men – namely, careers in the sex industry.

Yet such careers can be highly lucrative. Thus, Hakim herself reports that:

Women offering sexual services can earn anywhere between twice and fifty times what they could earn in ordinary jobs, especially jobs at a comparable level of education” (p229).

Yet men are not only denied these easy and lucrative means of financial enrichment, but are also driven by what Hakim calls the ‘male sex deficit’ to spend a large portion of whatever wealth they can acquire attempting to buy the sexual services and affection of women, whether through paying for sex workers or through conventional courtship.

Thus, as I have written previously:

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

As a consequence, despite working fewer hours, for a lesser proportion of their adult lives in safer and more pleasant working environments, women are estimated by researchers in the marketing industry to control around 80% of consumer spending.

Yet Hakim goes even further, arguing that both what she calls the ‘male sex deficit’ and the greater levels of erotic capital possessed by women place women at an advantage over men in all their interactions with one another, on account of what she refers to as ‘the principle of least interest’.

In other words, since men want sex with women more than women want sex with men, all else being equal, women almost always have the upper-hand in their relationships with men.[18]

Indeed, Hakim goes so far as to claim that men are condemned to a:

Semi-permanent state of sexual desire and frustration… Suppressed and unfulfilled desires permeate all of men’s interactions with women” (p228).

Yet, here, Hakim surely exaggerates.

Indeed, to take Hakim’s words literally, one would almost be led to believe that men walk around with permanent erections.

I doubt any man is ever really consumed with overwhelming “suppressed and unfulfilled desires” when conversing with, say, the average fat middle-aged woman in the contemporary west. Indeed, even when engaging in polite pleasantries, routine conversation, or even mild flirtation with genuinely attractive young women, most men are capable of maintaining their composure without visibly salivating or contemplating rape.

Yet, for all her absurd exaggeration, Hakim does have a point. Indeed, she calls to mind Camille Paglia’s memorable and characteristically insightful description of men as:

Sexual exiles… [who] wander the earth seeking satisfaction, craving and despising, never content. There is nothing in that anguished motion for women to envy” (Sexual Personae: p19).

Therefore, Hakim is right to claim that, by virtue of ‘the principle of least interest’, women generally have the upper-hand in interactions with men.

Indeed, her conclusions are dramatic – and, though she seemingly does not fully appreciate their implications – actually directly contradict and undercut the underlying premises of feminism – namely that women are disadvantaged as compared to men.[19]

Thus, she observes that:

At the national level, men may have more power than women as a group – they run governments, international organizations, the biggest corporations and trade unions. However, this does not automatically translate into men having more power at the personal level. At this level, erotic capital and sexuality are just as important as education, earnings and social networks… Fertility further enhances women’s power” (p245).

Indeed, she concludes:

In societies where men retain power at the national level, it is entirely feasible for women to have greater power… for private relationships” (p245).

Yet women’s power over their husbands, and women’s sexual power over men in general, also confers upon women both huge economic power and even indirect political power, especially given that men, including powerful men, have a disposition to behave chivalrously and protectively towards women.

Thus, one is reminded of Arthur Schopenhauer’s observation, in his brilliant, celebrated and infinitely insightful essay On Women, of how:

Man strives in everything for a direct domination over things, either by comprehending or by subduing them. But woman is everywhere and always relegated to a merely indirect domination, which is achieved by means of man, who is consequently the only thing she has to dominate directly” (Schopenhauer, On Women).

Indeed, in this light, we might do no better than contemplate in relation to our own cultures the question Aristotle posed of the ancient Spartans over two thousand years ago:

What difference does it make whether women rule, or the rulers are ruled by women?” (Aristotle, Politics II).

References

Alexander & Fisher (2003) Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality, Journal of Sex Research 40(1): 27-35.
Bateman (1948), Intra-sexual selection in Drosophila, Heredity 2 (Pt. 3): 349-368.
Baumeister & Twenge (2002) Cultural Suppression of Female Sexuality, Review of General Psychology 6(2): 166-203.
Baumeister & Vohs (2004) Sexual Economics: Sex as Female Resource for Social Exchange in Heterosexual Interactions, Personality and Social Psychology Review 8(4): 339-363.
Brewer, Garrett, Muth & Kasprzyk (2000) Prostitution and the sex discrepancy in reported number of sexual partners, Proceedings of the National Academy of Sciences USA 97(22): 12385-12388.
Buss (1989) Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures, Behavioral and Brain Science 12(1):1-14.
Buss, Larson, Westen & Semmelroth (1992) Sex Differences in Jealousy: Evolution, Physiology, and Psychology, Psychological Science 3(4):251-255.
Elder (1969) Appearance and education in marriage mobility. American Sociological Review, 34, 519-533.
Frieze, Olson & Russell (1991) Attractiveness and Income for Men and Women in Management, Journal of Applied Social Psychology 21(13): 1039-1057.
Hamermesh & Biddle (1994) Beauty and the labor market. American Economic Review, 84, 1174-1194.
Kanazawa (2011) Intelligence and physical attractiveness. Intelligence 39(1): 7-14.
Kanazawa & Still (2018) Is there really a beauty premium or an ugliness penalty on earnings? Journal of Business and Psychology 33: 249-262.
Scholz & Sicinski (2015) Facial Attractiveness and Lifetime Earnings: Evidence from a Cohort Study, Review of Economics and Statistics (2015) 97 (1): 14–28.
Trivers (1972) Parental investment and sexual selection. In B. Campbell (Ed.) Sexual Selection and the Descent of Man, 1871-1971 (pp 136-179). Chicago, Aldine.
Udry and Eckland (1984) Benefits of being attractive: Differential payoffs for men and women.Psychological Reports, 54: 47-56.
Wilson & Daly (1992) The man who mistook his wife for a chattel. In: Barkow, Cosmides & Tooby, eds. The Adapted Mind, New York: Oxford University Press,1992: 289-322.


[1] Both editions appear to be largely identical in their contents, though I do recall noticing a few minor differences. Page numbers cited in the current review refer to the former edition, namely Money Honey: the Power of Erotic Capital, published in 2011 by Allen Lane, which is the edition reviewed in this post.

[2] One is inevitably reminded here of Richard Dawkins’s ‘First Law of the Conservation of Difficulty’, whereby Dawkins not inaccurately observes ‘obscurantism in an academic subject is said to expand to fill the vacuum of its intrinsic simplicity’.

[3] In this context, it is interesting to note that Arnold Schwarzenegger and other bodybuilders with extremely muscular physiques do not seem to be generally regarded as especially handsome and attractive by women. Anecdotally, women seem to prefer men of a more lean and athletic physique, in preference to the almost comically exaggerated musculature of most modern bodybuilders. As Nancy Etcoff puts it in Survival of the Prettiest (reviewed here), women seem to prefer:

“Men [who] look masculine but not exaggeratedly masculine” (Survival of the Prettiest: p159).

In writing this, Etcoff seemed to have in mind primarily male facial attractiveness. However, it seems to apply equally to male musculature. For more detailed discussion on this topic, see here.

[4] Although I here attribute beautiful women’s unpopularity among other women to jealousy on the part of the latter, there are other possible explanations for this phenomenon. As I discuss in my review of Etcoff’s book (available here), another possibility is that beautiful women are indeed simply less likeable in terms of their personality. Perhaps, having grown accustomed to being fawned over and receiving special privileges on account of their looks, especially from men, they gradually become, over time, entitled and spoilt, something that is especially apparent to other women, who are immune to their physical charms.

[5] Hakim mentions evolutionary psychology as an approach, to my recollection, only once, in passing, in the main body of her text. Here, she associates the approach with ‘essentialism’, a scare-word, and straw man, employed by social scientists to refer to biological theories of sex and race differences, which Hakim herself defines as referring to “a specific outdated theory that there are important and unalterable biological differences between men and women”, as indeed there undoubtedly are (p88).
Evolutionary psychology as an approach is also mentioned, again in passing, in one of Hakim’s endnotes (p320, note 22). As mentioned above, Hakim also cites several studies conducted by evolutionary psychologists to test specifically evolutionary hypotheses (e.g. Kanazawa 2011; Buss 1989). Therefore, it cannot be that Hakim is simply unaware of this active research programme and theoretical approach.
Rather, it appears that she either does not understand how Bateman’s principle both anticipates, and provides a compelling explanation for, the phenomena she purports to uncover (namely, the ‘male sex deficit’ and the greater ‘erotic capital’ of women); or that she disingenuously decided not to discuss evolutionary psychology and sociobiology precisely because she recognizes the extent to which it deprives her own theory of its claims to originality.

[6] Actually, due to greater male mortality and the longer average lifespan of women, there are somewhat more women than men in the adult population. However, this is not sufficient to account for the disparity in the number of sex partners reported in sex surveys, especially since the disparity becomes more pronounced only in older cohorts, who tend to be less sexually active. Indeed, since female fertility is more tightly constrained by age than is male fertility, the operational sex ratio may actually reveal a relative deficit of fertile females.

[7] Before the role of men in impregnating women was discovered, and in those premodern societies where “this idea never emerged”, there was, Hakim reports, ‘free love’ and rampant promiscuity, sexual jealousy presumably being unknown (p79). Of course, we have heard these sorts of ideas before, not least in the discredited Marxian concept of ‘primitive communism’ and in Margaret Mead’s famous study of adolescence in Samoa. Unfortunately, however, Mead’s claims have been thoroughly debunked, at least with regard to Samoan culture. Indeed, it is notable that, in the examples of such premodern cultures supposedly practising ‘free love’ that are cited by Hakim, Samoa is conspicuously absent.

[8] This error is analogous to the so-called ‘Sahlins fallacy’, so christened by Richard Dawkins in his paper ‘Twelve misunderstandings of kin selection’, whereby celebrated cultural anthropologist (and left-wing political activist) Marshall Sahlins, in his book The Use and Abuse of Biology (reviewed here), assumed that, for humans, or other animals, to direct altruism towards biological relatives proportionate to their degree of relatedness as envisaged by kin selection and inclusive fitness theory, they must necessarily understand the mathematical concept of fractions.

[9] Only in respect of homosexuality, especially male homosexuality, are these attitudes oddly reversed. Here, women are more accepting and tolerant, whereas men are much more likely to disapprove of and indeed be repulsed by the idea of male homosexuality in particular (though heterosexual men often find the idea of lesbian sex arousing, at least until they witness for themselves what most real lesbian women actually look like).

[10] Thus, Hakim herself observes that, under Christian morality:

“Celibacy was praised as admirable, then enforced on Catholic priests, monks, and nuns” (p80).

[11] If long-term and short-term sexual relationships both serve similar functions for men – namely, as a means of obtaining regular sexual intercourse – perhaps women do indeed conceive of such relationships as representing entirely separate marketplaces, since, unlike for heterosexual men, short-term commitment-free sex is much easier for women to obtain than is a long-term relationship. This might then explain Hakim’s assumption that the two markets are entirely separate, since, as a woman herself, this is how she has always perceived them.
However, I suspect that, even for women, the two spheres are not entirely conceptually separate. For example, women sometimes enter short-term commitment-free sexual relationships with men, especially high-status men, in the hope that such a relationship might later develop into a long-term romantic relationship.

[12] Besides the risk of criminal prosecution, the costs for suppliers associated with criminalization include the inability of suppliers to resort to legal mechanisms either for protection or to enforce contracts. This is among the reasons that, in many jurisdictions where prostitution is criminalized, both prostitutes and their clients are at considerable risk of violence, including extortion, blackmail, rape and robbery. It is also why suppliers often turn instead to other means of protection, providing an opening for organized crime elements.

[13] In fact, it is a fallacy to suggest that because something can be enhanced or improved by “time and effort”, this means it is not “entirely inherited”, since the tendency to successfully devote “time and effort” to self-improvement is at least partly a heritable aspect of personality, associated with the personality factor identified by psychometricians as conscientiousness. Behavioural dispositions are, in principle, no less heritable than morphology.

[14] This, of course, implies that the greater female level of ‘erotic capital’ is separable from the ‘male sex deficit’, when, in reality, as I have already discussed, the ‘male sex deficit’ provides an obvious explanation for why women have greater sex appeal, since, as Hakim herself acknowledges:

“It is impossible to separate women’s erotic capital, which provokes men’s desire… from male desire itself” (p97).

[15] Although there is a robust and well-established correlation between attractiveness and earnings, this does not necessarily prove that it is attractiveness itself that causes attractive people to earn more. In particular, Kanazawa and Still argue that more attractive people also tend to be more intelligent, and to have other personality traits that are themselves associated with higher earnings (Kanazawa and Still 2018).

[16] Indeed, more affluent women are actually even more selective regarding the socioeconomic status that they demand in a prospective partner, preferring partners who are even higher in socioeconomic status than they are themselves (Wiederman & Allgeier 1992; Townsend 1989).
This, of course, contradicts the feminist claim that women only aspire to marry up because, due to supposed discrimination, ‘patriarchy’, male privilege and other feminist myths, women lack the means to advance in social status through occupational means.
In fact, the evidence implies that the feminists have their causation exactly backwards. Rather than women looking to marriage for social advancement because they lack the means to achieve wealth through their careers due to discrimination, instead the better view is that women do not expend great effort in seeking to advance themselves through their careers precisely because they have the easier option of achieving wealth and privilege by simply marrying into it.
Unfortunately, the fact that even women with high salaries and high socioeconomic status insist on marrying men of similarly high, or preferably even higher, status than themselves means that feminist efforts to increase the number of women in high-status occupations, including by methods such as affirmative action and other forms of overt and covert discrimination against men, also have the secondary effect of reducing rates of marriage, and hence of fertility. The higher the socioeconomic status and earnings of women, the fewer men there are of the same or higher status for them to marry, particularly as other high-status, high-income occupations are themselves increasingly occupied by women. This may be one major causal factor underlying one of the leading problems facing developed economies today, namely their failure to reproduce at replacement levels, and is one of many reasons we must stridently oppose such feminist policies.

[17] Of course, being ‘self-made’ is a matter of degree. Among the six richest women in America listed by Forbes, the only ambiguous case, who might have some (albeit very weak) claim to having herself earned some small part of her own fortune rather than merely inherited it, is the sixth richest, Abigail Johnson, currently CEO of the company established by her grandfather and formerly run by her father. Although she certainly did not build her own fortune, but rather very much inherited it, she has at least been involved in running the family business that she inherited. The five richest women in America, in contrast, have no claim whatsoever to having earned their own fortunes. On the contrary, all seemingly inherited their wealth from male relatives (e.g. husbands, fathers), except for the former wife of Jeff Bezos, who instead expropriated the monies of her husband through divorce. According to Forbes, the richest ‘self-made’ woman on the list is the seventh richest woman in America, and thirty-eighth richest person overall, Diane Hendricks. However, since she founded the company upon which her fortune is built together with her then-husband, it is reasonable to suppose, given the rarity of ‘self-made’ female millionaires, that he in fact played the decisive role in establishing the family’s wealth.

[18] Actually, however, the situation is more complex. While men certainly want sex more than women do, especially promiscuous sex outside a committed relationship, women surely have a greater desire for long-term, committed, romantic relationships than men do. This complicates the calculus with respect to who has the least interest in a given relationship.
On the other hand, however, the reason why women have a strong desire for long-term committed romantic relationships is, at least in part, the financial benefits and security with which such relationships typically provide them. These one-sided benefits are, of course, further evidence that women do indeed have the upper hand in their relationships with men, even, perhaps especially, in long-term committed relationships.
Men can, of course, also obtain sex outside of committed relationships, not least from prostitutes. Yet the very fact that heterosexual prostitution almost invariably involves the man paying the woman for sex, rather than vice versa, is further proof that, again, women do indeed have the upper hand, on account of ‘the principle of least interest’.

[19] A full understanding of the extent to which women’s sexual power over men confers upon them an economically privileged position is provided by several works pre-dating Hakim’s own, namely Esther Vilar’s The Manipulated Man (which I have reviewed here), Matthew Fitzgerald’s delightfully subtitled Sex-Ploytation: How Women Use Their Bodies to Extort Money from Men, R.B. Tobias and Mary Marcy’s forgotten early twentieth-century Marxist-masculist masterpiece Women As Sex Vendors (which I have reviewed here) and Warren Farrell’s The Myth of Male Power (which I have reviewed here and here).

A Rational Realist Review of Matt Ridley’s ‘The Rational Optimist’

Matt Ridley, The Rational Optimist (London: Fourth Estate, 2011)

Evolutionary psychology and sociobiology are fields usually associated with cynicism about human nature and skepticism regarding our capacity to change this fundamental nature in order to produce the utopian societies envisaged by Marxists, feminists and other such hopeless idealists.

It is therefore perhaps surprising that several popular science writers formerly known for writing books about evolutionary psychology have recently turned their pens to a very different topic – namely, that of human progress, and, in the process, concluded that, not only is societal progress real, but also that it is likely to continue in the foreseeable future.

Robert Wright, author of The Moral Animal, was the trailblazer back in 1999, with his ambitiously titled Nonzero: The Logic of Human Destiny, which argued that human history (and indeed evolutionary history as well) is characterized by progressive increases in the levels of non-zero-sum interactions, resulting in increased cooperation and prosperity.

Meanwhile, the latest to climb aboard this particular bandwagon is the redoubtable Steven Pinker, whose books The Better Angels of Our Nature, published in 2011, and Enlightenment Now, published seven years later in 2018, both focus on societal progress, the former on supposed declines in levels of violence, the latter being more general in its themes.

Ridley’s The Rational Optimist, first published just a year before Pinker’s The Better Angels of Our Nature, is similarly general in theme, but focuses primarily on improvements in living standards.

Ridley argues that, not only is human progress real, but that it has, a few temporary blips and hiccups apart, occurred throughout virtually the entirety of human history and is in no danger of stalling or slowing down, let alone going into reverse any time soon.

From Futurology to History

For a book whose ostensible theme is optimism regarding the future, Ridley spends an awful lot of his time talking about the past. Thus, most of his book is not about the probability of progress in the future, but rather the certainty of its occurrence during much of our past.

We have a tendency to look back on the past with nostalgia, as a ‘Golden Age’ or ‘Lost Eden’. In reality, however, the life of the vast majority of people in all periods prior to the present was, to adopt the phraseology of Thomas Hobbes, compared to our lives today, ‘nasty, brutish and short’.

As Ridley bluntly observes:

“It is easy to wax elegiac for the life of a peasant when you do not have to use a long-drop toilet” (p12).

Although we all habitually moan about rising prices, in fact, he argues, almost everything worth having has become cheaper, at least when one measures prices, not in dollars, cents or euros (which is misleading because it fails to take into account inflation and other factors), but rather in what Ridley regards as their true cost: namely, the hours of human labour required to fund the purchase.
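Ridley’s preferred metric is easily made concrete. The sketch below (all figures invented for illustration, not Ridley’s own data) simply divides an item’s money price by the prevailing hourly wage, so that ‘cheaper’ means fewer hours of work:

```python
def labour_cost_hours(price, hourly_wage):
    """Ridley-style 'true cost' of a purchase: hours of work needed to afford it."""
    return price / hourly_wage

# Hypothetical illustration: an evening's artificial light in 1900 vs. 2000.
# (Invented figures, chosen only to show the direction of the effect.)
cost_1900 = labour_cost_hours(0.40, 0.10)   # candle light at a $0.10/hour wage: 4 hours of work
cost_2000 = labour_cost_hours(0.05, 20.0)   # electric light at a $20/hour wage: a fraction of a second
```

On this measure, the good has become vastly cheaper even if its nominal price in cents has barely moved, which is precisely Ridley’s point about the inadequacy of money prices.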

Indeed, Ridley claims:

“Even housing has probably gotten cheaper too… The average family house probably costs slightly less today than it did in 1900 or even 1700, despite including far more modern conveniences like electricity, telephone and plumbing” (p20).

Moreover, he insists:

“Housing… is itching to get cheaper, but for confused reasons governments go to great lengths to prevent it” (p25).

In Britain, he protests, the main problem is “planning and zoning laws”: the laws and regulations that prevent developers from simply buying up land and putting up housing estates and tower blocks in much of the countryside and green belt (p25).

Unfortunately, however, Britain is a small island, and the precise places where there is greatest demand for new housing (i.e. the South-East) are already quite densely populated.[1]

Giving developers a free hand to put up new housing estates on what little remains of Britain’s countryside is a strange proposed solution to rising housing prices for someone who, elsewhere in his book, claims to “like wilderness” (p239). It is certainly a policy unlikely to find support among environmentalists, or indeed anyone concerned about protecting what remains of our once ‘green and pleasant land’.

Ridley is certainly right that there is a shortage of available housing in the UK, owing to both:

  1. The greater number of people divorcing or separating or never marrying or cohabiting in the first place and hence requiring separate accommodation; and
  2. A rising population.

Yet, with fertility rates in Britain having been well below replacement level since the 1970s, the increase in population that is occurring is entirely a product of inward migration from overseas.

However, rather than destroying what remains of Britain’s countryside in order to provide additional housing for ever increasing numbers of immigrants, perhaps the more sustainable solution is not more housing, but rather fewer people (see below).

Pollution

Ridley is on firmer ground in claiming, again contrary to popular opinion and environmentalist dogma, that, at least in developed western economies, pollution has actually diminished over the course of the twentieth century.

Smog, for example, was formerly quite common in many British cities, such as London, until as recently as the Sixties, but is now all but unknown in the UK.

Thus, Ridley reports how, in a typical case of media scaremongering:

“In 1970, Life magazine promised its readers that scientists had ‘solid experimental and theoretical evidence’ that ‘within a decade, urban dwellers will have to wear gas masks to survive air pollution … by 1985 air pollution will have reduced the amount of sunlight reaching earth by one half.’ Urban smog and other forms of air pollution refused to follow the script, as technology and regulation rapidly improved air quality” (p304).

On the other hand, however, while air quality may indeed have greatly improved in advanced Western economies such as Western Europe and North America, the direction of change in much of the so-called ‘developing world’ has been very different, precisely because much of the developing world has indeed so rapidly economically developed.

Moreover, a case can be made that improvements in air quality in the West have been possible only because developed Western economies have ‘outsourced’ much of their industrial production, and with it much of their pollution, to developing economies, where labour is cheaper and environmental protection regulations much laxer, and where many of the goods consumed in Western economies are now increasingly manufactured.

This suggests that, while parts of the developing world have indeed imitated the West in industrializing, and hence experiencing declining levels of air quality, they will not be successful at imitating the West in ‘deindustrializing’, and hence improving air quality, unless they too are able to outsource their industrial production to other parts of the ‘developing world’ that have yet to ‘develop’. But, in the end, we will run out of places.

Thus, when I was a child we were taught in school (or perhaps politically propagandized at) about how wonderfully environmentally friendly the communist Chinese were because, instead of driving cars to work, they all rode bicycles, and we were shown remarkable photographs from Chinese cities with hundreds of Chinese people cycling to work during rush-hour.

Now, however, with increasing levels of wealth, industrialization and development, the Chinese have largely abandoned bikes for cars, and Chinese cities seemingly have as big a problem with smog and air quality as Britain did in the early twentieth century. There are similar problems regarding air pollution in many other cities across the developing world, especially in Southeast Asia.

Yet a case can be made that even cars themselves represented an environmental improvement. Before the spread of the much-maligned motor car, a major source of pollution was the waste produced by the form of transport that preceded it: namely, horses.

Thus, in the late nineteenth and early twentieth centuries, the streets of major cities were said to be fast disappearing under rising mountains of horse dung, and the motor car was initially hailed as an “environmental savior” (SuperFreakonomics: p15).

Indeed, automobiles have themselves become less polluting over time.

The removal of lead from fuel is well-known, and may even have contributed to declining levels of violent crime, but Ridley goes further, also claiming, rather remarkably, that:

“Today, a car emits less pollution travelling at full speed than a parked car did in 1970 from leaks” (p17).

However, Ridley’s sources for this claim are rather obscure and difficult to verify.

Elaborating on his source for this claim in a blog post on his website, he cites a book by Johan Norberg, När Människan Skapade Världen, written in Swedish and apparently unavailable in English translation, together with a blog post by Henry Payne, published at National Review, which, in turn, cites an article from the motoring magazine, Autoweek, that does not currently seem to be accessible online.

Moreover, investigating his sources more closely, the reference by Ridley to “a car” from today, and “a parked car” from 1970, seems to mean just that: one particular model from each era (namely, the 1970 and 2010 Ford Mustangs).

Whether this claim generalizes to other models is unclear (see Payne 2010; Ridley 2010).

Blips in History?

Ridley argues that progress has been long-standing, and that even the worst catastrophes in history were at most mere temporary setbacks.

Thus, during the Great Depression, Ridley readily concedes, living standards did indeed decline precipitously. However, he is at pains to emphasize, the Great Depression itself lasted barely a decade, and, once it was over, living standards soon recovered, and thereafter surpassed, even those enjoyed during the Roaring Twenties boom that immediately preceded it.

Ridley also argues against the view, fashionable among anthropologists, that hunter-gatherer cultures represented, in anthropologist Marshall Sahlins’s famous phrase, the ‘original affluent society’, and that the transition to agriculture actually, paradoxically, lowered living standards and reduced available leisure-time.

Indeed, not just the agricultural revolution but also the industrial revolution was, according to Ridley, associated with improved living standards.

The immediate aftermath of the industrial revolution is popularly associated with Dickensian conditions of poverty and child labour. However, according to Ridley, the industrial revolution was actually associated with improvements in living standards, not just for wealthy industrialists, but for society as a whole – indeed, even for what became the urban proletariat.

After all, he explains, the Victorian-era urban proletariat were, for the most part, the descendants of what had formerly been the rural peasantry, and, while the Dickensian conditions under which they lived and laboured in nineteenth century cities may seem spartan to us, for them, they represented a marked improvement. This is why so many so gladly left their rural villages behind for the towns and cities.

On the other hand, however, the conventional view has it that, far from happily leaving rural villages behind because of superior living conditions offered in industrial cities, people were actually forced to leave because jobs were destroyed in the countryside by factors such as enclosure, the mechanization of agriculture and traditional cottage industries being outcompeted and destroyed by more efficient factory production in the cities.

On this view, while living conditions may indeed have been better in the cities than in the countryside at this time, this was only because job opportunities and living standards had declined so steeply in rural areas.

Yet, according to Ridley, the only reason the industrial revolution came to be associated with poverty and squalor was not any decline in living standards, but rather simply that this was the first time activists, campaigners, politicians and authors drew attention to the plight of the poor.

The reason for this change in attitudes was that this was the first time that society was sufficiently wealthy that it could afford to start doing something about the plight of the poor. This rising concern for the poor was therefore itself paradoxically a product of the increasing prosperity that the industrial revolution ushered in (p220).

Past Progress and the Problem of Induction

In a book ostensibly promoting optimism regarding the future, why then does Ridley spend so much time talking about the past?

The essence of his argument seems to be this: given all this improvement in the past, why should we expect the pattern suddenly to cease tomorrow?

Thus, he quotes the Whig historian Macaulay as demanding, back in 1830, that:

“On what principle is it, that when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?” (p11).

Thus, Macaulay concluded:

“We cannot absolutely prove… that those are in error who tell us that society has reached a turning point, that we have seen our best days. But so said all who came before us, and with just as much apparent reason” (p287).

Unfortunately, this argument seems to be vulnerable to what philosophers call the ‘problem of induction’.

In short, the fact that something has long been occurring throughout the past is no reason to believe that it will continue occurring in the future, any more than, to quote a famous example, the fact that all the swans I have seen have thus far proven to be white means that I won’t run into a black swan tomorrow.[2]

In other words, just because previous generations have always invented new technologies that have improved standards of living, or discovered new energy sources before the previously discovered ones have been depleted does not necessarily mean that future generations will be so fortunate.

In the end there might simply be no new technologies to invent or no new energy sources left to be discovered.

Self-Sufficiency vs Exchange

The only threat to continuing improvements in human living conditions across the world, in Ridley’s telling, is misguided governmental interference.

He attacks, in particular, several misguided but fashionable policy proposals.

First in Ridley’s firing line is what we might term the cult of self-sufficiency.

Following Adam Smith, Ridley believes that increasing prosperity is in large part a product of the twin processes of specialization and exchange.

These two processes go hand in hand.

On the one hand, it is only through exchange that we are able to specialize. After all, if we were unable to exchange the product of our own specialist labour for food, clothes and housing, then we would have to farm our own food, and knit our own clothes and construct our own housing.

On the other hand, it is only because of specialization and the increased efficiency of specialists that exchange is so profitable.

Thus, Ridley is much taken with Ricardo’s law of comparative advantage, which he writes has been described as “the only proposition in the whole of the social sciences that is both true and surprising” (p75).
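The surprise in Ricardo’s law is that trade pays even when one party is better at producing everything. A minimal numerical sketch, using the labour figures from Ricardo’s own famous cloth-and-wine example (the code itself is merely illustrative):

```python
# Hours of labour needed to produce one unit of each good
# (Ricardo's original figures for his England/Portugal example).
hours = {
    "England": {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90, "wine": 80},
}

# Portugal is absolutely more efficient at BOTH goods, yet trade still pays.
# Compare opportunity costs: units of cloth forgone to produce one unit of wine.
opp_cost_wine = {country: h["wine"] / h["cloth"] for country, h in hours.items()}

# England forgoes 1.2 units of cloth per unit of wine; Portugal only about 0.89.
# Portugal thus has the comparative advantage in wine, England in cloth,
# and both countries gain by specializing accordingly and exchanging.
```

This is the sense in which the proposition is “true and surprising”: absolute efficiency is irrelevant; only relative (opportunity) costs determine who should specialize in what.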

In contrast, self-sufficiency, whether at the individual or familial level (e.g. living off the land, growing your own food, building your own home, making your own clothes), or at the national level (autarky, protectionism, embargoes, tariffs on imports), is a sure recipe for perpetual poverty.[3]

Thus, making your own clothes now costs more than buying them in a store. Likewise, DIY may (or may not) be a fun and relaxing hobby, but for well-qualified people with high salaries, it may be a more efficient use of time and money to hire a specialist.

Indeed, even the recent much maligned trend towards eating out and buying takeaways instead of cooking for oneself may reflect the same process towards increasing specialization first identified by Adam Smith.

Thus, Ridley, himself a large landowner and the heir to a peerage, observes that:

“You may have no chefs, but you can decide on a whim to choose between scores of nearby bistros, or Italian, Chinese, Japanese or Indian restaurants, in each of which a team of skilled chefs is waiting to serve your family at less than an hour’s notice. Think of this: never before this generation has the average person been able to afford to have somebody else prepare his meals” (p36-7).[4]

Environmentally-Unfriendly ‘Environmentalism’

Other misguided policies skewered by Ridley’s mighty pen include various fashionable environmentalist causes – or, rather, causes which masquerade as environmentally-friendly but are, in practice, as Ridley shows, anything but.

One fad that falls into the latter category is organic farming.

Organic farming is less efficient and more land-intensive than modern farming techniques. It therefore requires more land to be converted to agricultural use, entailing the destruction of yet more rainforest and wilderness, while still producing much less food per acre.

Yet organic farming is not only bad for the environment; it is also especially bad for the poor, since it makes food more expensive, and, since it is the poor who, having less income to spend on luxuries, already spend a greater proportion of their income on food, it is they who will suffer most.

Ridley applies much the same argument to biofuels. Again, these would require the use of more land for farming, depleting the amount of land that can be devoted either to the production of food, or to wildlife, resulting in increasing food prices and decreasing food production, with the global poor suffering the most.

In contrast, genetically modified foods promise to make the production of food cheaper, more efficient and less land intensive. Yet many self-styled environmentalists oppose them.

Why Fossil Fuels are Good for the Environment – and Renewables Bad

Perhaps most controversially, Ridley also argues that renewable energies are, paradoxically, bad for the environment. Again, this is because they are less efficient, and more land-hungry than fossil fuels.

Thus, he reports that to supply the USA alone with its current energy consumption would require:

“Solar panels the size of Spain; or wind farms the size of Kazakhstan; or woodland the size of India and Pakistan; or hayfields for horses the size of Russia and Canada combined; or hydroelectric dams with catchments one third larger than all the continents put together” (p239).

Meanwhile, to provide Britain with its current energy needs without fossil fuels would necessitate:

“Sixty nuclear power stations around the coasts, wind farms… cover[ing] 10 per cent of the entire land (or a big part of the sea)… solar panels covering an area the size of Lincolnshire, eighteen Greater Londons growing bio-fuels, forty-seven New Forests growing fast-rotation harvested timber, hundreds of miles of wave machines off the coast, huge tidal barrages in the Severn estuary and Strangford Lough, and twenty-five times as many hydro dams on rivers as there are today” (p343).

The prospect would hardly appeal to most environmentalists, certainly not to conservationists, since the result would be that:

“The entire country would look like a power station” (p343).

Yet, despite this, “power cuts would be frequent”, since tidal, wind and solar power are all sporadic in the energy they supply, being dependent on weather conditions. Ridley therefore concludes:

“Powering the world with such renewables now is the surest way to spoil the environment” (p343).

In contrast, fossil fuels are much less land hungry relative to the amount of energy they provide.

Therefore, he concludes that, contrary to popular opinion, “fossil fuels have spared much of the landscape from industrialization” and have hence proven an environmental boon (p238).

It is only in respect of solar power that Ridley has rather higher hopes (p345). The sun’s power is indeed immense; we are limited only by our current ability to extract it.

Indeed, besides nuclear power, geothermal power and tidal energy, virtually all of our energy sources derive ultimately from the power of the sun.

The Industrial Revolution, Ridley proposes, was enabled by “shifting from current solar power to stored solar power” – and, since then, progress has involved the extraction of ever older stores of the sun’s power – i.e. timber, peat, coal and lastly oil and gas (p216).

Each development was an improvement on the energy source that preceded it, both in terms of efficiency and environmental impact. To turn once again to relying on more primitive sources of energy would, Ridley argues, be a step backwards in every sense.

How Fossil Fuels Freed the Slaves

Fossil fuels are not only better for the environment, Ridley argues; they are also better for mankind, and not merely in the sense that humans benefit from living in a better environment. There are also other, more direct benefits. Indeed, according to Ridley, it was fossil fuels that ultimately freed the slaves.

Thus, Ridley’s chapter entitled ‘The Release of Slaves’ says refreshingly little about the familiar historical narrative of how puritanical Christian fundamentalist do-gooders and busybodies like William Wilberforce successfully campaigned for the abolition of slavery and thereby spoiled everybody’s fun.

Instead, Ridley shows that it was the adoption of fossil fuels that ultimately made freeing slaves possible by enabling technology to replace human labour – and indeed animal labour as well.

Thus, he reports:

“It would take 150 slaves, working eight-hour shifts each, to pedal you to your current lifestyle. Americans would need 660 slaves… For every family of four… there should be 600 unpaid slaves back home, living in abject poverty: if they had any better lifestyle they would need their own slaves” (p236).

Thus, Ridley concludes:

“It was fossil fuels that eventually made slavery – along with animal power, and wood, wind and water – uneconomic. Wilberforce’s ambition would have been harder to achieve without fossil fuels” (p214).[5]

Will the Oil Run Out?

As for the perennial fear that our demand for fossil fuels will ultimately exceed the supply, Ridley is unconcerned.

Fossil fuels may be non-renewable, he admits, but the potential supplies remain massive. Our only current problem is accessing them, buried ever deeper underground and in ever more inaccessible regions.

However, Ridley maintains that, one way or another, human ingenuity and technological innovation will find a way.

By the time they run short, if they ever do (which they probably won’t), Ridley is confident we will long since have discovered, or invented, a replacement.

In contrast, so-called renewable energy sources, such as wind and water power, while indeed renewable, are nevertheless very limited in the power they supply, or at least in our capacity to extract it. There may indeed be great power in the wind, the waves and the sun, but it is very difficult, and costly, to extract anything more than a small proportion of it.

This is, of course, one reason such technologies as windmills and watermills were largely abandoned in favour of fossil fuels over a century ago.

Many species, Ridley observes, have gone extinct, or are in danger of going extinct. Yet, since species are capable of reproduction, they are, Ridley argues, ‘renewable resources’.

In contrast, he observes:

“There is not a single non-renewable resource that has yet run out: not coal, oil, gas, copper, iron, uranium, silicon or stone… The stone age did not come to an end for lack of stone” (p302).

The Back to Nature Cult and the Naturalistic Fallacy

What then do these misguided fads – self-sufficiency, living off the land, organic food, renewable energies, opposition to GM crops etc. – all have in common?

Although Ridley does not address this, it seems to me that all the misguided policy proposals that Ridley excoriates have one or both of two things in common:

  1. They restrict the free operation of markets; and/or
  2. They seek to restrict new technologies that are somehow perceived as ‘unnatural’.

Thus, many of these misguided fads can be traced to a version of what philosophers call the naturalistic fallacy or, more specifically, the appeal to nature fallacy – namely the belief that, if something is ‘natural’, that necessarily means it is good.

Yet the lives of humans in our natural state, i.e. as nomadic foragers, were, as Hobbes rightly surmised, nasty, brutish and short, at least as compared to our lives today.[6]

Thus, renewable energy sources, biofuels and organic farming all somehow seem more ‘natural’ than burning, mining and drilling for coal, oil and gas.

Likewise, genetically modified crops (aka ‘Frankenstein foods’) seem quintessentially ‘unnatural’, with connotations of eugenics and ‘playing god’.

In fact, however, we have been genetically modifying domesticated species ever since we began domesticating them. Indeed, this is the very definition of domestication.

Moreover, organic farming and so-called renewable energies are not a return to what is ‘natural’ (whatever that means), but simply a return to technologies that were surpassed and rendered obsolete hundreds of years ago.

If anything, returning to what is natural would involve a return to subsisting by hunting and gathering, but not many environmentalists this side of the Unabomber are willing to go that far. Instead, they only want to turn back the clock so far.[7]

Similarly, nuclear power is rejected by most environmentalists primarily because it seems quintessentially ‘unnatural’, and because the very word ‘nuclear’ is, I suspect, associated in the public mind with nuclear weapons, invariably conjuring images of Hiroshima, Nagasaki and the prospect of nuclear apocalypse.[8]

Yet nuclear power is actually much less costly in terms of human lives than, say, coal mines or offshore oil rigs, both of which are extremely dangerous places to work.

Likewise, being self-sufficient and ‘living off the land’ may seem intuitively ‘natural’, in that it is the way our ancestors presumably lived in the stone age.

However, Ridley argues that this is not true, and that humans have been, for the entirety of their existence as a species, voracious traders.

Indeed, he even argues that it is humankind’s appetite for and ability to trade, rather than language or culture, that distinguishes us from the remainder of the animal kingdom (p54-60).[9]

Global Warming

Necessarily, Ridley also addresses perhaps the most popular, and certainly the most politically correct, source of pessimism regarding the future, namely the threat of global warming or climate change.

Identifying climate change as both “by far the most fashionable reason for pessimism” and, together with the prospects (or alleged lack of prospects) for economic development in Africa, as one of the “two great pessimisms of today”, Ridley begins his discussion of this topic by acknowledging that these problems “confront the rational optimist with a challenge, to say the least” and as indeed representing “acute challenges” (p315).

Having made this acknowledgement, however, Ridley spends the remainder of his discussion suggesting that the threat posed by global warming is in fact vastly exaggerated.

Like most so-called global warming skeptics (e.g. Bjørn Lomborg), or at least the more intelligent, knowledgeable ones who are actually worth reading, Ridley is no ‘denier’, in that he denies neither that global warming is occurring nor that it is caused, at least in part, by human activity.

Instead, he simply questions whether the threat posed is as great as it is portrayed as being by activists and scientists.

Thus, he begins his discussion of the topic by pointing out that climate has always changed throughout human history, and indeed prehistory; not to suggest that the changes currently occurring are of the same type or cause (i.e. not man-made), but rather to emphasize that we are more than capable of adapting, and that changes of similar magnitude will not mean the end of the world.

“There were warmer periods in earth’s history in medieval times and about 6,000 years ago… and… humanity and nature survived much faster warming lurches in climate during the ice ages than anything predicted for this century” (p329).[10]

“People move happily from London to Hong Kong or Boston to Miami and do not die from heat, so why should they die if their home city gradually warms by a few degrees?” (p336).

Indeed, far from denying the reality of climate change, Ridley follows former British Chancellor of the Exchequer Nigel Lawson, in his interesting book An Appeal to Reason, in actually, at least hypothetically and for the sake of argument, accepting the projections of the mainstream Intergovernmental Panel on Climate Change (IPCC) regarding future temperature increases.

Yet the key point emphasized by both Lawson and Ridley is that, under all the IPCC’s various projections, increased global warming results from increased carbon emissions, which themselves result from economic growth, particularly in what is today the Developing World.

This means that the projections which anticipate the greatest temperature increases also anticipate the greatest economic growth. In turn, this means not only that the future descendants on whose behalf we are today asked to make sacrifices will be vastly wealthier than we are, but also that any threat posed by rising temperatures is more than offset by rising wealth and prosperity, which will itself provide greater resources with which to tackle the negative effects of global warming.
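The offsetting argument is, at bottom, simple compound arithmetic. The sketch below uses purely hypothetical figures (the growth rate, time horizon and damage fraction are my own illustrative assumptions, not IPCC or Lawson numbers) to show how even modest compound growth swamps a sizable percentage loss from climate damages:

```python
# Back-of-the-envelope sketch with purely hypothetical figures: compound
# income growth versus a fractional loss from climate damages.

def future_income(today: float, growth_rate: float, years: int,
                  damage_fraction: float) -> float:
    """Income per head after compound growth, less a fractional climate loss."""
    grown = today * (1 + growth_rate) ** years
    return grown * (1 - damage_fraction)

# Suppose incomes grow 2% a year for a century, after which warming wipes
# out a tenth of output. Our descendants would still be roughly 6.5 times
# richer than we are today:
print(round(future_income(1.0, 0.02, 100, 0.10), 2))
```

On these (hypothetical) numbers, the generation asked to bear the damages is far wealthier than the generation asked to make the sacrifices, which is precisely the point Lawson and Ridley press.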

Thus, with regard to rising sea levels for example, one of the most often cited threats said to result from global warming, it is notable that much of the Netherlands would be underwater at high tide were it not for land reclamation (e.g. the building of dykes and pumps).

Much of this successful land reclamation in the Netherlands occurred in previous centuries, when the technologies and resources available were much more limited. In the future, with increased prosperity and advances in technology, our ability to cope with rising sea levels will be even greater.

Ridley also points out that there are likely to be benefits associated with global warming as well as problems.

For example, he cites data showing that, all around the world, more people actually die from extreme cold than from extreme heat (Zhao et al 2021).

“Globally the number of excess deaths during cold weather continues to exceed the number of excess deaths during heat waves by a large margin – by about five to one in most of Europe” (p335).

This suggests that global warming may actually save lives overall, especially since it is anticipated to moderate cold temperatures to a greater extent than it raises warm ones, producing a greater reduction in extreme cold than any increase in extreme heat.

Thus, Lomborg reports:

“Global warming increases cold temperatures much more than warm temperatures, thus it increases night and winter temperatures much more than day and summer temperatures… Likewise, global warming increases temperatures in temperate and Arctic regions much more than in tropical areas” (Cool it: p12).

Indeed, with regard to food supply and farm yields, Ridley concludes:

“The global food supply will probably increase if temperature rises by up to 3°C. Not only will the warmth improve yields from cold lands and the rainfall improve yields from some dry lands, but the increased carbon dioxide will itself enhance yields, especially in dry areas… Less habitat will probably be lost to farming in a warmer world” (p337).[11]

Finally, Ridley concludes by reporting:

“Economists estimate that a dollar spent on mitigating climate change brings ninety cents of benefits compared with $20 benefits per dollar spent on healthcare and $16 per dollar spent on hunger” (p388).

Actually, however, judging by Ridley’s own associated endnote, this is not the conclusion of “economists” in general at all, but rather of one particular writer who is not an economist either by training or profession – namely, leading climate change skeptic Bjørn Lomborg.

Overpopulation?

Though conveniently left off the agenda of most modern mainstream environmentalists, a strong case can be made that overpopulation represents the ultimate and most fundamental environmental issue. Other environmental problems are strictly secondary, since the very reason we wreak environmental damage is to provide for the increasing demands of a growing population.

Thus, concerned do-gooders who seek to lower their carbon footprints by cycling to work every day would arguably do better to simply forgo reproduction, since, by having children, they do not so much increase their own carbon footprint, as create another whole person complete with a carbon footprint all of their own.

However, in recent decades, talk of overpopulation has become politically-incorrect and taboo, because restricting reproductive rights seems redolent of eugenics and forced sterilizations, which are now, for entirely wrongheaded reasons, regarded as a bad thing.

Moreover, since population growth is now occurring largely among non-whites, especially black Africans, with whites themselves (and many other groups, not least East Asians) reproducing at well below replacement levels and fast being demographically displaced by nonwhites, even in their own indigenous ethnic homelands, it also has the faint whiff of racism and eugenics, making it especially politically incorrect.

Overpopulation has thus become ‘the environmental issue that dare not speak its name’.

Ridley concludes, however, that overpopulation is not a major concern because it handily solves itself through a well-documented phenomenon known to demographers as the demographic transition, whereby increasing economic development is curiously accompanied by a decline in fertility.

There are, however, several problems with this rather convenient conclusion.

For one thing, while fertility rates have indeed fallen precipitously in developed economies in recent decades in concert with economic growth, no one really fully understands why this is happening.

Indeed, Ridley himself admits that it is “mysterious”, “an unexplained phenomenon” and that “demographic transition theory is a splendidly confused field” (p207).

Indeed, from an evolutionary psychological, sociobiological or Darwinian perspective, the so-called demographic transition is especially paradoxical since it is elementary Darwinism 101 that organisms should respond to resource abundance by channeling the additional resources into increased rates of reproduction so as to maximize their Darwinian fitness.

Although Ridley admits that the reasons behind this phenomenon are not fully understood, he identifies factors such as increased urbanization, female education and reduced infant mortality as the likely causal factors.[12]

However, uncertainty as to its causes does not dampen Ridley’s conviction that the phenomenon is universal and will soon be replicated in the so-called ‘developing world’ just as surely as it occurred in ‘developed economies’.

Yet, with the stakes potentially so high, can we really place such confidence in the continuation, and global spread, of a process whose causes remain so little understood?

The second problem with seeing the demographic transition as a simple, hands-off, laissez-faire solution to overpopulation is that the observed association between economic development and fertility is much more complex than Ridley makes out.

Thus, as we have seen, according to Ridley, living standards have been rising throughout pretty much the entirety of recorded history, and indeed prehistory. However, the below replacement level fertility rates observed in most developed economies date only to the latter half of the twentieth century. Indeed, even as recently as the immediate post-war decades in the middle of the twentieth century, there was a famous baby boom.

Until then, fertility rates had indeed already been in decline for some time. However, this decline was more than offset by massive reductions in the levels of infant mortality owing to improved health, nutrition and sanitation, such that industrialization and improved living standards were actually, until very recently, accompanied by massive increases, not decreases, in population-size.
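The arithmetic behind this point can be made explicit with a toy calculation (the figures are purely illustrative, not drawn from Ridley or any demographic dataset): what drives population growth is not births per woman but surviving children per woman, so a modest fall in fertility can be swamped by a steep fall in child mortality:

```python
# Toy illustration (numbers purely hypothetical): population growth tracks
# surviving children per woman, not births per woman, so falling fertility
# can be more than offset by falling child mortality.

def surviving_children(births_per_woman: float, child_mortality: float) -> float:
    """Average children per woman surviving to reproductive age."""
    return births_per_woman * (1 - child_mortality)

# Pre-industrial regime: six births, but 40% die young -> 3.6 survivors.
print(round(surviving_children(6, 0.40), 1))  # 3.6
# Early industrial regime: fertility falls to five, but mortality collapses
# to 10% -> 4.5 survivors. Fertility fell, yet effective reproduction rose.
print(round(surviving_children(5, 0.10), 1))  # 4.5
```

On these hypothetical numbers, a society whose fertility is falling can still be growing faster than ever, which is exactly the pattern industrializing countries exhibited.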

Given that much of the so-called ‘developing world’, especially in Africa, is obviously at a much earlier stage of development than the contemporary west, we may still expect many more decades of population growth in Africa before any reductions eventually set in, if indeed they ever do.

Finally, this assumption that decreased fertility will inevitably accompany economic growth in the ‘developing world’ itself presupposes that the entirety of the so-called ‘developing world’ will indeed experience economic growth and development.

This is by no means a foregone conclusion. Indeed, the very term ‘developing world’, presupposing as it does that these parts of the world will indeed ‘develop’, may turn out to be a case of wishful thinking.

Africa, Aid and Development

This leads to a related issue: if Ridley’s conclusions regarding overpopulation strike me as overly optimistic, then his prognosis for Africa seems similarly naïve, if not borderline utopian.

Critiquing international aid programmes as having failed to bring about economic development and even as representing part of the problem, Ridley instead implicates various factors as responsible for Africa’s perceived ‘underdevelopment’. Primary among these is a lack of recognition given to property rights, which, he observes, deters both investment and the saving of resources necessary for economic growth.

Yet, Ridley insists, entrepreneurialism is rife in Africa, just waiting for the economic infrastructure (e.g. secure property rights) necessary to encourage it and harness it to the general welfare.

Certainly Ridley is right that there is nothing intrinsic to the African soil or air that prevents economic development as has occurred elsewhere in the world.

However, Ridley fails to explain why the factors that he implicates as holding Africa back (e.g. corrupt government, lack of property rights) are seemingly so endemic throughout much of Africa but not elsewhere in the world.

Neither does he explain why similar problems (e.g. high rates of violent crime, poverty) also beset, not just Africa itself, but also other parts of the world composed of people of predominantly sub-Saharan African ancestry, from Haiti and Jamaica to Baltimore and Detroit.

This, of course, suggests the politically-incorrect possibility that the perceived ‘underdevelopment’ of much of sub-Saharan Africa simply reflects something innate in the psychology of the indigenous people of that continent.

Immigration and Overpopulation

Yet, if Africa does not develop, then it presumably will not undergo the demographic transition either, since the latter, whatever its proximate explanation, seems to be dependent on economic growth and modernization.

This would mean that population in Africa would continue to grow, and, as population growth stalls, or even goes into reverse, in the developed world, people of sub-Saharan African descent will come to constitute an ever-increasing proportion of the world population.

Of course, population growth in a ‘developing world’ that fails to ‘develop’ is, from a purely environmental perspective, less worrisome, since living standards are lower and hence the environmental impact, and carbon footprint, of each additional person is lower.

However, mass immigration into western economies means that African populations, and populations from elsewhere in the developing world, are fast being imported into Europe, North America and elsewhere, and fast becoming acclimatized to western living standards. Moreover, these immigrant populations are younger on average, and hence have more of their reproductive careers ahead of them, and often retain higher fertility levels than the indigenous population for several generations after migrating.

Thus, open-door immigration policies are transforming a Third World overpopulation problem into a problem for developed economies too, with all the environmental problems this brings with it (see Hardin, Lifeboat Ethics).

The result is that white Europeans will soon find themselves minorities even in their own indigenous European homelands, effectively becoming stateless nations without a country to call their own.

Of course, we are repeatedly reassured that this is not a problem, and that anyone who suggests it might be a problem is a loathsome racist, since immigrant communities and their descendants will, of course, undoubtedly successfully integrate into western culture and become westerners.

History, however, suggests that this is unlikely to be the case.

On the contrary, the assimilation of racially distinct immigrants has proven, at best, a difficult process.

Thus, in America, successive waves of European-descended immigrants (Irish, Poles, Italians, Jews) have largely successfully assimilated into mainstream American society and lost most of their cultural uniqueness. However, African-Americans remain very much a separate community, with their own neighbourhoods, dialect and culture, despite their ancestors having been resident in the USA longer than any of these European descended newcomers, and longer even than many of the so-called ‘Anglos’.

This cannot be attributed to the unique historical experience of the African diaspora population in America (i.e. slavery, segregation etc.), since the experience of European polities in assimilating, or attempting to assimilate, nonwhite immigrant communities in the post-war period has proved similar.

Thus, quite apart from the environmental impact of a rising population with First World living standards and carbon footprints to match, to which I have already alluded, various problems are likely to result from the demographic transformation of the west, which may threaten the very survival of western civilization, at least in the form in which we have hitherto known it.

After all, civilizations and cultures are ultimately the product of the people who compose them. A Europe composed increasingly of Muslims will no longer be a western civilization but rather, in all likelihood, a Muslim one.

Meanwhile, other peoples have arguably failed to independently found civilizations of any type sufficient to warrant the designation ‘civilization’, or arguably even to maintain the advanced civilizations bequeathed to them, as the post-colonial experience in much of sub-Saharan Africa well illustrates.

Yet it is, as we have seen, these peoples who will, on current projections, come to constitute an increasing proportion of the world population, and hence presumably of immigrants to the west as well, over the course of the coming century.

This suggests that western civilization may not survive the replacement of its founding stock.[13]

Moreover, increasing ethnic diversity will also likely bring other problems in its train, in particular the sort of ethnic conflict that seemingly inevitably besets multiethnic polities.

Thus, multiethnic states – from Lebanon and the former Yugoslavia to Rwanda and Northern Ireland – have frequently been beset by interethnic conflict and violence, and even those multiethnic polities whose divisions have yet to result in outright violence and civil war (e.g. Belgium) remain very much divided states.[14]

In transforming what were formerly monoracial, if not monoethnic, states into multiracial states, European elites are seemingly voluntarily importing the exact same sorts of ethnic conflict into their own societies.

On this view, the Muslim terrorist attacks, and various race riots, which various European countries have experienced in recent decades may prove an early foretaste and harbinger of problems to come.

In addition, while western populations are currently undergoing a radical transformation in their racial and ethnic composition, these problems are only exacerbated by dysgenic fertility patterns among white westerners themselves, whereby it is those women with the traits least conducive to maintaining an advanced technological civilization (e.g. low intelligence, conscientiousness and work ethic) who are, on average, the most fecund, and who hence disproportionately bequeath both their genes and their parenting techniques to the next generation. Meanwhile, improved medical care increasingly facilitates the survival and reproduction of those among the sick and ill who would otherwise have been weeded out by selection.[15]

However, besides a few paragraphs dismissing and deriding the apocalyptic prognoses of early twentieth century eugenicists (p288), these are rational, if politically incorrect, reasons for pessimism that Ridley, the self-styled rational optimist, evidently does not deign – or perhaps dare – to discuss.

The Perverse Appeal of the Apocalypse

Ridley is right to observe that tales of imminent apocalypse have proven paradoxically popular throughout history.

Indeed, despite being only barely an Xennial and having lived most of my life in Britain, I have nevertheless already been fortunate enough to survive several widely-prophesied apocalypses, from a Cold War nuclear apocalypse and the ‘millennium bug’ to widely anticipated epidemics of BSE, HIV, SARS, bird flu, swine flu and the coronavirus, all of which proved damp squibs.

Yet prophesying imminent apocalypse is, on reflection, a rather odd prediction to make. It is like making a bet you cannot win: if you are right, then everyone dies, and nobody is around to congratulate you on your far-seeing prescience – and neither, in all probability, are you.

It is rather like betting on your own death (i.e. paying for life insurance). If you win (if you could call it ‘winning’), then, by definition, you will not be around to collect your winnings.

Why then are stories about the coming apocalypse so paradoxically popular? After all, no one surely relishes the prospect of imminent Armageddon.

One reason is that catastrophism sells. Scare-story headlines about imminent disaster sell more newspapers to anxious readers (or, in contemporary formulation, attract more clicks) than do headlines berating us for how good we have it.

Activist groups also have an incentive to exaggerate the scale of problems in order to attract funding. The same is true even of scientists, who likewise have every incentive to exaggerate the scale of the problems they are investigating (e.g. climate change), or at least neglect to correct the inflated claims of activists, in order to attract research funding.

Yet I suspect the paradoxical human appetite for pessimism is rooted ultimately in what psychiatrist Randolph Nesse refers to, in a medical context, as ‘the Smoke Detector Principle’ – namely the observation that, when it comes to potential apocalypses, since false positives are less costly than false negatives, it is wise to err on the side of caution and prepare for the worst, just in case.
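Nesse’s Smoke Detector Principle is, at heart, a point about asymmetric costs, and can be captured in a toy expected-cost calculation (all figures hypothetical, not Nesse’s own): when missing a real threat is vastly costlier than a needless alarm, the cost-minimizing threshold for sounding the alarm is set very low, so most alarms, quite rationally, turn out to be false:

```python
# Toy expected-cost model of the Smoke Detector Principle (figures hypothetical).
# Sounding a needless alarm is cheap; ignoring a real threat is catastrophic,
# so it pays to sound the alarm even when the threat is improbable.

COST_FALSE_ALARM = 1       # nuisance cost of responding to a non-threat
COST_MISSED_THREAT = 1000  # cost of ignoring a threat that was real

def should_alarm(p_threat: float) -> bool:
    """Sound the alarm iff its expected cost beats staying silent."""
    expected_cost_alarm = (1 - p_threat) * COST_FALSE_ALARM
    expected_cost_silence = p_threat * COST_MISSED_THREAT
    return expected_cost_alarm < expected_cost_silence

# Even a 1% chance of a real threat justifies sounding the alarm:
print(should_alarm(0.01))    # True
# Only below roughly 1 in 1,000 does silence become the cheaper bet:
print(should_alarm(0.0005))  # False
```

With these costs the break-even probability is 1/1001, so the alarm sounds even when a false alarm is overwhelmingly likely – which is why a taste for doom-mongering can be adaptive.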

Our penchant for apocalypses may even have religious roots.

Belief in the imminence of the end time is a pervasive religious belief.

Thus, the early Christians, including in all probability Jesus himself (so historians speculate), believed that Judgement Day would occur within their own lifetimes.

Later on, Jehovah’s Witnesses believed the same thing, and actually set a date, or rather a succession of dates, rescheduling the apocalypse each time the much-heralded end time, like a British Rail train in the 1980s, invariably and inconsiderately failed to arrive on schedule.

The same is true of countless other apocalyptic millenarian religious cults scattered across history.

Interestingly, former British Chancellor of the Exchequer Nigel Lawson suggests that the scare over global warming reflects an ancient religious belief translated into the language of ostensibly secular modern science.

Thus, he observes, throughout history, God’s vengeance on the people for their sins has been conceived of as occurring through the medium of the weather (e.g. storms, floods, lightning bolts):

“Throughout the ages… the weather has been an important part of the religious narrative. In primitive societies it was customary for extreme weather events to be explained as punishment from the gods for the sins of the people; and there is no shortage of examples of this theme in the Bible, either, particularly, but not exclusively, in the Old Testament” (An Appeal to Reason: A Cool Look at Global Warming: p102-3).

Thus, Lawson concludes that, with the decline of traditional religion:

“It is the quasi-religion of green alarmism and what has been well described as global salvationism… which has filled the vacuum, with reasoned questioning of its mantras regarded as little short of sacrilege” (An Appeal to Reason: p102).

In doing so, climate change alarmism has also filled the vacuum left by another substitute religion for the ostensibly secular, one that, like Christianity itself, now appears to be in its death throes, and that brought only suffering and destruction in its wake: namely, Marxism.

Thus, Lawson observes:

“With the collapse of Marxism, and to all intents and purposes of other forms of socialism too, those who dislike capitalism, not least on the global scale, and its foremost exemplar, the United States, with equal passion, have been obliged to find a new creed. For many of them, green is the new red” (An Appeal to Reason: p101).

Global warming alarmism thus provides an ostensibly secular and scientific substitute for eschatology for the resolutely irreligious.

The Cult of Progress

On the other hand, Ridley surely exaggerates the ubiquity of pessimism.

While there is indeed a market for gloom-mongering prophets of doom, belief in the reality, and the inevitability, of social, economic, political and moral progress is also pervasive, especially (but not exclusively) on the political left.

Thus, Marxists have long held that the coming of communist utopia is not just desirable but wholly inevitable, if not just around the corner, as a necessary resolution of the contradictions allegedly inherent in capitalism, as Marx himself purported to have proven scientifically.

This belief too may have religious roots. The Marxist belief that we pass into communist utopia (i.e. heaven on earth) after the revolution may reflect a perversion of the Christian belief that we pass into heaven after death and the Apocalypse. Thus, Marxism is, as Raymond Aron famously put it, “the opium of the intellectuals”.

Nowadays, though Marx has belatedly fallen from favour, leftists retain their belief in the inevitability of social and political progress. Indeed, they have even taken to referring to themselves as ‘progressives’ and dismissing anyone who does not agree with them as being ‘on the wrong side of history’.

On this view, the process of liberation began with the abolition of slavery, continued with the defeat of Nazi Germany and the granting of independence to former European colonies, proceeded onwards with the so-called civil rights movement in the USA in the 1950s and 60s, then successively degenerated into so-called women’s liberation, feminism, gay rights, gay pride, disabled rights, animal rights, transsexual rights etc.

Quite where this process will lead next, no one, least of all leftists themselves, seems very sure. Indeed, one suspects they dare not speculate.

Yesterday’s reductio ad absurdum of what was, in Britain, once dismissed as ‘loony leftism’, the prospect of which everyone, just a few decades earlier, would have dismissed as preposterous scaremongering, is today’s reality, tomorrow’s mainstream, and the day after tomorrow’s relentlessly policed dogma and new orthodoxy. The recent furores over, first, gay marriage and, now, transsexual bathroom rights are very much cases in point.

Yet the pervasive faith in progress is by no means restricted to the left. On the contrary, as the disastrous invasions and occupations of Iraq and Afghanistan proved all too well, neoconservatives believe that Islamic tribal societies, and former Soviet republics, can be transformed into capitalist liberal democracies just as surely as unreconstructed Marxists once believed (and, in some cases, still do believe) that Islamic tribal societies and capitalist liberal democracies would themselves inevitably give way to communism.

Indeed, neoconservative political scientist Francis Fukuyama arguably went even further than Marx: whereas Marx merely prophesied the coming end of history, Fukuyama insisted it had already occurred, and, in so doing, became instantly famous for being proven almost instantly wrong.

Meanwhile, free market libertarians like Ridley himself believe that Western-style economic development, industrialization and prosperity can come to Africa just as surely as it came to Europe and East Asia.

Indeed, even Hitler was a believer in progress and utopia, his own envisaged Thousand Year Reich being almost as hopelessly utopian and unrealistic as the communist society envisaged by the Marxists.

Marx thought progress involved taking the means of production into common ownership; Thatcher thought that progress involved privatizing public utilities; Hitler thought progress involved eliminating allegedly inferior races.

In short, left and right agree on the inevitability of progress. Both are, in this sense, ‘progressives’. They differ only on what they believe ‘progress’ entails.

Scientific and Political Progress

In conclusion, I agree with Ridley that scientific and technological advances will continue inexorably.

Scientific and technological progress is indeed inevitable and unstoppable. Any state or person that unilaterally renounces modern technologies will be outcompeted, both economically and militarily, and ultimately displaced, by those who wisely opt to do otherwise.

However, although technology improves, the uses to which technologies are put will remain the same, since human nature itself remains so stubbornly intransigent.

Thus, as philosopher John Gray writes in Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here):

“Though human knowledge will very likely continue to grow and with it human power, the human animal will stay the same: a highly inventive animal that is also one of the most predatory and destructive” (Straw Dogs: p28, p4).

The inevitable result, Gray concludes, is that:

“Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war” (Straw Dogs: p123).



[1] Of course, the reason that there is a demand for additional housing in the UK is that the population of the country is rising, and population is rising entirely as a consequence of immigration, since the settled population of Britain actually reproduces at well below replacement levels. The topic of immigration is one to which I return in this review (see above). Another factor is increasing proportions of people living alone, due to reduced levels of marriage and cohabitation, and increased rates of divorce and separation.

[2] This example is said to be historical, and not purely hypothetical. Thus, the idea that black swans did not exist was widely believed in Europe for centuries, supposedly originating with the ancient Roman poet Juvenal’s Satire VI in the late first or early second century AD. Yet this conventional wisdom was overturned when Dutch explorer Willem de Vlamingh sighted a black swan in Australia, to which continent the species is indigenous, in January 1697.

[3] Given his trenchant opposition to autarky, protectionism and tariffs, and support for free trade, it is interesting to note that Ridley was nevertheless a supporter of Brexit, despite the fact that promoting trade, competition and the free movement of goods, services and workers across international borders was a fundamental objective of European integration.
Presumably, like many Eurosceptics, Ridley believed that integration in the EU had gone well beyond this sort of purely economic integration (i.e. a common market), as indeed it has, and that the benefits of continued membership of the EU, in terms of the free movement of goods, services, labour and capital, were outweighed by the negatives.
However, it ought to be pointed out that European integration was, from its post-war inception, never purely economic. Indeed, economic integration necessarily entails some loss of political sovereignty, since economic policy is itself an aspect of politics.

[4] I agree with Ridley that free trade is indeed beneficial, and tariffs and protectionism counterproductive, at least in purely economic terms. However, I believe that there is a case for retaining some degree of self-sufficiency at the national level (i.e. autarky), so that, in the event that international trade breaks down, for example during wartime, the population is nevertheless able to subsist and maintain itself. Today, we in the west tend to see the prospect of a war that would affect us in this way as remote. This, however, may prove to be naïve.
Perhaps, analogously, a similar case can be made for maintaining some ability to ‘live off the land’ and, if necessary, become self-sufficient at the individual level (e.g. by hunting, fishing and growing your own crops), so as to prepare for the unlikely circumstance of the domestic economic system breaking down, whether due to natural disaster, civil war or foreign invasion. This is, of course, the objective of so-called survivalists.

[5] Although slavery may indeed eventually have become “uneconomic”, as claimed by Ridley, thanks to fossil fuels, this is not, contrary to the implication of the quoted passage, the reason slavery was abolished in the nineteenth century, although it is indeed true that, at the time, many economists claimed that it would be cheaper simply to pay slaves rather than incur the expense of forcibly enslaving and effectively imprisoning them, with all the costs this entailed. In fact, however, on the abolition of slavery in the British Empire, former slaves were unwilling to work in the horrendous conditions on the sugar plantations of the Caribbean, preferring to eke out an existence through subsistence farming, and the plantations themselves became “uneconomic” until indentured labourers (slaves in all but name) were imported from Asia to take the place of the freed slaves.

[6] Despite pervasive myths of ‘noble savages’ existing in benign harmony with both nature and one another, the ‘nastiness’ and ‘brutishness’ of primitive premodern humans is beyond dispute. Indeed, even the !Kung San bushmen of the Kalahari Desert in Southern Africa, long extolled by anthropologists as ‘the gentle people’ and ‘the harmless people’, actually “have a murder rate higher than that of American inner cities” (The Blank Slate: p56). Thus, Steven Pinker reports:

“The !Kung San of the Kalahari Desert are often held out as a relatively peaceful people, and so they are, compared with other foragers: their murder rate is only as high as Detroit’s” (How the Mind Works: p51).

However, if the life of early man was indeed ‘nasty and brutish’, the ‘shortness’ of the lives of premodern peoples is sometimes exaggerated. Thus, it is often claimed that, prior to recent times, the average lifespan was only about thirty years of age. In fact, however, this is misleading, and largely reflects the much higher rates of infant mortality.
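The point about infant mortality can be made with a toy calculation (the figures below are purely illustrative assumptions, not historical estimates): if a large share of the population dies in infancy, the mean age at death can hover around thirty even though those who survive childhood routinely live into their fifties and beyond.

```python
# Toy calculation (illustrative figures only, not historical estimates):
# high infant mortality drags down the *average* lifespan even when
# those who survive childhood routinely live to a ripe old age.

def mean_age_at_death(infant_mortality, infant_death_age, adult_lifespan):
    """Population mean age at death, splitting the population into
    those dying in infancy and those reaching a typical adult lifespan."""
    return (infant_mortality * infant_death_age
            + (1 - infant_mortality) * adult_lifespan)

# Suppose 40% die in infancy (age ~0) and the rest live to 55:
# the 'average lifespan' works out at around 33, even though a majority
# of the population actually reaches 55.
print(mean_age_at_death(0.40, 0, 55))  # ~33
```

The average, in other words, tells us more about deaths at the start of life than about how long adults actually lived.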

[7] In fact, a return to a foraging lifestyle would not be ‘natural’ for most humans, since most humans are now to some extent evolutionarily adapted to agriculture, and some may even have become adapted to the industrial and post-industrial societies in which many of us now live. The prospect of returning to what is ‘natural’ is, then, simply impossible, because there is no such thing in the first place. Though evolutionary psychologists like to talk about the environment of evolutionary adaptedness, this is, in truth, a composite of environments, not a single time or place any researcher could identify and visit with the aid merely of a compass, a research grant and a time machine.

[8] The comically villainous Mr Burns in the hugely popular animated cartoon ‘The Simpsons’ both illustrates and reinforces the general perception of nuclear power in the western world. Of course, no doubt many wealthy businessmen and investors do indeed make large amounts of money out of nuclear energy. But many wealthy businessmen and investors also make large amounts of money investing in renewable energies.

[9] In fact, of course, there is not one single factor that distinguishes us from other animals – there are many such things, albeit mostly differences of degree rather than of kind.

[10] While Ridley may be right that “nature” as a whole “survived much faster warming lurches in climate during the ice ages than anything predicted for this century”, many individual species did not. On the contrary, many species are thought to have gone extinct during these historical shifts between ice ages and interglacials.
Humans have indeed proven resilient in surviving in many different climates around the world. However, this is largely on account of our cultural inventiveness. Thus, on migrating to colder climates, we are able to keep warm by making fire, wearing clothes and building shelter, rather than having to gradually evolve thicker fur and other physiological adaptations to cold as other animals must do. Other animals lack this adaptability.
Therefore, if our concern goes beyond the human species alone, perhaps we should be concerned about such fluctuations in temperature. On the other hand, however, it is almost certainly the destiny of all species, humans very much included, to ultimately go extinct, or at least evolve into something new.

[11] More specifically, at least according to Bjørn Lomborg in his book Cool It, global warming will reduce farm yields and agricultural output in Africa and other tropical regions, but increase farm yields in Europe and other temperate zones, and the increases in the latter will be more than sufficient to offset the reduced agricultural output in Africa and the tropics.

[12] Many of the explanations frequently offered for the decline in fertility rates in the west do not hold up to analysis. For example, many authorities credit (or sometimes blame) feminism, or the increase in female labour force participation, for the development. However, this theory seems to be falsified by the fact that fertility rates are even lower in countries such as Japan and South Korea, where rates of female labour force participation, and of feminist ideology, seem to have been much lower, at least until very recently.
My own favoured theory for the demographic transition, not mentioned by Ridley, implicates the greater availability of effective contraception technologies. Effective and widely available contraceptive technologies represent a recent invention, and hence an ‘evolutionary novelty’, that our species has not yet had sufficient time to evolve psychological mechanisms to deal with effectively.
The problem with testing this theory is that, until recently, many forms of contraception were illegal in many jurisdictions, and also taboo, and therefore use was often covert and surreptitious, such that it is difficult to gauge just how widely available and widely used various contraceptive technologies actually were.
However, some evidence in support of this theory is provided by the decline in fertility rates in countries such as the US and UK. Thus, in the US, the baby boom reached its peak, and thenceforth began a steep decline, in 1960, at exactly the same time as the contraceptive pill first came on the market. In Britain, the availability of the pill was initially quite restricted and, perhaps partly as a consequence, fertility rates peaked, and the downward trend began, somewhat later.
However, looking at the overall trends in fertility rates over time, the availability of contraception certainly cannot be the sole explanation for the changes observed.

[13] In fact, the survival of western civilization, and the form it may come to take, may depend, in part, upon which peoples and ethnicities western populations come to be predominantly replaced by.
Thus, it is often claimed by immigration restrictionists, especially those of a racialist bent, that immigrants from developing economies invariably recreate in the host nation to which they migrate the same problems that beset the country they left behind, often, ironically, the very factors (e.g. poverty, corruption) that motivated them to leave this previous homeland behind.
In fact, however, this is not always true. For example, though heirs to among the oldest and greatest civilizations of the ancient world, both India and China are, today, despite recent economic growth, still relatively poor countries, at least as compared to countries in the West. Yet, paradoxically, people of Indian and Chinese ancestry resident in the West (and indeed in other parts of the world as well) tend to be disproportionately wealthy, substantially wealthier, on average, than the white western populations among whom they live.
However, Chinese and Indian populations resident in the west also seem to have low birth rates, as does China itself, while the fertility rate in India, though still just around replacement levels in the latest available data, seems to be in free fall. In short, for better or worse, it appears that the future is African or, as increasing numbers of Africans migrate abroad, at least of African descent.

[14] For example, much is made of the success of the peace process, and subsequent settlement, in bringing relative peace to Northern Ireland. Yet Northern Ireland nevertheless remains, today, very much a divided society, in which ethnic tensions simmer below the surface, and no one would hold it up as a good example of a united, cohesive, functional polity, let alone as an example that any but the most conflict-ridden and divided of polities should ever seek to emulate.

[15] Of course, concerns regarding overpopulation, which I have discussed earlier in this piece, will only exacerbate dysgenic fertility patterns, since it is only those with high levels of altruism who even care about the problems posed for future generations by overpopulation, and only those with high levels of self-control who will be able to act on these concerns by restricting their fertility. Yet altruism and self-control are both socially desirable traits that we would wish to impart to future generations, and both are partly heritable.

Mental Illness, Medicine, Malingering and Morality: The Myth of Mental Illness vs The Myth of Free Will

Thomas Szasz, Psychiatry: The Science of Lies (New York: Syracuse University Press, 2008)

The notion that psychiatric conditions, including schizophrenia, ADHD, depression, alcoholism and gambling addiction, are all illnesses ‘just like any other disease’ (i.e. just like smallpox, malaria or the flu) is obvious nonsense. 

Just as political pressure led to the reclassification of homosexuality as, not a mental illness, but a normal variation of human sexuality, so a similar campaign is currently underway in respect of gender dysphoria. Today, if someone is under the delusion that they are a member of the opposite sex, we pander to the delusion and provide them with hormone therapy, hormone blockers and sex reassignment surgery. It is as if, where a patient suffers from the delusion that they are Napoleon, then, instead of treating them for this delusion, we instead provide them with legions of troops with which to invade Prussia.

If indeed these conditions are to be called ‘diseases’, which, of course, depends on how we define ‘disease’, they are clearly diseases very much unlike the infections of pathogens with which we usually associate the word ‘disease’. 

For this reason, I had long meant to read the work of Thomas Szasz, a psychiatrist whose famous (or perhaps infamous) paper, The Myth of Mental Illness (Szasz 1960), and the book of the same title, questioned the concept of mental illness and, in the process, rocked the very foundations of psychiatry when first published in the 1960s. I was, moreover, as the preceding two paragraphs would suggest, in principle open, even sympathetic, to what I understood to be its central thesis.

Eventually, I got around to reading instead Psychiatry: The Science of Lies, a more recent, and hence, I not unreasonably imagined, more up-to-date, work of Szasz’s on the same topic.[1]

I found that Szasz does indeed marshal many powerful arguments against what is sometimes called the ‘disease model’ of mental health.

Unfortunately, however, the paradigm with which he proposes to replace this model, namely a moralistic one based on the notion of ‘malingering’ and the concept of free will, is even more problematic, and less scientific, than the disease model that he proposes to do away with.  

Physiological Basis of Illness 

For Szasz, mental illness is simply a metaphor that has come to be taken altogether too literally. 

“Mental illness is a metaphorical disease; that, in other words, bodily illness stands in the same relation to mental illness as a defective television stands to an objectionable television programme. To be sure, the word ‘sick’ is often used metaphorically… but only when we call minds ‘sick’ do we systematically mistake metaphor for fact; and send a doctor to ‘cure’ the ‘illness’. It’s as if a television viewer were to send for a TV repairman because he disapproves of the programme he is watching” (Myth of Mental Illness: p11).

But what is a disease? What we habitually refer to as diseases are actually quite diverse in aetiology. 

Perhaps the paradigmatic disease is an infection. Thus, modern medicine began with, and much of modern medicine is still based on, the so-called ‘germ theory of disease’, which assumes that what we refer to as disease is caused by the effects of germs or ‘pathogens’ – i.e. microscopic parasites (e.g. bacteria, viruses), which inhabit and pass between human and animal hosts, causing the symptoms by which disease is diagnosed as part of their own life-cycle and evolutionary strategy.[2]

However, this model seemingly has little to offer psychiatry. 

Perhaps some mental illnesses are indeed caused by infections. 

Indeed, physicist-turned-anthropologist Gregory Cochran even controversially contends that homosexuality (which is no longer considered by psychiatrists to be a mental illness, despite its obviously biologically maladaptive effects – see below) may be caused by a virus.

However, this is surely not true of the vast majority of what we term ‘mental illnesses’. 

In any case, not all physical diseases are caused by pathogens either.

For example, developmental disorders and inherited conditions are also sometimes referred to as diseases, but these are caused by genes rather than germs.

Likewise, cancer is sometimes called a disease, yet, while some cancers are indeed sometimes caused by an infection (for example, cervical cancer is usually caused by HPV, a sexually transmitted virus), many are not. 

What then do all these examples of ‘disease’ have in common, and how, according to Szasz, do so-called mental illnesses differ from conventional, bodily ailments?

For Szasz, the key distinguishing factor is an identified underlying physiological cause for, or at least correlate of, the symptoms observed. Thus, Szasz writes: 

“The traditional medical criterion for distinguishing the genuine from the facsimile – that is, real illness from malingering – was the presence of demonstrable change in bodily structure as revealed by means of clinical examination of the patient, laboratory tests on bodily fluids, or post-mortem study of the cadaver” (Myth of Mental Illness: p27).

Thus, in all cases of what Szasz regards as ‘real’ disease, a real physiological correlate of some sort has been discovered, whether a microbe, a gene or a cancerous growth. 

In contrast, so-called mental illnesses were first identified, and named, purely on the basis of their symptomology, without any understanding of their underlying physiological cause. 

Of course, many diseases, including physical diseases, are, in practice, diagnosed by the symptoms they produce. A GP, for example, will typically diagnose flu without actually observing and identifying the flu virus itself inside the patient under a microscope. 

However, the existence of the virus, and its causal role in producing the symptoms observed, has indeed been demonstrated scientifically in other individuals afflicted with the same or similar symptoms. We therefore recognise the underlying cause of these symptoms (i.e. the virus) independently of the symptoms they produce.

This is not true, however, for mental illnesses. The latter were named, identified and diagnosed long before there was any understanding of their underlying physiological basis. 

Rather than diseases, we might then more accurately call them syndromes, a word deriving from the Greek ‘σύνδρομον’, meaning ‘concurrence’, which is usually employed in medicine to refer simply to a cluster of signs and symptoms that seem to correlate together, whether or not the underlying cause is or is not understood.[3]

Causes and Correlates 

The main problem for Szasz’s position is that our understanding of the underlying physiological causes of psychiatric conditions – neurological, genetic and hormonal – has progressed enormously since he first authored The Myth of Mental Illness, the paper and the book, at the beginning of the 1960s. 

Yet reading ‘Psychiatry: The Science of Lies’, published in 2008, it seems that Szasz’s own position has advanced but little.[4]

Yet psychiatry, and psychology, have come a long way in the intervening half-century. 

Thus, in 1960, American psychiatry was still largely dominated by Freudian psychoanalysis, a pseudoscience roughly on a par with phrenology, of which Szasz is rightly dismissive.[5]

Of particular relevance to Szasz’s thesis, the study of the underlying physiological basis for psychiatric disorders has also progressed massively.  

Every month, in a wide array of scientific journals, studies are published identifying neurological, genetic, hormonal and other physiological correlates for psychiatric conditions. 

In contrast, Szasz, although he never spells this out, seems to subscribe to an implicit Cartesian dualism, whereby human emotions, psychological states and behaviour are a priori assumed, in principle, to be irreducible to mere physiological processes.[6]

Szasz claims in Psychiatry: The Science of Lies that, once an underlying neurological basis for a mental illness has been identified, it ceases to be classified as a mental illness, and is instead classed as a neurological disorder. His paradigmatic example of this is Alzheimer’s disease (p2).[7]

Yet, today, the neurological correlates of many mental illnesses are increasingly understood. 

Nevertheless, despite the progress that has been made in identifying physiological correlates for mental disorders, there remain at least two differences between these correlates (neurological, genetic, hormonal etc.) and the recognised causes of both physical and neurological diseases.

First, in the case of mental illnesses, the neurological, genetic, hormonal and other physiological correlates remain just that, i.e. mere correlates.

Here, I am not merely reiterating the familiar caution that correlation does not imply causation, but also emphasizing that the correlations in question tend to be far from perfect, and do not form the basis for a diagnosis, even in principle. 

In other words, as a rule, few such identified correlates are present in every single person diagnosed with the condition in question. The correlation is established only at the aggregate statistical level. 

Moreover, those persons who present the symptoms of a mental illness but who do not share the physiological correlate that has been shown to be associated with this mental illness are not henceforth identified as not truly suffering from the mental illness in question. 

In other words, not only is diagnosis determined, as a matter of convenience and practicality, by reference to symptoms (as is also often true for many physical illnesses), but mental illnesses remain, in the last instance, defined by the symptoms they produce, not any underlying physiological cause. 

Any physiological correlates for the condition are ultimately incidental and have not caused physicians to alter their basic definition of the condition itself. 

Second, the identified correlates are, again as a general rule, multiple, complex and cumulative in their effects. In other words, there is not one single identified physiological correlate of a given mental illness, but rather multiple identified correlates, often each having small cumulative effects on the probability of a person presenting symptoms.

This second point might be taken as vindicating Szasz’s position that mental illnesses are not really illnesses. 

Thus, recent research on the genetic correlates of mental illnesses, as recently summarized by Robert Plomin in his book Blueprint: How DNA Makes Us Who We Are, has found that the genetic variants that cause psychiatric disorders are the exact same genetic variants which, when present in lesser magnitude, also cause normal, non-pathological variation in personality and temperament. 

This suggests that, at least at the genetic level (and thus presumably at the phenotypic level too), what we call mental illness is just an extreme presentation of what is normal variation in personality and behaviour. 

In other words, so-called mental illness simply represents the extreme tail-end of the normal bell curve distribution in personality attributes. 
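The arithmetic behind this ‘tail-end’ view can be sketched with a toy calculation (the two-standard-deviation cutoff here is an arbitrary illustration of the logic, not an actual clinical criterion): wherever the diagnostic line is drawn on a continuous, normally distributed trait, some small fraction of the population will fall beyond it.

```python
import math

def normal_tail_fraction(z):
    """One-sided tail: fraction of a standard normal distribution lying
    more than z standard deviations above the mean."""
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# An arbitrary illustrative cutoff at 2 standard deviations 'diagnoses'
# roughly the top 2.3% of a trait such as impulsivity or neuroticism,
# even though the underlying variation is perfectly continuous.
print(round(normal_tail_fraction(2.0), 4))  # 0.0228
```

On this view, a diagnosis is simply a line drawn across a bell curve, and where exactly it is drawn is a matter of convention rather than of biology.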

This is most obviously true of the so-called personality disorders. Thus, a person extremely low in empathy, or the factor of personality referred to by psychometricians as agreeableness, might be diagnosed with anti-social personality disorder (or psychopathy). 

However, it is also true of other so-called mental disorders. For example, ADHD (attention deficit hyperactivity disorder) seems to be mere medical jargon for someone who is very impulsive, with a short attention span, and lacking self-discipline (i.e. low in the factor of personality that psychometricians call conscientiousness) – all traits which vary on a spectrum across the whole population.

On the other hand, clinical depression, unlike personality, is a temporary condition from which most people recover. Nevertheless, it is so strongly predicted by the factor of personality known to psychometricians as neuroticism that psychologist Daniel Nettle writes: 

“Neuroticism is not just a risk factor for depression. It is so closely associated with it that it is hard to see them as completely distinct” (Personality: p114).

Yet calling someone ‘ill’ because they are at the extreme of a given facet of personality or temperament is not very helpful. It is roughly equivalent to calling a basketballer ‘ill’ because he is exceptionally tall, a jockey ‘ill’ because he is exceptionally small, or Albert Einstein ‘ill’ because he was exceptionally intelligent.

Mental Illness or Malingering?

While Szasz has therefore correctly identified problems with the conventional disease model of mental health, the model which he proposes in its place is, in my view, even more problematic, and less scientific, than the disease model that he rightly rejects as misleading.

Most unhelpful is the central place given in his theory to the notion of malingering, i.e. the deliberate faking of symptoms by the patient. 

This analysis may be a useful way to understand the nineteenth century outbreak of so-called hysteria, to which Szasz devotes considerable attention, or indeed the modern diagnosis of Munchausen syndrome, which again involves complaining of imagined or exaggerated physical symptoms. 

It may also be a useful way to understand the recently developed diagnosis of chronic fatigue syndrome (CFS, formerly ME), which, like hysteria, involves the patient complaining of physical symptoms for which no physical cause has yet been identified. 

Interestingly from a psychological perspective, all three of these conditions are overwhelmingly diagnosed among women and girls rather than men and boys. 

However, malingering may also be a useful way to understand another psychiatric complaint that was primarily complained of by men, albeit for obvious historical reasons – namely, so-called ‘shell shock’ (now classed as PTSD) among soldiers during World War One.[8]

Here, unlike with hysteria and CFS, the patient’s motive and rationale for faking the symptoms in question (if this is indeed what they were doing) is altogether more rational and comprehensible – namely, to avoid the horrors of trench warfare (from which women were, of course, exempt). 

However, this model of ‘malingering’ is clearly much less readily applicable to sufferers of, say, schizophrenia. 

Here, far from malingering or faking illness, those afflicted will often vehemently protest that they are not ill and that there is nothing wrong with them. However, their delusions are often such that, by any ordinary criteria, they are undoubtedly, in the colloquial if not the strict medical sense, completely fucking bonkers. 

The model of malingering can, therefore, only be taken so far. 

Defining Mental Illness? 

The fundamental fallacy at the heart of psychiatry is, according to Szasz, the mistaking of moral problems for medical ones. Thus, he opines: 

“Psychiatrists cannot expect to solve moral problems by medical methods” (Myth of Mental Illness: p24). 

Szasz has a point. Despite employing the language of science, there is undoubtedly a moral dimension to defining what constitutes mental illness. 

Whether a given cluster of associated behaviours represents just a cluster of associated behaviours or a mental illness is not determined on the basis of objective scientific criteria. 

Rather, most American psychiatrists simply regard as a mental illness whatever the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association classifies as a mental disorder. 

This manual is treated as gospel by psychiatrists, yet there are no systematic or agreed criteria for inclusion within this supposedly authoritative work. 

Popular cliché has it that mental illnesses are caused by a ‘chemical imbalance’ in the brain.  

Certainly, if we are materialists, we must accept that it is ultimately the chemical composition of the brain that causes behaviour, pathological or otherwise. 

But on what criteria are we to say that a certain chemical composition of the brain is an ‘imbalance’ and another is ‘balanced’, one behaviour ‘pathological’ and one ‘normal’? 

The criterion on which we make this judgement is, as I see it, primarily a moral one.[9]

More specifically, mental illnesses are defined as such, at least in part, because the behavioral symptoms that they produce tend to cause suffering or distress either to the person defined as suffering from the illness, or to those around them. 

Thus, a person diagnosed with depression is themselves the victim of suffering or distress resulting from the condition; a person diagnosed with psychopathy, on the other hand, is likely to cause psychological distress to those around them with whom they come into contact. 

This is a moral, not a scientific, criterion, depending as it does on the notion of suffering or harm. 

Indeed, it is not only a moral question, but it is also one that has, in recent years, been heavily politicized. 

Thus, gay rights activists actively and aggressively campaigned for many years to have homosexuality withdrawn from the DSM and reclassified as non-pathological, and, in 1974, they were successful.[10]

This campaign may have had laudable motives, namely to reduce the stigma associated with homosexuality and prejudice against homosexuals. Yet it clearly had nothing to do with science and everything to do with politics and morality. 

Indeed, homosexuality satisfies many criteria for illness.[11]

First, it is, despite some ingenious and some not-so-ingenious attempts to show otherwise, obviously biologically maladaptive. 

Whereas the politically correct view is that homosexuality is an entirely natural, normal and non-pathological variation of human sexuality, from a Darwinian perspective this view is obviously untenable. 

Homosexual sex cannot produce offspring. Homosexuality therefore involves a maladaptive misdirection of mating effort, which would surely be strongly selected against by natural selection.[12]

Homosexuality is therefore best viewed as a malfunctioning of normal sexuality, just as cancer is a kind of malfunctioning of cell growth and division. In this sense, then, homosexuality is indeed best viewed as something akin to an illness. 

Second, homosexuality shows some degree of comorbidity with other forms of mental illness, such as depression.[13]

Finally, homosexuality is associated with other undesirable life outcomes, such as reduced longevity and, at least for male homosexuals, a greater lifetime susceptibility to various STDs.[14]

Yet, just as homosexuals successfully campaigned for the removal of homosexuality from the DSM, so ‘trans rights’ campaigners are currently embarking on a similar campaign in respect of gender dysphoria. 

The politically correct consensus today holds that an adult or child who claims to identify as the opposite ‘gender’ to their biological sex should be encouraged and supported in their ‘transition’, and provided with hormone therapy, hormone blockers and sex reassignment surgery, as requested. 

This is roughly equivalent to responding to a person who is mentally ill and thinks he is Napoleon, not by telling him that he is not Napoleon, but by providing him with legions of troops with which to invade Prussia. 

Moving beyond the sphere of sexuality, some self-styled ‘neurodiversity’ activists have sought to reclassify autism as a normal variation of mental functioning, a claim that may appear superficially plausible in respect of certain forms of so-called ‘high functioning autism’, but is clearly untenable in respect of ‘low functioning autism’.[15]

Yet, on the other hand, there is oddly no similar, high-profile campaign to reclassify, say, anti-social personality disorder (ASPD) or psychopathy as a normal, non-pathological variant of human psychology. 

Indeed, psychopathy may be biologically adaptive, at least under some conditions (Mealey 1995). 

However, no one proposes treating psychopathy as a normal or natural variation in personality, even though it is likely just that. 

The reason that there is no campaign to remove psychopathy from the DSM is, of course, because, unlike homosexuals, transsexuals and autistic people, psychopaths are hugely disproportionately likely to cause harm to innocent non-consenting third parties. 

This is indeed a good reason to treat psychopathy and anti-social personality disorder as a problem for society at large. However, this is a moral not a scientific reason for regarding it as problematic. 

To return to the question of disorders of sexuality, another useful point of comparison is provided by paedophilia. 

From a purely biological perspective, paedophilia is analogous to homosexuality. Both are biologically maladaptive because they involve sexual attraction to a partner with whom reproduction is, for biological reasons, impossible.[16]

Yet, unlike in the case of homosexuality, there has been no mainstream political push for paedophilia to be reclassified as non-pathological or removed from the Diagnostic and Statistical Manual of Mental Disorders of the APA.[17]

The reason for this is again, of course, obvious and entirely reasonable, yet it equally obviously has nothing to do with science and everything to do with morality – namely, whereas homosexual behaviour between consenting adults is largely harmless, the same cannot be said for child sexual abuse.[18]

Perhaps an even better analogy would be between homosexuality and, say, necrophilia. 

Necrophilic sexual activity, like homosexual sexual activity, but quite unlike paedophilic sexual activity, represents something of a victimless crime. A corpse, being dead, cannot suffer by virtue of being violated.[19]

Yet no one would argue that necrophilia is a healthy and natural variation on normal human sexuality. 

Of course, although numbers are hard to come by due to the attendant stigma, necrophilia is presumably much less common, and hence much less ‘normal’, than is homosexuality. However, if this is a legitimate reason for regarding homosexuality as more ‘normal’ than necrophilia, then it is also a legitimate reason for regarding homosexuality itself as ‘abnormal’, because homosexuality is, of course, much less common than heterosexuality.

Necrophile rights is, therefore, the reductio ad absurdum of gay rights.[20]

Medicine or Morality? 

The encroachment of medicine upon morality continues apace, as part of what Szasz calls the medicalization of everyday life. Thus, there is seemingly no moral failing or character defect that is not capable of being redefined as a mental disorder. 

Selfish people are now psychopaths; people lacking in willpower and with short attention spans now have ADHD. 

But if these are simply variations of personality, does it make much sense to call them diseases? 

Yet the distinction between ‘mad’ and ‘bad’ also has practical application in the operation of the criminal justice system. 

The assumption is that mentally ill offenders should not be punished for their wrongdoing, but rather treated for their illness, because they are not responsible for their actions. 

But, if we accept a materialist conception of mind, then all behaviour must have a basis in the brain. On what basis, then, do we determine that one person is mentally ill while another is in control of his faculties?

As Robert Wright observes: 

“[Since] in both British and American courts, women have used premenstrual syndrome to partly insulate themselves from criminal responsibility… can a ‘high-testosterone’ defense of male murderers be far behind?… If defense lawyers get their way and we persist in removing biochemically mediated actions from the realm of free will, then within decades [as science progresses] the realm will be infinitesimal” (The Moral Animal: p352-3).[21]

Yet a man claiming that, say, high testosterone caused his criminal behaviour is unlikely to be let off on this account, because, if high testosterone does indeed cause crime, then we have good reason to lock up high testosterone men precisely because they are likely to commit crimes.[22]

Szasz wants to resurrect the concept of free will and hold everyone, even those with mental illnesses, responsible for their actions. 

My view is the opposite: No one has free will. All behaviour, normal or pathological, is determined by the physical composition of the brain, which is, in turn, determined by some combination of heredity and environment. 

Indeed, determinism is not so much a finding of science as its basic underlying assumption and premise.[23]

Science rests on the assumption that all events have causes and that, by understanding those causes, we can predict behaviour. If this were not true, then there would be no point in doing science, and science would not be able to make any successful predictions. 

In short, criminal punishment must be based on consequentialist utilitarian considerations such as deterrence, incapacitation and rehabilitation rather than such unscientific moralistic notions as free will, just deserts and blame.[24]

A Moral Component to All Medicine? 

Szasz is right, then, to claim that there is a moral dimension to psychiatric diagnoses. 

This is why psychopathy is still regarded as a mental disorder even though it is likely an adaptive behavioural strategy and life history in certain circumstances (Mealey 1995). 

It is also why homosexuality is no longer regarded as a mental illness, despite its obviously biologically maladaptive consequences, yet there is no similar campaign to remove paedophilia from the DSM. 

Yet what Szasz fails to recognise is that there is also a moral element to the identification and diagnosis of physical illnesses. 

Thus, physical illnesses, like psychiatric illnesses, are called illnesses, at least in part, because they cause pain, suffering and impairment in normal functioning to the person diagnosed as suffering from the illness. 

If, on the other hand, an infection did not produce any unpleasant symptoms, then the patient would surely never bother to seek medical treatment and thus the infection would probably never come to the attention of the medical profession in the first place. 

If it did come to their attention, would they still call it a disease? Would they expend time and resources attempting to ‘cure’ it? Hopefully not, as to do so would be a waste of both. 

Extending this thought experiment, what if the infection in question not only caused no negative symptoms, but actually had positive effects on the person infected?

What if the infection in question caused people to be fitter, smarter, happier, kinder and more successful at their jobs? 

Would doctors still call the infection a ‘disease’, and the microscopic organism underlying it a ‘germ’? 

Actually, this hypothetical thought experiment may not be entirely hypothetical. 

After all, there are indeed surely many microorganisms that infect humans which have few or negligible effects, positive or negative, and with which neither patients nor doctors are especially concerned. 

On the other hand, some infections may be positively beneficial to their hosts. 

Take, for example, gastrointestinal microbiota (also known as gut microbiota). 

These are microorganisms that inhabit our digestive tracts, and those of other organisms, and are thought to have a beneficial effect on the health and functioning of the host organism. They have even been marketed as ‘probiotics’ and ‘good bacteria’ in the advertising campaigns for certain yoghurt-like drinks. 

Another less obvious example is perhaps provided by mitochondrial DNA. 

In our ancient evolutionary history, this began as the DNA of a separate organism, a bacterium, that infected host organisms, but it ultimately formed a symbiotic and mutualistic relationship with its hosts, and now plays a key role in the functioning of those organisms whose distant ancestors it first infected. 

In short, all medicine has a moral dimension.  

This is because medicine is an applied, not a pure, science. 

In other words, medicine aims not merely to understand disease in the abstract, but to treat it. 

We treat diseases to minimize human suffering, and the minimization of human suffering is ultimately a moral (or perhaps economic, since doctors are paid, and provide a service to their patients), rather than a purely scientific, endeavour. 

Endnotes

[1] Although this post is a review of Thomas Szasz’s Psychiatry: The Science of Lies, readers may note that many of the quotations from Szasz in the review are actually taken from his earlier, more famous book, The Myth of Mental Illness, published several decades previously. By way of explanation, while this essay is a review of Szasz’s Psychiatry: The Science of Lies, I listened to an audiobook version of this book, and do not have access to a print copy. It was therefore difficult to find source quotes from this book. In contrast, I own a copy of The Myth of Mental Illness, but have yet to read it in full. I thought it more useful to review a more recent statement of Szasz’s views, so as to find out how he has dealt with recent findings in biological psychiatry and behavioural genetics. Unfortunately, as I discuss above, it seems that Szasz has reacted to recent findings in biological psychiatry and behavioural genetics hardly at all, and includes few if any references to such developments in his more recent book.

[2] Thus, proponents of Darwinian medicine contend that many infections produce symptoms such as coughing, sneezing and diarrhea precisely because these symptoms facilitate the spread of the disease through contact with the bodily fluids expelled, hence promoting the pathogens’ own Darwinian fitness or reproductive success.

[3] For example, the underlying physical cause of chronic fatigue syndrome (CFS) is not fully understood. On the other hand, the underlying cause of acquired immunodeficiency syndrome (AIDS) is now understood, namely HIV infection, but, presumably because it involves increased susceptibility to many different infections, it is still referred to as a syndrome rather than a disease in and of itself.

[4] Indeed, according to Szasz himself, in an autobiographical interlude in ‘Psychiatry: The Science of Lies’, he had arrived at his opinion regarding the scientific status of psychiatry even earlier, when first making the decision to train to become a psychiatrist. Indeed, he claims to have made the decision to study psychiatry and qualify as a psychiatrist precisely in order to attack the field from within, with the authority which this professional qualification would confer upon him. This, it hardly needs to be said, is a very odd reason for a career choice.

[5] Attacking modern psychiatry by a critique of Freud is a bit like attacking neuroscience by critiquing nineteenth century phrenology. It involves constructing a straw man version of modern psychiatry. I am reminded in particular of Arthur Jensen’s review of infamous charlatan Stephen Jay Gould’s discredited The Mismeasure of Man, which Jensen titled The debunking of scientific fossils and straw persons, where he described Gould’s method of trying to discredit the modern science of IQ testing and intelligence research by citing the errors of nineteenth-century craniologists as roughly akin to “trying to condemn the modern automobile by merely pointing out the faults of the Model T”.

[6] In The Myth of Mental Illness, Szasz, writes: 

“There remains a wide circle of physicians and allied scientists whose basic position concerning the problem of mental illness is essentially that expressed in Carl Wernicke’s famous dictum: ‘Mental diseases are brain diseases’. Because, in one sense, this is true of such conditions as paresis and the psychoses associated with systemic intoxications, it is argued that it is also true for all other things called mental diseases. It follows that it is only a matter of time until the correct physicochemical, including genetic, bases or ‘causes’ of these disorders will be discovered. It is conceivable, of course, that significant physicochemical disturbances will be found in some ‘mental patients’ and in some ‘conditions’ now labeled ‘mental illnesses’. But this does not mean that all so-called mental diseases have biological ‘causes’, for the simple reason that it has become customary to use the term ‘mental illness’ to stigmatize, and thus control, those persons whose behavior offends society—or the psychiatrist making the ‘diagnosis’” (The Myth of Mental Illness: p103). 

Yet, if we accept a materialist conception of mind, then all behaviours, including those diagnostic of mental illness, must have a cause in the brain, though it is true that the same behaviours may result from quite different neuroanatomical causes.
It is certainly true that the concept of mental illness has been used to “stigmatize, and thus control, those persons whose behavior offends society”. So-called drapetomania provides an obvious example, albeit one that was never widely recognised by physicians, at least outside the American South. Another example would be the diagnosis of sluggish schizophrenia used to institutionalize political dissidents in the Soviet Union. Likewise, psychopathy (aka sociopathy or anti-social personality disorder) may, as I argue later in this post, have been classified as a mental disorder primarily because the behaviour of people diagnosed with this condition does indeed “offend society” and arguably demand the “control”, and sometimes detention, of such people.
However, this does not mean that the behaviours complained of (e.g. political dissidence, or anti-social behaviours) will not have neural or other physiological correlates. On the contrary, they undoubtedly do, and psychologists have also investigated the neural and other physiological correlates of all behaviours, not just those labelled as pathological and as ‘mental illnesses’.
However, Szasz does not quite go so far as to deny that behaviours have physical causes. On the contrary, in The Myth of Mental Illness, hedging his bets against future scientific advances, Szasz acknowledges:

“I do not contend that human relations, or mental events, take place in a neurophysiological vacuum. It is more than likely that if a person, say an Englishman, decides to study French, certain chemical (or other) changes will occur in his brain as he learns the language. Nevertheless, I think it would be a mistake to infer from this assumption that the most significant or useful statements about this learning process must be expressed in the language of physics. This, however, is exactly what the organicist claims” (The Myth of Mental Illness: p102-3). 

Here, Szasz makes a good point – but only up to a point. Whether we are what Szasz calls ‘organicists’ or not, I’m sure we can all agree that, for most purposes, it is not useful to explain the decision to learn French in terms of neurophysiology. To do so would be an example of what philosopher Daniel Dennett, in Darwin’s Dangerous Idea, calls ‘greedy reductionism’, which he distinguished from ‘good reductionism’, which is central to science.
However, it is not clear that the same is true of what we call mental illnesses. Often it may indeed be useful to understand mental illnesses in terms of their underlying physiological causes, including for therapeutic reasons, since understanding the physiological basis for behaviour that we deem undesirable may provide a means of changing these behaviours by altering the physical composition of the brain. For example, if the neurotransmitter serotonin is involved in regulating mood, then manipulating levels of serotonin in the brain, or its reabsorption, may be a way of treating depression, anxiety and other mood disorders. Thus, SSRIs and SNRIs, which are thought to do just this, have indeed been found to be effective in treating these conditions.
However, for other purposes, it may be useful to look at a different level of causation. For example, as I discuss in a later endnote, although it may be scientifically a nonsense, it may nevertheless be useful to inculcate a belief in free will among some psychiatric patients, since it may encourage them to overcome their problems rather than adopting the fatalistic view that they are ill and there is hence nothing they can do to improve their predicament. Szasz sometimes seems to be arguing for something along these lines.

[7] In The Myth of Mental Illness, as quoted in the preceding endnote, Szasz also gives as examples of behavioural conditions with well-established physiological causes “paresis and the psychoses associated with systemic intoxications” (The Myth of Mental Illness: p103).

[8] I hasten to emphasize in this context, lest I am misunderstood, that I am not saying that Szasz’s model of ‘malingering’ is indeed the appropriate way to understand conditions such as hysteria, Munchausen syndrome, chronic fatigue syndrome or shell shock – only that a reasonable case can be made to this effect. Personally, I do not regard myself as having sufficient expertise on the topic to be willing to venture an opinion either way.

[9] Of course, we could determine whether a certain composition and structure of the brain is ‘balanced’ or ‘imbalanced’ on non-moralistic, Darwinian criteria. In other words, if a certain composition/structure and the behaviour it produces is adaptive (i.e. contributes to the reproductive success or fitness of the organism) then we could call it ‘balanced’; if, on the other hand, it produces maladaptive behaviour we could call it ‘imbalanced’. However, this would produce a quite different inventory and classification of mental illnesses than that provided by the DSM of the APA and other similar publications, since, as we will see, homosexuality, being obviously biologically maladaptive, would presumably be classified as an ‘imbalance’ and hence a mental illness, whereas psychopathy, since it may well, under certain conditions, be adaptive, would be classed as non-pathological and hence ‘balanced’. This analysis, however, has little to do with mental illness as the concept is currently conceived.

[10] Oddly, Szasz himself is sometimes lauded by some politically correct-types as being among the first psychiatrists to deny that homosexuality was a mental illness. Yet, since he also denied that schizophrenia was a mental illness, and indeed rejected the whole concept of ‘mental illness’ as it is currently conceived, this is not necessarily as ‘progressive’ and ‘enlightened’ a view as it is sometimes credited as having been.

[11] Here, a few caveats are in order. Describing homosexuality as a mental illness no more indicates hatred towards homosexuals than describing schizophrenia as a mental illness indicates hatred towards people suffering from schizophrenia, or describing cancer as an illness indicates hatred towards people afflicted with cancer. In fact, regarding a person as suffering from an illness is generally more likely to elicit sympathy for the person so described than it is hatred.
Of course, being diagnosed with a disease may involve some stigma. But this is not the same as hatred.
Moreover, as should be clear from my conclusion, I am not, in fact, arguing that homosexuality should indeed be classified as a mental illness. Rather, I am simply pointing out that it is difficult to frame a useful definition of what constitutes a ‘mental disorder’ unless that definition includes moral criteria, which are necessarily extra-scientific. However, in the final section of this piece, I argue that there is indeed a moral component to all medicine, psychiatry included.
Of course, as I also discuss above, there are indeed some moral reasons for regarding homosexuality as undesirable, for example its association with reduced longevity, which is generally regarded as an undesirable outcome. However, whether homosexuality should indeed be classed as a ‘mental disorder’ strikes me as debatable and also dependent on the exact definition of ‘mental disorder’ adopted.

[12] If homosexuality is therefore maladaptive, this, of course, raises the question as to why homosexuality has not indeed been eliminated by natural selection. The first point to make here is that homosexuality is in fact quite rare. Although Kinsey famously originated the since-popularized claim that as many as 10% of the population are homosexual, reputable estimates using representative samples generally suggest less than 5% of the population identifies as exclusively or preferentially homosexual (though a larger proportion of people may have had homosexual experiences at some time, and the ‘closet factor’ makes it possible to argue that, even in an age of unprecedented tolerance and indeed celebration of homosexuality, and even in anonymous surveys, this may represent an underestimate due to underreporting).
Admittedly, there has recently been a massive increase in the numbers of teenage girls identifying as non-heterosexual, with numbers among this age group now slightly exceeding 10%. However, I suspect that this is more a matter of fashion than of sexuality. Thus, it is notable that the largest increase has been for identification as ‘bisexual’, which provides a convenient cover by which teenage girls can identify with the so-called ‘LGBT+ community’ while still pursuing normal, healthy relationships with opposite-sex boys or men. The vast majority of these girls will, I suspect, grow up to have sexual and romantic relationships primarily with members of the opposite sex.
Yet even these low figures are perhaps higher than one might expect, given that homosexuality would be strongly selected against by evolution. (However, it is important to remember that, when homosexuals were persecuted and hence mostly remained in the ‘closet’, homosexuality would have been less selected against, precisely because so many gay men and women would have married members of the opposite sex and reproduced if only to evade accusations of homosexuality. With greater tolerance, however, they no longer have any need to do so. The liberation of homosexuals may therefore, paradoxically, lead to their gradual disappearance through selection.)
A second point to emphasize is that, contrary to popular perception, homosexuality is not especially heritable. Indeed, it is rather less heritable than other behavioural traits about whose heritability it is much less politically correct to speculate (e.g. criminality, intelligence).
If homosexuality is primarily caused by environmental factors, not genetics, then it would be more difficult for natural selection to weed it out. However, given that exclusive or preferential homosexuality would be strongly selected against by natural selection, humans should have evolved to be resistant to developing exclusive or preferential homosexuality under all environmental conditions that were encountered during evolutionary history. It is possible, however, that environmental novelties atypical of the environments in which our psychological adaptations evolved are responsible for causing homosexuality.
For what it’s worth, my own favourite theory (although not necessarily the best supported theory) for the evolution of male homosexuality proposes that genes located on the X chromosome predispose a person to be sexually attracted to males. This attraction is adaptive for females, but maladaptive for males. However, since females have two X chromosomes and males only one, any X chromosome genes will find themselves in females twice as often as they find themselves in males. Therefore, any increase in fitness for females bearing these X chromosome genes only has to be half as great as the reproductive cost to males for the genes in question to be positively selected for.
This is sometimes called the ‘balancing selection theory of male homosexuality’. However, perhaps more descriptive and memorable is Satoshi Kanazawa’s coinage, ‘the horny sister hypothesis’.
This theory also has some support, in that there is some evidence the female relatives of male homosexuals have a greater number of offspring than average and also that gay men report having more gay uncles on their mother’s than their father’s side, consistent with an X chromosome-linked trait (Hamer et al 1993; Camperio-Ciani et al 2004). Some genes on the X chromosome have also been linked to homosexuality (Hamer et al 1993; Hamer 1999).
On the other hand, other studies find no support for the hypothesis. For example, Bailey et al (1999) found that rates of reported homosexuality were no higher among maternal than among paternal male relatives, as did McKnight & Malcolm (1996). At any rate, as explained by Wilson and Rahman in their excellent book Born Gay: The Psychobiology of Sexual Orientation:

“Increased rates of gay maternal relatives might also appear because of decreased rates of reproduction among gay men. A gay gene is unlikely to be inherited from a gay father because a gay man is unlikely to have children” (Born Gay: p51; see also Risch et al 1993).

[13] Gay rights activists assert that the only reason that homosexuality is associated with other forms of mental illness is because of the stigma to which homosexuals are subject on account of their sexuality. This has sometimes been termed the ‘social stress hypothesis’, ‘social stress model’ or ‘minority stress model’. There is indeed statistical support for the theory that social stigma is associated with higher rates of depression and other mental illnesses.
It is also notable that, while homosexuality is indeed consistently associated with higher levels of depression and suicide, conditions that can obviously be viewed as a direct response to social stigma, I am not aware of any evidence suggesting higher rates of, say, schizophrenia among homosexuals, which would less obviously, or at least less directly, result from social stress. However, I tend to agree with the conclusions of Mayer and McHugh, in their excellent review of the literature on this subject, that, while social stress may indeed explain some of the increased rate of mental illness among homosexuals, it is unlikely to account for the totality of it (Mayer & McHugh 2016).

[14] Yet, in describing the life outcomes associated with homosexuality as undesirable, I am, of course, making an extra-scientific value judgement. Of course, the value judgement in question – namely that dying earlier and being disproportionately likely to contract STDs is a bad thing – is not especially controversial. However, it still illustrates the extent to which, as I discuss later in this post, definitions of mental illnesses, and indeed physical illnesses, always include a moral dimension – i.e. diseases are defined, in part, by the fact that they cause suffering, either to the person afflicted, or, in the case of some mental illnesses, to the people in contact with them.

[15] That autism is indeed maladaptive and pathological is also suggested by the well-established correlation between paternal age and autism in offspring, since this has been interpreted as reflecting the build-up of deleterious mutations in the sperm of older males.

[16] Indeed, from a purely biological perspective, homosexuality is arguably even more biologically maladaptive than is paedophilia, since even very young children can, in some exceptional cases, become pregnant and even successfully birth offspring, yet same-sex partners are obviously completely incapable of producing offspring with one another.

[17] Indeed, far from there being any political pressure to remove paedophilia from the DSM of the American Psychiatric Association, as occurred with homosexuality, there is instead increasing pressure to add hebephilia (i.e. attraction to pubescent and early-post-pubescent adolescents) to the DSM. If successful, this would probably lead to pressure to also add ‘ephebophilia’ (i.e. the biologically adaptive and normal male attraction to mid- to late-adolescents) to the DSM, and thereby effectively pathologize, medicalize, and further stigmatize, normal male sexuality.

[18] Of course, homosexual sex does have some dangers, such as STDs. However, the same is also true of heterosexual sex, although, for gay male sex, the risks are vastly elevated. Yet other perceived dangers result only from heterosexual sex (e.g. unwanted pregnancies, marriage). Meanwhile, the other negative life outcomes associated with homosexuality (e.g. elevated risk of depression and suicide) probably result from a homosexual orientation rather than from gay sex as such. Thus, a celibate gay man is, I suspect, just as likely, if not more likely, to suffer depression as is a highly promiscuous gay man.
Yet, while gay sex may be mostly harmless, the same cannot, of course, be said for child sexual abuse. It may indeed be true that the long-term psychological effects of child sexual abuse are exaggerated. This was, of course, the infamous conclusion of the Rind et al meta-analysis, which resulted in much moral panic in the late-1990s (Rind et al 1998). This is especially likely to be the case when the sexual activity in question is consensual and involves post-pubertal, sexually mature (but still legally underage) teenagers. However, in such cases the sexual activity in question should not really be defined as ‘child sexual abuse’ in the first place, since it neither involves immature children in the biological sense, nor is it necessarily abusive. Yet, it must be emphasized, even if child sexual abuse does not cause long-term psychological harm, it may still cause immediate harm, namely the distress experienced by the victim at the time of the abuse.

[19] Of course, one might argue that the relatives of the deceased may suffer as a result of the idea of their dead relatives’ bodies being violated by necrophiles. However, much the same is also true of homosexuality. So-called ‘homophobes’, for example, may dislike the idea of their adult homosexual sons having consensual homosexual sex. Indeed, they may even dislike the idea of unrelated adult strangers being allowed to have consensual homosexual sex. This was indeed presumably the reason why homosexuality was criminalized and prohibited in so many cultures across history in the first place, i.e. because other people were disgusted by the thought of it. However, we no longer regard this sort of puritanical disapproval of other people’s private lives as a sufficient reason to justify the criminalization of homosexual behaviour. Why then should it be a reason for criminalizing necrophilia?

[20] Other similar thought experiments involve the prohibitions on other sexual behaviours such as zoophilia and incest. In both these cases, however, the case is morally more complex, in the case of zoophilia on account of whether the animal participant suffers harm or has consented, and, in the case of incest, because of eugenic considerations, namely the higher rate of the expression of deleterious mutations among the offspring of incestuous unions.

[21] Indeed, the courts, in both Britain and America, have been all too willing to invent bogus pseudo-psychiatric diagnoses in order to excuse women, in particular, for culpability in their crimes, especially murder. For example, in Britain, the Infanticide Acts of 1922 and 1938 provide a defence against murder for women who kill their helpless new-born infants where “at the time of the act… the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child”. In terms of biology, physiology and psychology, this is, of course, a nonsense, and, of course, no equivalent defence is available for fathers, though, in practice, the treatment of mothers guilty of infanticide is more lenient still (Wilczynski and Morris 1993).
Similarly, in both Britain and America, women guilty of killing their husbands, often while the latter was asleep or otherwise incapacitated, have been able to avoid a murder conviction by claiming to have been suffering from so-called ‘battered woman syndrome’. There is, of course, no equivalent defence for men, despite the consistent finding that men are somewhat more likely to be the victims of violence from their female intimate partners than women are from their male intimate partners (Fiebert 2014). This may partly explain why men who kill their wives receive, on average, sentences three times as long as those of women who kill their husbands (Langan & Dawson 1995).

[22] Of course, another possibility might be some form of hormone therapy to reduce the offender’s testosterone. Also, it must be acknowledged that this discussion is hypothetical. Whether testosterone is indeed correlated with criminal or violent behaviour is actually the subject of some dispute. Thus, Allan Mazur, a leading researcher in this area, argues that testosterone is not associated with aggression or violence as such, but rather only with dominance behaviours, which can also be manifested in non-violent ways. For example, a high-powered business tycoon is likely to be high in social dominance behaviours, but relatively unlikely to commit violent crimes. On the other hand, a prisoner, being of low status, may be able to exercise dominance only through violence. I am therefore giving the example of high testosterone only as a simplified hypothetical thought experiment.

[23] Of course, one finding of science, namely quantum indeterminism, complicates this assumption. Ironically, while determinism is the underlying premise of all scientific enquiry, nevertheless one finding of such enquiry is that, at the most fundamental level, determinism does not hold.

[24] Nevertheless, I am persuaded that there may be some value in the concept of free will, after all. Although it is a nonsense, it may, like some forms of religious belief, nevertheless be a useful nonsense, at least in some circumstances.
Thus, if a person is told that there is no free will, and that their behaviours are inevitable, this may encourage a certain fatalism and the belief that people cannot change their behaviours for the better. In fact, this is a fallacy. Actually, determinism does not suggest that people cannot change their behaviours. It merely concludes that whether people do indeed change their behaviours is itself determined. However, this philosophical distinction may be beyond many people’s understanding.
Thus, if people are led to believe that they cannot alter their own behaviour, then this may become something of a self-fulfilling prophecy, and thereby prevent self-improvement.
Therefore, just as religious beliefs may be untrue, but nevertheless serve a useful function in giving people a reason to live and to behave prosocially and for the benefit of society as a whole, so it may be beneficial to inculcate and encourage a belief in free will in order to encourage self-improvement, including among the mentally ill.

References

Bailey et al (1999) A Family History Study of Male Sexual Orientation Using Three Independent Samples, Behavior Genetics 29(2): 79–86. 
Camperio-Ciani et al (2004) Evidence for maternally inherited factors favouring male homosexuality and promoting female fecundity, Proceedings of the Royal Society B: Biological Sciences 271(1554): 2217–2221.
Fiebert (2014) References Examining Assaults by Women on Their Spouses or Male Partners: An Updated Annotated Bibliography, Sexuality & Culture 18(2):405-467. 
Hamer et al (1993) A linkage between DNA markers on the X chromosome and male sexual orientation, Science 261(5119): 321-7.
Hamer (1999) Genetics and Male Sexual Orientation, Science 285(5429): 803.
Langan & Dawson (1995) Spouse Murder Defendants in Large Urban Counties, U.S. Department of Justice Office of Justice Programs, Bureau of Justice Statistics: Executive Summary (NCJ-156831), September 1995. 
Mayer & McHugh (2016) Sexuality and Gender Findings from the Biological, Psychological, and Social Sciences, New Atlantis 50: Fall 2016. 
McKnight & Malcolm (2000) Is male homosexuality maternally linked? Evolution and Gender 2(3):229-252. 
Mealey (1995) The sociobiology of sociopathy: An integrated evolutionary model. Behavioral and Brain Sciences, 18(3): 523–599.
Rind et al (1998) A Meta-Analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples, Psychological Bulletin 124(1): 22–53.
Risch et al (1993) Male Sexual Orientation and Genetic Evidence, Science 262(5142): 2063-2065. 
Szasz (1960) The Myth of Mental Illness, American Psychologist 15: 113-118.
Wilczynski & Morris (1993) Parents Who Kill Their Children, Criminal Law Review: 31-6.

Hitler, Hicks, Nietzsche and Nazism

Nietzsche and the Nazis: A Personal View by Stephen Hicks (Ockham’s Razor Publishing 2010) 

Scholarly (and not so scholarly) interpretations of Nietzsche always remind me somewhat of biblical interpretation.

In both cases, the interpretations always seem to say at least as much about the philosophy, worldview and politics of the person doing the interpretation as they do about the content of the work ostensibly being interpreted. 

Thus, just as Christians can, depending on preference, choose between, say, Exodus 21:23–25 (an eye for an eye) or Matthew 5:39 (turn the other cheek), so authors of diametrically opposed political and philosophical worldviews can almost always claim to find something in Nietzsche’s corpus of writing to support their own perspective. 

Thus, in HL Mencken’s The Philosophy of Friedrich Nietzsche, Nietzsche appears as an aristocratic elitist, opposed to Christianity, Christian ethics, egalitarianism and ‘herd morality’, but also as a scientific materialist—much like, well, HL Mencken himself.

Yet, among leftist postmodernists, Nietzsche’s moral philosophy is largely ignored, and he is cited instead as an opponent of scientific materialism who rejects the very concept of objective truth, including scientific truth—in short, a philosophical precursor to postmodernism.

Similarly, just as German National Socialists selectively quoted passages from Nietzsche that appear highly critical of Jews, so modern Nietzschean apologists cite passages that profess great admiration for Jewish people, and other passages undoubtedly highly critical of both Germans and anti-Semites.

There are indeed passages in Nietzsche’s work that, at least when quoted in isolation, can be interpreted as supporting any of these often mutually contradictory notions. 

In his book Nietzsche and the Nazis, professor of philosophy Stephen Hicks discusses the association between the thought of Friedrich Nietzsche and the most controversial of the many twentieth century movements to claim Nietzsche as their philosophical precursor, namely the National Socialist movement and regime in early- to mid-twentieth century Germany. 

Since he is a professor of philosophy rather than a historian, it is perhaps unsurprising that Hicks demonstrates a rather better understanding of the philosophy of Nietzsche than he does of the ideology of Hitler and the German National Socialist movement. 

Thus, if the Nazis stand accused of misinterpreting, misappropriating or misrepresenting the philosophy of Nietzsche, Hicks can claim to have outdone even them—for he has managed to misrepresent, not only the philosophy of Nietzsche, but also that of the Nazis as well. 

Philosophy as a Driving Force in History 

Hicks begins his book by making a powerful case for the importance of philosophy as a force in history and as a factor in the rise of German National Socialism in particular. 

Thus, he argues: 

“The primary cause of Nazism lies in philosophy… The legacy of World War I, persistent economic troubles, modern communication technologies, and the personal psychologies of the Nazi leadership did play a role. But the most significant factor was the power of a set of abstract, philosophical ideas. National Socialism was a philosophy-intensive movement” (p10-1).

This claim—namely, that “National Socialism was a philosophy-intensive movement”—may seem an odd one, especially since German National Socialism is usually regarded as a profoundly anti-intellectual movement. 

Moreover, to achieve any degree of success and longevity, all political movements, and political regimes, must inevitably make ideological compromises in the face of practical necessity, such that their actual policies are shaped at least as much by pragmatic considerations of circumstance, opportunity and realpolitik as by pure ideological dictate.[1]

Yet, up to a point, Hicks is right. 

Indeed, Hitler even saw himself as, in some ways, a philosopher in his own right. Thus, historian Ian Kershaw, in his celebrated biography of the German Führer, Hitler, 1889-1936: Hubris, observes: 

“In Mein Kampf, Hitler pictured himself as a rare genius who combined the qualities of the ‘programmatist’ and the ‘politician’. The ‘programmatist’ of a movement was the theoretician who did not concern himself with practical realities, but with ‘eternal truth’, as the great religious leaders had done. The ‘greatness’ of the ‘politician’ lay in the successful practical implementation of the ‘idea’ advanced by the ‘programmatist’. ‘Over long periods of humanity,’ he wrote, ‘it can once happen that the politician is wedded to the programmatist.’ His work did not concern short-term demands that any petty bourgeois could grasp, but looked to the future, with ‘aims which only the fewest grasp’… Seldom was it the case, in his view, that ‘a great theoretician’ was also ‘a great leader’… He concluded: ‘the combination of theoretician, organizer, and leader in one person is the rarest thing that can be found on this earth; this combination makes the great man.’ Unmistakably, Hitler meant himself” (Hitler, 1889-1936: Hubris: p251–2). 

Moreover, philosophical ideas have undoubtedly had a major impact on history in other times and places. 

For example, the French Revolution, American Revolution and Bolshevik Revolution may have been triggered and made possible by the social and economic conditions then prevailing – but the regimes established in their aftermath were, at least in theory, based on the ideas of philosophers and political theorists.  

Thus, if the French Revolution was modelled on the ideas of thinkers such as Locke, Rousseau and Voltaire, the American Revolution on those of Locke, Montesquieu, Benjamin Franklin, Thomas Jefferson and Thomas Paine, and the Bolshevik Revolution on those of Marx, Lenin and Trotsky, among others, who then were the key thinkers, if any, behind the National Socialist movement in Germany? 

Hicks, for his part, tentatively ventures several leading candidates: 

“Georg Hegel, Johann Fichte, even elements from Karl Marx” (p49).[2]

In an earlier chapter, as part of his attempt to argue against the notion that German National Socialism had no intellectual credibility, he also mentions several contemporaneous thinkers who, he claims, “supported the Nazis long before they came to power” and who could perhaps themselves be considered intellectual forerunners of National Socialism, including Oswald Spengler, Martin Heidegger, and legal theorist Carl Schmitt (p9).[3]

Besides Hitler himself, and Rosenberg, each of whom considered himself a philosophical thinker in his own right, other candidates who might merit honourable (or perhaps dishonourable) mention in this context include Hitler’s own early mentor Dietrich Eckart, racial theorists Arthur de Gobineau and Houston Stewart Chamberlain, the American Madison Grant, biologist Ernst Haeckel, geopolitical theorist Karl Haushofer, and, of course, the composer Richard Wagner – though most of these are not, of course, philosophers in the narrow sense.

Yet, at least according to Hicks, the best known and most controversial name atop any such list is almost inevitably going to be Friedrich Nietzsche (p49). 

Nietzsche’s Philosophy 

Although the association of Nietzsche with the Nazis continues to loom large in the popular imagination, mainstream Nietzsche scholarship in the years since World War II, especially the work of the influential Jewish philosopher and poet Walter Kaufmann, has done much to rehabilitate the reputation of Nietzsche, sanitize his philosophy and absolve him of any association with, let alone responsibility for, Fascism or National Socialism. 

Hicks’s own treatment is rather more balanced. 

Before directly comparing and contrasting the various commonalities and differences between Nietzsche’s philosophy and that of the National Socialist movement and regime, Hicks devotes one chapter to discussing the political philosophy and ideology of the Nazis, another to discussing their policies once in power, and a third to discussion of Nietzsche’s own philosophy, especially his views on morality and religion. 

As I have already mentioned, although Nietzsche’s philosophy is the subject of many divergent interpretations, Hicks, in my view, mostly gets it right. There are, however, a few problems.

Some are relatively trivial, perhaps even purely semantic. For example, Hicks equates Nietzsche’s Übermensch with Zarathustra himself, writing:

“Nietzsche gives a name to his anticipated overman: He calls him Zarathustra, and he names his greatest literary and philosophical work in his honor” (p74).

Actually, as I understood Nietzsche’s Thus Spake Zarathustra (which is to say, not very much at all, since it is a notoriously incomprehensible work, and, in my view, far from Nietzsche’s “greatest literary and philosophical work”, as Hicks describes it), Nietzsche envisaged his fictional Zarathustra, not as himself the Übermensch, but rather as its herald and prophet.

Indeed, to my recollection, not only does Zarathustra never himself claim to embody the Übermensch, but he also repeatedly asserts that the most that contemporary man, Zarathustra himself presumably included, can even aspire to be is a ‘bridge’ to the Übermensch, rather than the Übermensch himself.

A perhaps more substantial problem relates to Hicks’s understanding of Nietzsche’s contrasting ‘master’ and ‘slave’ moralities. Hicks associates the former with various traits, including:  

“Pride, Self-esteem; Wealth; Ambition, boldness; Vengeance; Justice… Pleasure, Sensuality… Indulgence” (p60). 

Most of these associations are indeed unproblematically associated with Nietzsche’s ‘master morality’, but a few require further elaboration. 

For example, it may be true that Nietzsche’s ‘master morality’ is associated with the idea of “vengeance” as a virtue. However, associating the related, but distinct concept of “justice” exclusively with Nietzsche’s ‘master morality’ as Hicks does (p60; p62) strikes me as altogether more questionable. 

After all, the ‘slave morality’ of Christianity also concerns itself a great deal with “justice”. It just has a different conception of what constitutes justice, and also sometimes defers the achievement of “justice” to the afterlife, or to the Last Judgement and coming Kingdom of God (or, in pseudo-secular modern leftist versions, the coming communist utopia). 

Similarly problematic is Hicks’s characterization of Nietzsche’s ‘master morality’ as championing “indulgence”, as well as “pleasure [and] sensuality”, over “self-restraint” (p62; p60). 

This strikes me as, at best, an oversimplification of Nietzsche’s philosophy. 

On the one hand, it is true that Nietzsche disparages and associates with ‘slave morality’ what Hume termed ‘the monkish virtues’, namely ideals of self-denial and asceticism. He sees them as both a sign of weakness and a denial of life itself, writing in Twilight of the Idols: 

“To attack the passions at their roots, means attacking life itself at its source: the method of the Church is hostile to life… The same means, castration and extirpation, are instinctively chosen for waging war against a passion, by those who are too weak of will, too degenerate, to impose some sort of moderation upon it” (Twilight of the Idols: iv:2). 

“The saint in whom God is well pleased, is the ideal eunuch. Life terminates where the ‘Kingdom of God’ begins” (Twilight of the Idols: ii:4). 

Yet it is clear that Nietzsche does not advocate complete surrender to indulgence, pleasure and sensuality either. 

Thus, in the first of the two passages quoted above, he envisages the strong as also imposing “some sort of moderation” without the need for complete abstinence. 

Indeed, in The Antichrist, Nietzsche goes further still, extolling: 

“The most intelligent men, like the strongest [who] find their happiness where others would find only disaster: in the labyrinth, in being hard with themselves and with others, in effort; their delight is in self-mastery; in them asceticism becomes second nature, a necessity, an instinct” (The Antichrist: 57). 

Indeed, advocating complete and unrestrained surrender to indulgence, sensuality and pleasure is an obviously self-defeating philosophy. If someone really completely surrendered himself to indulgence, he would presumably do nothing all day except masturbate, shoot up heroin and eat cake. He would therefore achieve nothing of value. 

Thus, throughout his corpus of writing, Nietzsche repeatedly champions what he calls ‘self-overcoming’, which, though it goes well beyond this, clearly entails self-control. 

In short, to be effectively put into practice, the Nietzschean Will to Power necessarily requires willpower. 

Individualism vs Collectivism (and Authoritarianism) 

Another matter upon which Hicks arguably misreads Nietzsche is the question of the extent to which Nietzsche’s philosophy is to be regarded as individualist or collectivist in ethos and orientation. 

This topic is, Hicks acknowledges, a controversial one upon which Nietzsche scholars disagree. It is, however, a topic of direct relevance to the relationship between Nietzsche’s philosophy and the ideology of the Nazis, since the Nazis themselves were indisputably extremely collectivist in ethos, the collective to which they subordinated all other concerns, including individual rights and wants, being that of the nation, Volk or race. 

Hicks himself concludes that Nietzsche was much more of a collectivist than an individualist: 

“[Although] Nietzsche has a reputation for being an individualist [and] there certainly are individualist elements in Nietzsche’s philosophy… in my judgment his reputation for individualism is often much overstated” (p87). 

Yet, elsewhere, Hicks comes close to contradicting himself, for, among the qualities that he associates with Nietzsche’s ‘master morality’, which Nietzsche himself clearly favours over the ‘slave morality’ of Christianity, are “Independence”, “Autonomy” and indeed “Individualism” (p60; p62). Yet these are all clearly individualist virtues.[4]

In reaching his conclusion that Nietzsche is primarily to be considered a collectivist rather than a true individualist, Hicks distinguishes three separate questions and, in the process, three different forms of individualism, namely: 

  1. “Do individuals shape their own identities—or are their identities created by forces beyond their control?”; 
  2. “Are individuals ends in themselves, with their own lives and purposes to pursue—or do individuals exist for the sake of something beyond themselves to which they are expected to subordinate their interests?”; and 
  3. “Do the decisive events in human life and history occur because individuals, generally exceptional individuals, make them happen—or are the decisive events of history a matter of collective action or larger forces at work?” (p88). 

With regard to the first of these questions, Nietzsche, according to Hicks, denies that men are masters of their own fate. Instead, Hicks contends that Nietzsche believes: 

“Individuals are a product of their biological heritage” (p88). 

This may be correct, and certainly there is much in Nietzsche’s writing to support this conclusion.

Thus, for example, in Twilight of the Idols Nietzsche declares:

“The individual… is nothing in himself, no atom, no ‘link in the chain,’ no mere heritage from the past,—he represents the whole direct line of mankind up to his own life” (Twilight of the Idols: viii: 33).

However, even if human behaviour, and human decisions, are indeed a product of heredity, this does not in fact, strictly speaking, deny that individuals are nevertheless the authors of their own destiny. It merely asserts that the way in which we do indeed shape our own destiny is itself a product of our heredity. 

In other words, our actions and decisions may indeed be predetermined by hereditary factors, but they are still our decisions, simply because we ourselves are a product of these same biological forces. 

However, it is not at all clear that Nietzsche believes that all men determine their own fate. Rather, the great mass of mankind, whom he dismisses as ‘herd animals’, are, for Nietzsche, quite incapable of true individualism of this kind, and it is only men of a superior type who are truly free, membership of this superior caste itself being largely determined by heredity. 

Indeed, for Nietzsche, the superior type of man determines not only his own fate, but also often that of the society in which he lives and of mankind as a whole. 

This leads to the third of Hicks’s three types of individualism, namely the question of whether the “decisive events in human life and history occur because individuals, generally exceptional individuals, make them happen”, or whether they are the consequence of factors outside of individual control such as economic factors, or perhaps the unfolding of some divine plan. 

On this topic, I suspect Nietzsche would side with Thomas Carlyle, and Hegel, in holding that history is indeed shaped, in large part, by the actions of so-called ‘great men’, or, in Hegelian terms, ‘world-historical figures’. This is among the reasons he places such importance on the emerging Übermensch.

Admittedly, Nietzsche repeatedly disparages Carlyle in many of his writings, and, in Ecce Homo, repudiates any notion of equating of his Übermensch with what he dismisses as Carlyle’s “hero cult” (Ecce Homo: iii, 1).

However, as Will Durant writes in The Story of Philosophy, Nietzsche often reserved his greatest scorn for those contemporaries, or near-contemporaries (e.g. the Darwinians and Social Darwinists), who had independently developed ideas that, in some respects, paralleled or anticipated his own, if only as a means of emphasizing his own originality and claim to priority, or, as Durant puts it, of “covering up his debts” (The Story of Philosophy: p373).

Indeed, we might even characterize this tendency of Nietzsche to disparage those whose ideas had anticipated his own as a form of what he himself might have called ‘ressentiment’.

Hitler, of course, would also surely have agreed with Carlyle regarding the importance of great men, and indeed saw himself as just such a ‘world historical figure’.

Indeed, for better or worse, given Hitler’s gargantuan impact on world history from his coming to power in Germany in the 1930s arguably right up to the present day, we might even find ourselves reluctantly forced to agree with him.[5]

As I have written previously, it is ironic that:

The much-maligned ‘Great Man Theory of History’… became perennially unfashionable among historians at almost precisely the moment that, in the persons of first Lenin and later Hitler, it was proven so tragically true.”

Thus, just as the October revolution would surely never have occurred without Lenin as driving force and instigator, so the Nazis, though they may have existed, would surely never have come to power, let alone achieved the early diplomatic and military successes that briefly conferred upon them mastery over Europe, without Hitler as führer and chief political tactician.

Yet, for Nietzsche, individual freedom is restricted, or at least should be restricted, to such ‘great men’, or at least to a wider, but still narrow, class of superior types, and not extended at all to the great mass of humanity. 

Thus, I believe that we can reconcile Nietzsche’s apparently conflicting statements regarding the merits of, on the one hand, individualism, and, on the other, collectivism, by recognizing that he endorsed individualism only for a small elite cadre of superior men. 

Indeed, for Nietzsche, the vast majority of mankind, namely those whom he disparages as ‘herd animals’, are simply incapable of such individualism and should hence be subject to strict authoritarian control in the service of the superior caste of man. They are certainly not ‘ends in themselves’, as contended by Kant.

Indeed, Nietzsche’s prescription for the majority of mankind is not so much collectivist, as it is authoritarian, since Nietzsche regards the lives of such people, even as a collective, as essentially worthless. 

The mass of men must be controlled and denied freedom, not for the benefit of such men themselves even as a collective, but rather for the benefit of the superior type of man.[6]

Yet if the authoritarianism to be imposed upon the mass of mankind ultimately serves the individualism of the superior type of man, so the individualism of this superior type of man itself also serves a higher purpose, namely the higher evolution of mankind, which, in Nietzsche’s view, necessarily depends on the superior type of man.

Therefore, Hicks himself concludes that, rather than the lives of the mass of mankind serving the interests of the higher man, even the individualism accorded the higher type of man, and even the Übermensch himself, ultimately serves the interest of the collective – namely, the human species as a whole.

Thus, in Beyond Good and Evil, Nietzsche ridicules individualism as a moral law, proclaiming, “What does nature care for the individual!”, and insisting instead:

“The moral imperative of nature [does not] address itself to the individual… but to nations, races, ages, and ranks; above all, however, to the animal ‘man’ generally, to mankind” (Beyond Good and Evil: v:188). 

National Socialist Ideology 

As I have already said, however, Hicks’s understanding of Nietzsche’s philosophy is rather better than his understanding of the ideology of German National Socialism. 

This is not altogether surprising. Hicks is, after all, a professor of philosophy by background, not an historian.

Hicks’s lack of training in historical research is especially apparent in his handling of sources, which leaves a great deal to be desired.

For example, several quotations attributed to Hitler by Hicks are sourced, in their associated footnotes, to one of two works – namely, The Voice of Destruction (aka Hitler Speaks) by Hermann Rauschning and Unmasked: Two Confidential Interviews with Hitler in 1931 – both of which are now widely considered by historians to be fraudulent, and to contain no authentic or reliable quotations from Hitler whatsoever.[7]

Other quotations are sourced to secondary sources, such as websites and biographies of Hitler, which makes it difficult to determine both the primary source from which the quotation is drawn, and in what context and to whom the remark was originally said or written.

This is an especially important point, not only because some sources (e.g. Rauschning) are very untrustworthy, but also because Hitler often carefully tailored his message to the specific audience he was addressing, and was certainly not above concealing or misrepresenting his real views and long-term objectives, especially when addressing the general public, foreign statesmen and political rivals.

Perhaps for this reason, Hicks seemingly misunderstands the true nature of the National Socialist ideology, and Hitler’s own Weltanschauung in particular.

However, in Hicks’s defence, the core tenets of Nazism are almost as difficult to pin down as those of Nietzsche. 

Unlike in the case of Nietzsche, this is not so much because of either the inherent complexity of the ideas, or the impenetrability of their presentation—though admittedly, while Nazi propaganda, and Hitler’s speeches, tend to be very straightforward, even crude, both Hitler’s Mein Kampf and Rosenberg’s The Myth of the Twentieth Century make for a difficult read. 

Rather the problem is that German National Socialist thinking, or what passed for thinking among National Socialists, never really constituted a coherent ideology in the first place. 

After all, like any political party that achieves even a modicum of electoral success, let alone seriously aspires to win power, the Nazis necessarily represented a broad church.

Members and supporters included people of many divergent and mutually contradictory opinions on various political, economic and social matters, not to mention ethical, philosophical and religious views and affiliations. 

If they had not done so, then the Party could never have attracted enough votes in order to win power in the first place. 

Indeed, the NSDAP was especially successful in presenting itself as ‘all things to all people’ and in adapting its message to whatever audience was being addressed at a given time. 

Therefore, it is quite difficult to pin down what exactly were the core tenets of German National Socialism, if indeed they had any. 

However, we can simplify our task somewhat by restricting ourselves to an altogether simpler question: namely what were the key tenets of Hitler’s own political philosophy? 

After all, one key tenet of German National Socialism that can surely be agreed upon is the so-called ‘Führerprinzip’, whereby Hitler himself was to be the ultimate authority for all political decisions and policy. 

Therefore, rather than concerning ourselves with the political and philosophical views of the entire Nazi leadership, let alone the whole party or everyone who voted for them, we can instead restrict ourselves to a much simpler task – namely, determining the views of a single individual: the infamous Führer himself. 

This, of course, makes our task substantially easier.

However, we now encounter yet another problem: namely, it is often quite difficult to determine what Hitler’s real views actually were. 

Thus, as I have already noted, like all the best politicians, Hitler tailored and adapted his message to the audience that he was addressing at any given time. 

Thus, for example, when he delivered speeches before assembled business leaders and industrialists, his message was quite different from the one he would deliver before audiences composed predominantly of working-class socialists, and his message to foreign dignitaries, statesmen and the international community was quite different to the hawkish and militaristic one presented in Mein Kampf, to his leading generals and before audiences of fanatical German nationalists.

In short, like all successful politicians, Hitler was an adept liar, and what he said in public and actually believed in private were often two very different things. 

National Socialism and Religion 

Perhaps the area of greatest contrast between Hitler’s public pronouncements and his private views, as well as Hicks’ own most egregious misunderstanding of Nazi ideology, concerns religion. 

According to Hicks, Hitler and the Nazis were believing Christians. Thus, he reports: 

“[Hitler] himself sounded Christian themes explicitly in public pronouncements” (p84). 

However, the key words here are “in public pronouncements”. Hitler’s real views, as expressed in private conversations among confidants, seem to have been very different. 

Thus, Hitler was all too well aware that publicly attacking Christianity would prove an unpopular stance with large sections of the public, and would not only alienate much of his erstwhile support but also provoke opposition from powerful figures in the churches whom he could ill afford to alienate. 

Hitler therefore postponed his eagerly envisaged Kirchenkampf, or settling of accounts with the churches, until after the war, if only because he wished to avoid fighting a war on multiple fronts. 

Thus, Speer, in his post-war memoirs, noting that “in Berlin, surrounded by male cohorts, [Hitler] spoke more coarsely and bluntly than he ever did elsewhere”, quotes Hitler as declaring in such company more than once: 

Once I have settled my other problems… I’ll have my reckoning with the church. I’ll have it reeling on the ropes” (Inside the Third Reich: p123). 

Hicks also asserts: 

The Nazis took great pains to distinguish the Jews and the Christians, condemning Judaism and embracing a generic type of Christianity” (p83).  

In fact, the form of Christianity that was, at least in public, espoused by the Nazis, namely what they called ‘Positive Christianity’, was far from “a generic type of Christianity”. Rather, it was a very idiosyncratic, indeed quite heretical, take on the Christian faith, which attempted to divest Christianity of its Jewish influences and portray Jesus as an Aryan hero fighting against Jewish power, while even incorporating elements of Gnosticism and Germanic paganism.

Moreover, far from attempting to deny the connection between Christianity and Judaism, there is some evidence that Hitler actually followed Nietzsche in directly linking Christianity to Jewish influence. Thus, in his diary, Goebbels quotes Hitler directly linking Christianity and Judaism:  

“[Hitler] views Christianity as a symptom of decay. Rightly so. It is a branch of the Jewish race. This can be seen in the similarity of religious rites. Both (Judaism and Christianity) have no point of contact to the animal element” (The Goebbels Diaries, 1939-1941: p77). 

Likewise, in his Table Talk, carefully recorded by Bormann and others, Hitler declares on the night of the 11th July: 

The heaviest blow that ever struck humanity was the coming of Christianity. Bolshevism is Christianity’s illegitimate child. Both are inventions of the Jew” (Table Talk: p7). 

Here, in linking Christianity and Judaism, and attributing Jewish origins to Christianity, Hitler is, of course, following Nietzsche, since a central theme of the latter’s The Antichrist is that Christianity is indeed very much a Jewish invention. 

Indeed, the whole thrust of this quotation will immediately be familiar to anyone who has read Nietzsche’s The Antichrist. Thus, just as Hitler describes Christianity as “the heaviest blow that ever struck humanity”, so Nietzsche himself declared: 

Christianity remains to this day the greatest misfortune of humanity” (The Antichrist: 51). 

Similarly, just as Hitler describes “Bolshevism” as “Christianity’s illegitimate child”, so Nietzsche anticipates him in detecting this family resemblance, in The Antichrist declaring: 

The anarchist and the Christian have the same ancestry” (The Antichrist: 57). 

Thus, in this single quoted passage, Hitler aptly summarizes the central themes of The Antichrist in a single paragraph, the only difference being that, in Hitler’s rendering, the implicit anti-Semitic subtext of Nietzsche’s work is made explicit. 

Elsewhere in Table Talk, Hitler echoes other distinctly Nietzschean themes with regard to Christianity.  

Thus, just as Nietzsche famously condemned Christianity as an expression of slave morality and ‘ressentiment’ with its origins among the Jewish priestly class, so Hitler declares: 

Christianity is a prototype of Bolshevism: the mobilisation by the Jew of the masses of slaves with the object of undermining society” (Table Talk: p75-6). 

This theme is classically Nietzschean.

Another common theme is the notion of Christianity as rejection of life itself. Thus, in a passage that I have already quoted above, Nietzsche declares: 

To attack the passions at their roots, means attacking life itself at its source: the method of the Church is hostile to life… The saint in whom God is well pleased, is the ideal eunuch. Life terminates where the ‘Kingdom of God’ begins” (Twilight of the Idols: iv:1) 

Hitler echoes a similar theme, himself declaring in one passage where he elucidates a social Darwinist ethic:

Christianity is a rebellion against natural law, a protest against nature. Taken to its logical extreme, Christianity would mean the systematic cultivation of the human failure” (Table Talk: p51). 

In short, in his various condemnations of Christianity from Table Talk, Hitler is clearly drawing on his own reading of Nietzsche. Indeed, in some passages (e.g. Table Talk: p7; p75-6), he could almost be accused of plagiarism. 

Historians like to belittle the idea that Hitler was at all erudite or well-read, suggesting that, although famously an avid reader, his reading material was largely limited to such material as Streicher’s Der Stürmer and a few similarly crude antisemitic pamphlets circulating in the dosshouses of pre-War Vienna. 

Hicks rightly rejects this view. From these quotations from Hitler’s Table Talk alone, I would submit that it is clear that Hitler had read Nietzsche.

Thus, although, as we will see, Nietzsche was certainly no Nazi or proto-National Socialist, nevertheless Hitler himself may indeed have regarded himself, in his own distorted way, as in some sense a ‘Nietzschean’.[8]

National Socialism and Socialism 

Another area where Hicks misinterprets Nazi ideology, upon which many other reviewers have rather predictably fixated, is the vexed and perennial question of the extent to which the National Socialist regime, which, of course, in name at least, purported to be socialist, is indeed accurately described as such. 

Mainstream historians generally reject the view that the Nazis were in any sense truly socialist.

This rejection of the notion that the Nazis were at all socialist may partly reflect the fact that many of the historians writing about this period of history are themselves socialist, or at least sympathetic to socialism, and hence wish to absolve socialism of any association with, let alone responsibility for, National Socialism.[9]

Hicks, who, for his part, seems to be something of a libertarian as far as I can make out, reaches a very different conclusion: namely, that the National Socialists were indeed socialists and that socialism was in fact a central plank of their political programme. 

Thus, Hicks asserts: 

The Nazis stood for socialism and the principle of the central direction of the economy for the common good” (p106). 

Certainly, Hicks is correct that the Nazis stood for “the central direction of the economy”, albeit not so much “for the common good” of humanity, nor even of all German citizens, as for the “for the common good” only of ethnic Germans, with this “common good” being defined in Hitler’s own idiosyncratic terms and involving many of these ethnic Germans dying in his pointless wars of conquest. 

Thus, Hayek, who equates socialism with big government and a planned economy, argues in The Road to Serfdom that the Nazis, and the Fascists of Italy, were indeed socialist.

However, I would argue that socialism is most usefully defined as entailing, not only the central direction of the economy, but also economic redistribution and the promotion of socio-economic equality.[10]

Yet, in Nazi Germany, the central direction of the economy was primarily geared, not towards promoting socioeconomic equality, but rather towards preparing the nation and economy for war, in addition to various useful and not so useful public works projects and vanity architectural projects.[11]

To prove the Nazis were socialist, Hicks relies extensively on the party’s 25-point programme.

Yet this document was issued in 1920, when Hitler had yet to establish full control over the nascent movement, and still reflected the socialist ethos of many of the movement’s founders, whom Hitler was later to displace. 

Thus, German National Socialism, like Italian Fascism, did indeed very much begin on the left, attempting to combine socialism with nationalism, and thereby provide an alternative to the internationalist ethos of orthodox Marxism.  

However, long before either movement had ever even come within distant sight of power, each had already toned down, if not abandoned, much of their earlier socialist rhetoric. 

Certainly, although he declared the party programme inviolable and immutable and blocked any attempt to amend or repudiate it, Hitler also took few steps whatsoever to actually implement most of the socialist provisions in the 25-point programme.[12]

Hicks also reports: 

So strong was the Nazi party’s commitment to socialism that in 1921 the party entered into negotiations to merge with another socialist party, the German Socialist Party” (p17). 

Yet the party in question, the German Socialist Party was, much like the NSDAP itself, as much nationalist in orientation and ideology as it was socialist. Moreover, although Hicks admits “the negotiations fell through”, what he does not mention is that the deal was scuppered by Hitler himself, then not yet the movement’s leader but already the NSDAP’s most dynamic organizer and speaker, who specifically vetoed any notion of a merger, threatening to resign if he did not have his way. 

To further buttress his claim that the Nazis were indeed socialist, Hicks also quotes extensively from Joseph Goebbels, Hitler’s Minister for Propaganda (p18). 

Goebbels was indeed among the most powerful figures in the Nazi leadership besides Hitler himself, and the quotations attributed to him by Hicks do indeed suggest leftist socialist sympathies.

However, Goebbels was, in this respect, something of an exception and outlier among the National Socialist leadership, since he had defected from the Strasserist wing of the Party, which is widely recognized as having been relatively more left-wing in orientation, and as taking the ‘socialism’ in ‘National Socialism’ rather more seriously, than the rest of the party leadership. Yet this wing was first marginalized, then suppressed, under Hitler’s leadership long before the Nazis came to power, with most remaining sympathizers, Goebbels excepted, purged or fleeing during the Night of the Long Knives.

Goebbels may have retained some socialist sympathies thereafter. However, despite his power and prominence in the Nazi regime, he does not seem to have had any great success at steering the regime towards socialist redistribution or other leftist policies.

In short, while National Socialism may have begun on the left, by the time the regime attained power, and certainly while they were in power, their policies were not especially socialist, at least in the sense of being economically redistributive or egalitarian. 

Nevertheless, it is indeed true that, with their centrally-planned economy and large government-funded public works projects, the National Socialist regime probably had more in common with the contemporary left, at least in a purely economic sense, than it would with the neoconservative, neoliberal free market ideology that has long been the dominant force in Anglo-American conservatism. 

Thus, whether the Nazis were indeed ‘socialist’ ultimately depends on precisely how we define the word ‘socialist’. 

Nazi Antisemitism 

Yet one aspect of National Socialist ideology was indeed, in my view, left-wing and socialist in origin—namely their anti-Semitism.

Of course, anti-Semitism is usually associated with the political right, more especially the so-called ‘far right’. 

However, in my view, anti-Semitism is always fundamentally leftist in nature. 

Thus, Marxists claim that society is controlled by a conspiracy of wealthy capitalists who control the mass media and exploit and oppress everyone else. 

Nazis and anti-Semites, on the other hand, claim that society is controlled by a conspiracy of wealthy Jewish capitalists who control the mass media and exploit and oppress everyone else. 

The distinction between Nazism and Marxism is, then, a comparatively narrow one.

Antisemites and Nazis believe that our capitalist oppressors are all, or mostly, Jewish. Marxists, on the other hand, take no stance on the matter either way and generally prefer not to talk about it.

Indeed, columnist Rod Liddle even claims:

Many psychoanalysts believe that the Left’s aversion to capitalism is simply a displaced loathing of Jews” (Liddle 2005).

Or, as a famous nineteenth century German political slogan had it: 

Antisemitism is the socialism of fools.

Indeed, anti-Semites who blame all the problems of the world on the Jews always remind me of Marxists who blame all the problems of the world on capitalism and capitalists, feminists who blame their problems on men, and black people who blame all their personal problems on ‘the White Man’. 

Interestingly, Nietzsche himself recognized this same parallel, writing of what he calls “ressentiment”, an important concept in his philosophy, with connotations of repressed or sublimated envy and inferiority complex, that: 

This plant blooms its prettiest at present among Anarchists and anti-Semites” (On the Genealogy of Morals: ii: 11). 

In other words, Nietzsche seems to be recognizing that both socialism and anti-Semitism reflect what modern conservatives often term ‘the politics of envy’. 

Thus, in The Will to Power, Nietzsche observes: 

The anti-Semites do not forgive the Jews for having both intellect and money” (The Will to Power: IV:864). 

Nietzschean Antisemitism

Yet Jews themselves are, in Nietzsche’s thinking, by no means immune from the “ressentiment” that he also diagnoses in socialists and antisemites.
On the contrary, it is Jewish ressentiment vis a vis successive waves of conquerors—especially the Romans—that, in Nietzsche’s thinking, birthed Christianity, slave morality and the original transvaluation of values that he so deplores. 

Thus, Nietzsche relates in Beyond Good and Evil that: 

The Jews—a people ‘born for slavery,’ as Tacitus and the whole ancient world say of them; the chosen people among the nations, as they themselves say and believe—the Jews performed the miracle of the inversion of valuations, by means of which life on earth obtained a new and dangerous charm for a couple of millenniums. Their prophets fused into one the expressions ‘rich,’ ‘godless,’ ‘wicked,’ ‘violent,’ ‘sensual,’ and for the first time coined the word ‘world’ as a term of reproach. In this inversion of valuations (in which is also included the use of the word ‘poor’ as synonymous with ‘saint’ and ‘friend’) the significance of the Jewish people is to be found; it is with them that the slave-insurrection in morals commences” (Beyond Good and Evil: V: 195).[13]

Thus, in The Antichrist, Nietzsche talks of “the Christian” as “simply a Jew of the ‘reformed’ confession”, and “the Jew all over again—the threefold Jew” (The Antichrist: 44), concluding: 

Christianity is to be understood only by examining the soil from which it sprung—it is not a reaction against Jewish instincts; it is their inevitable product” (The Antichrist: 24). 

All of this, it is clear from the tone and context, is not at all intended as a compliment—either to Jews or Christians.

Thus, lest we have any doubts on this matter, Nietzsche declares in Twilight of the Idols:

Christianity as sprung from Jewish roots and comprehensible only as grown upon this soil, represents the counter-movement against that morality of breeding, of race and of privilege:—it is essentially an anti-Aryan religion: Christianity is the transvaluation of all Aryan values, the triumph of Chandala values, the proclaimed gospel of the poor and of the low, the general insurrection of all the down-trodden, the wretched, the bungled and the botched, against the ‘race,’—the immortal revenge of the Chandala as the religion of love” (Twilight of the Idols: VI:4). 

While modern apologists may selectively cite passages from Nietzsche in order to portray him as a philo-Semite and admirer of the Jewish people, it is clear that, by modern politically correct standards, many of Nietzsche’s statements about Jews are very politically incorrect, and it is doubtful that he would be able to get away with them today.

Thus, if Nietzsche rejected the anti-Semitism of his sister, brother-in-law and former idol, Wagner, he nevertheless constructed in its place a new anti-Semitism all of his own, which, far from blaming the Jews for the crucifixion of Christ, instead blamed them for the genesis of Christianity itself—a theme that is, as we have seen, directly echoed by Hitler in his Table Talk.

Thus, Nietzsche remarks in The Antichrist:

“[Jewish] influence has so falsified the reasoning of mankind in this matter that today the Christian can cherish anti-Semitism without realizing that it is no more than the final consequence of Judaism” (The Antichrist: 24). 

An even more interesting passage regarding the Jewish people appears just a paragraph later, where Nietzsche observes: 

The Jews are the very opposite of décadents: they have simply been forced into appearing in that guise, and with a degree of skill approaching the non plus ultra of histrionic genius they have managed to put themselves at the head of all décadent movements (for example, the Christianity of Paul), and so make of them something stronger than any party… To the sort of men who reach out for power under Judaism and Christianity,—that is to say, to the priestly class—décadence is no more than a means to an end. Men of this sort have a vital interest in making mankind sick” (The Antichrist: 24). 

Here, Nietzsche echoes, or perhaps even originates, what is today a familiar theme in anti-Semitic discourse—namely, that Jews champion subversive and destructive ideologies (Marxism, feminism, multiculturalism, mass migration of unassimilable minorities) only to weaken the Gentile power structure and thereby enhance their own power.[14]

This idea finds its most sophisticated (though still flawed) contemporary exposition in the work of evolutionary psychologist and contemporary antisemite Kevin MacDonald, who, in his book, The Culture of Critique (reviewed here), conceptualizes a range of twentieth century intellectual movements such as psychoanalysis, Boasian anthropology and immigration reform as what he calls ‘group evolutionary strategies’ that function to promote the survival and success of the Jews in diaspora. 

Nietzsche, however, goes further and extends this idea to the genesis of Christianity itself. 

Thus, in Nietzsche’s view, Christianity, as an outgrowth of Judaism and an invention of Paul and the Jewish ‘priestly class’, is itself a part of what MacDonald would call a ‘Jewish group evolutionary strategy’, designed to undermine the goyish Roman civilization under whose yoke Jews had been subjugated. 

Nietzsche, a professed anti-Christian but an admirer of the ancient Greeks (or at least of some of them), and even more so of the Romans, would likely agree with Tertullian that Jerusalem has little to do with Athens – or indeed with Rome. However, Hicks observes: 

As evidence of whether Rome or Judea is winning, [Nietzsche] invites us to consider to whom one kneels down before in Rome today” (p70). 

Racialism and the Germans 

Yet, with regard to their racial views, Nietzsche and the Nazis differ, not only in their attitude towards Jews, but also in their attitude towards Germans. 

Thus, according to Hicks: 

The Nazis believe the German Aryan to be racially superior—while Nietzsche believes that the superior types can be manifested in any racial type” (p85). 

Yet, here, Hicks is only half right. While it is certainly true that the Nazis extolled the German people, and the so-called ‘Aryan race’, as a master race, it is not at all clear that Nietzsche indeed believed that the superior type of man can be found among all races. 

Actually, besides a few comments about Jews, mostly favourable, and a few more about the Germans and the English (plus occasionally the French), almost always disparaging, Nietzsche says surprisingly little about race.

However, on reflection, this is not at all surprising, since, being resident throughout his life in a Europe that was then very much monoracial, Nietzsche probably had little if any direct contact with nonwhite races or peoples. 

Moreover, living as he did in the nineteenth century, when European power was at its apex, and much of the world controlled by European colonial empires, Nietzsche, like most of his European contemporaries, probably took white European racial superiority very much for granted. 

It is therefore only natural that his primary concern was the relative superiority and status of the various European subtypes – hence his occasional comments regarding Jews, English, Germans and other groups such as the French. 

Hicks asserts: 

The Nazis believe contemporary German culture to be the highest and the best hope for the world—while Nietzsche holds contemporary German culture to be degenerate and to be infecting the rest of the world” (p85). 

Yet this is something of a simplification of National Socialist ideology. 

In fact, the Nazis too believed that the Germany of their own time – namely the Weimar Republic – was decadent and corrupt. 

Indeed, a belief in both national degeneration and in the need for national spiritual rebirth and awakening has been identified as a key defining element in fascism.[15]

Thus, Nietzsche’s own belief in the decadence of contemporary western civilization, and arguably also his belief in the coming Übermensch promising spiritual revitalization, is, in many respects, a paradigmatic and prototypical fascist model.[16]

Of course, the Nazis only believed that German culture was corrupt and decadent before they had themselves come to power and hence supposedly remedied this situation.  

In contrast, Nietzsche never had the chance to rejuvenate the German culture and civilization of his own time – and nor did he live to see the coming Übermensch.[17]

‘The Blond Beast’ 

Hicks contends that Nietzsche’s employment of the phrase “the blond beast” in The Genealogy of Morals is not a racial reference to the characteristically blond hair of Nordic Germans, as it has sometimes been interpreted, but rather a reference to the blond mane of the lion. 

Actually, I suspect Nietzsche may have intended a double-meaning, referring to both the stereotypically blond complexion of the Germanic warrior and to the mane of the lion, and hence comparing the two. 

Indeed, the use of such a double-meaning would be typical of Nietzsche’s poetic, literary and distinctly non-philosophical (or at least not traditionally philosophical) style of writing. 

Thus, even in one of the passages from The Genealogy of Morals employing this metaphor that is quoted by Hicks himself, Nietzsche explicitly refers to the “the blond Germanic beast [emphasis added]” (quoted: p78).[18]

It is true that, in another passage from the same work, Nietzsche contends that “the splendid blond beast” lies at “the bottom of all these noble races”, among whom he includes, not just the Germanic, but also such distinctly non-Nordic races as “the Roman, Arabian… [and] Japanese nobility”, among others (quoted: p79). 

Here, the reference to the Japanese “nobility”, rather than the Japanese people as a whole, is, I suspect, key, since, as we have seen, Nietzsche clearly regards the superior type of man, if present at all, as always necessarily a minority among all races. 

However, in referring to “noble races”, Nietzsche necessarily implies that certain other races are not so “noble”. Just as to say that certain men are ‘superior’ necessarily implies that others are inferior, since superiority is a relative concept, so to talk of “noble races” necessarily supposes the existence of ignoble races too. 

Thus, if the superior type of man, in Nietzsche’s view, only ever represents a small minority of the population among any race, it does not necessarily follow that, in his view, such types are to be found among all races. 

Hicks is therefore wrong to conclude that: 

Nietzsche believes that the superior types can be manifested in any racial type” (p85). 

In short, just because Nietzsche believed that the vast majority of contemporary Germans were poltroons, Chandala, ‘beer drinkers’ and ‘herd animals’, it does not necessarily follow that he also believed that an Australian Aboriginal can be an Übermensch.

A Nordicist, Aryanist, Völkisch Milieu? 

Thus, for all his condemnation of Germans and German nationalism, one cannot help forming the impression on reading Nietzsche that he very much existed within, if not a German nationalist milieu, then at least a broader Nordicist, Aryanist and Völkisch intellectual milieu – the same milieu that birthed certain key strands in the National Socialist Weltanschauung

This is apparent in the very opening lines of The Antichrist, where Nietzsche declares himself, and his envisaged readership, as “Hyperboreans”, a term popular among proto-Nazi occultists, such as some members of the Thule Society, the group which itself birthed what was to become the NSDAP, and which had named itself after the supposed capital of the mythical Hyperborea.[19]

It is also apparent when, in Twilight of the Idols, he disparages Christianity as specifically an “anti-Aryan religion… [and] the transvaluation of all Aryan values” (Twilight of the Idols: VI:4). 

Apologists sometimes insist that Nietzsche, as a philologist by training, was only using the word Aryan in the linguistic sense, i.e. where we would today say ‘Indo-European’.

However, Nietzsche was writing in a time and place, namely Germany in the nineteenth century, when Aryanist ideas were very much in vogue, and, given his own familiarity with such ideas through his sister and brother-in-law, not to mention his former idol Wagner, it would be naïve to think that Nietzsche was not all too aware of the full connotations of this word. 

Moreover, his references to “Aryan values” and an “anti-Aryan religion”, referring, as they do, to values and religion, clearly go beyond merely linguistic descriptors. Though they may envisage a mere cultural inheritance from the proto-Indo-Europeans, they seem, in my reading, to anticipate, not so much a scientific biological conception of race, including race differences in behaviour and psychology, as the mystical, quasi-religious and slightly bonkers ‘spiritual racialism’ of Nietzsche’s self-professed successors, Spengler and Evola.

Less obviously, this affinity for Nazi-style ‘Aryanism’ is also apparent in Nietzsche’s extolment of the Law of Manu and the Indian caste system, and his adoption of the Sanskrit term Chandala (also sometimes rendered as ‘Tschandala’ or ‘caṇḍāla’) as a derogatory term for the ‘herd animals’ whom he so disparages. This is because, although South Asians are obviously far from racially Nordic, proto-Nazi Völkisch esotericists (and their post-war successors) nevertheless had a curious obsession with Hindu religion and caste, and it is from India that the Nazis seemingly took both the swastika symbol and the very word ‘Aryan’. 

Indeed, even Nietzsche’s odd decision to name his prophet of the coming Übermensch, and mouthpiece for his own philosophy, after the Iranian religious figure Zarathustra may reflect this same affinity, since the historical Zoroaster was, of course, Iranian, and hence quintessentially ‘Aryan’. This is despite the fact that the philosophy of the historical Zoroaster, at least as it is remembered today, had little in common with Nietzsche’s own philosophy, but rather represented almost its polar opposite (which may itself have been Nietzsche’s point).

Will Durant, in The Story of Philosophy, writes: 

“Nietzsche was the child of Darwin and the brother of Bismarck. It does not matter that he ridiculed the English evolutionists and the German nationalists: he was accustomed to denounce those who had most influenced him; it was his unconscious way of covering up his debts” (The Story of Philosophy: p373).[20]

This perhaps goes some way to making sense of Nietzsche’s ambiguous relationship to Darwin, whose theory he so often singles out for criticism. 

Perhaps something similar can be said of Nietzsche’s relationship, not only to German nationalism, but also to anti-Semitism, since, as a former disciple of Wagner, he existed within a German nationalist and anti-Semitic intellectual milieu, from which he sought to distinguish himself but which he never wholly relinquished. 

Thus, if Nietzsche condemned the crude anti-Semitism of Wagner, his sister and brother-in-law, he nevertheless constructed in its place a new anti-Semitism that blamed the Jews, not for the crucifixion of Christ, but rather for the very invention of Christianity, Christian ethics and the entire edifice of what he called ‘slave morality’ and the ‘transvaluation of values’. 

Nietzschean Philosemitism or Mere ‘Backhanded Compliments’?

Thus, even Nietzsche’s many apparently favorable comments regarding the Jews can often be interpreted as backhanded compliments.

As a character from a Michel Houellebecq novel observes: 

“All anti-Semites agree that the Jews have a certain superiority. If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead” (Platform: p113). 

Nietzsche himself would, of course, view these implicit, inadvertent concessions of Jewish superiority in anti-Semitic literature as further proof that anti-Semitic sentiments are indeed rooted in repressed envy and what he famously termed ‘ressentiment’.

Indeed, Nazi propaganda provides a good illustration of just this tendency for anti-Semitic sentiments to inadvertently reveal an implicit perception of Jewish superiority.

Thus, in claiming that Jews, who only ever represented a tiny minority of the Weimar-era German population, nevertheless dominated the media, banking, commerce and the professions, Nazi propaganda often came close to inadvertently conceding Jewish superiority – since to dominate the economy of a mighty power like Germany, despite representing only a tiny minority of the population, is hardly a feat indicative of inferiority.

Indeed, Nazi propaganda came close to self-contradiction, since, if Jews did indeed dominate the Weimar-era economy to the extent claimed in Nazi propaganda, this suggests not only that the Jews themselves were far from inferior to the Gentile Germans whom they had ostensibly so oppressed and subjugated, but also that the Germans themselves, in allowing themselves to be so dominated by this tiny minority of Jews in their midst, were something rather less than the Aryan Übermensch and master race of Hitler’s own demented imagining. 

Such backhanded compliments can be understood as a form of what Nietzsche himself would have termed ‘ressentiment’.

Thus, many antisemites have praised the Jews for their tenacity, resilience, survival, alleged clannishness and ethnocentrism, and, perhaps most ominously, their supposed racial purity.

For example, Houston Stewart Chamberlain, a major influence on Nazi race theory and mentor to Hitler himself, nevertheless insisted:

“The Jews deserve admiration, for they have acted with absolute consistency according to the logic and truth of their own individuality and never for a moment have they allowed themselves to forget the sacredness of physical laws because of foolish humanitarian day-dreams which they shared only when such a policy was to their advantage” (Foundations of the Nineteenth Century: p531).[21]

Similarly, contemporary antisemite Kevin MacDonald, arguing that Jews might serve as a model for less ethnocentric white westerners to emulate, professes to:

“Greatly admire Jews as a group that has pursued its interests over thousands of years, while retaining its ethnic coherence and intensity of group commitment” (MacDonald 2004). 

Indeed, even Hitler himself came close to philosemitism in one passage of Mein Kampf, where he declares: 

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self-preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof” (Mein Kampf).[22]

Many of Nietzsche’s own apparently complimentary remarks regarding the Jewish people directly echo the earlier statements of these acknowledged antisemites, as when Nietzsche, like these other writers, extols the Jews for their resilience, tenacity, survival under adverse conditions and alleged racial purity, writing:

“The Jews… are beyond all doubt the strongest, toughest, and purest race at present living in Europe, they know how to succeed even under the worst conditions (in fact better than under favourable ones)” (Beyond Good and Evil: viii:251).

Thus, Hicks himself credits Nietzsche with deploring the slave morality that was the Jews’ legacy, but nevertheless recognizing that this slave morality was a highly successful strategy in enabling them to survive and prosper in diaspora as a defeated and banished people. On this reading, Nietzsche admires them as: 

“Inheritors of a cultural tradition that has enabled them to survive and even flourish despite great adversity… [and] would at the very least have to grant, however grudgingly, that the Jews have hit upon a survival strategy and kept their cultural identity for well over two thousand years” (p82). 

Thus, in one of his many backhanded compliments, Nietzsche declares:

“The Jews are the most remarkable people in the history of the world, for when they were confronted with the question, to be or not to be, they chose, with perfectly unearthly deliberation, to be at any price: this price involved a radical falsification of all nature, of all naturalness, of all reality, of the whole inner world, as well as of the outer” (The Antichrist: 24). 

Defeating Nazism 

In Hicks’s final chapter, he discusses how best Nazism can be defeated. In doing so, he seemingly presupposes that Nazism is, not only an evil that must be defeated, but moreover the ultimate evil that must be defeated at all costs and that we must therefore structure our entire economic and political system in order to achieve this goal and prevent any possibility of Nazism’s reemergence. 

In doing so, he identifies what he sees as “the direct opposite of what the Nazis stood for” as necessarily “the best antidote to National Socialism we have” (p106-7). 

Yet, to assume that there is a “direct opposite” to each of the Nazis’ central tenets is to assume that all political positions can be conceptualized along a single, one-dimensional axis, with the Nazis at one end and Hicks’s own rational free market utopia at the other. 

In reality, the political spectrum is multidimensional and there are many quite different alternatives to each of the tenets identified by Hicks as integral to Nazism, not just a single opposite. 

More importantly, it is not at all clear that the best way to defeat any ideology is necessarily to embrace its polar opposite. 

On the contrary, embracing an opposite form of extremism often only provokes a counter-reaction and is hence counterproductive. In contrast, often the best way to defeat extremism is to actually address some of the legitimate issues raised by the extremists and offer practical, realistic solutions and compromise – i.e. moderation rather than extremism. 

Thus, in the UK, the two main post-war electoral manifestations of what was arguably a resurgent Nazi-style racial nationalism were the National Front in the 1970s and the British National Party (BNP) in the 2000s, each of which achieved some rather modest electoral successes, and inspired a great deal of media-led moral panic, in their respective heydays before quickly fading into obscurity and electoral irrelevance. 

Yet each was defeated, not by the emergence of an opposite extremism of either left or right, nor by the often violent agitation and activism of self-styled ‘anti-fascists’, but rather by the emergence of political figures or movements that addressed some of the legitimate issues raised by the extremist groups, especially regarding immigration, but cloaked them in more moderate language. 

Thus, in the 2000s, the BNP was largely outflanked by the rise of UKIP, which increasingly echoed much of the BNP’s rhetoric regarding mass immigration, but largely avoided any association with racism, white supremacism or neo-Nazism. In short, UKIP outflanked the BNP by being precisely what the BNP had long pretended to be – namely, a non-racist, anti-immigration civic nationalist party – only, in the case of UKIP, the act actually appeared to be genuine.

Meanwhile, in the 1970s, the collapse and implosion of the National Front was largely credited to the rise of Margaret Thatcher, who, in one infamous interview, empathized with the fear of many British people that their country was being “swamped by people with a different culture”, though, in truth, once in power, she did little to arrest or even slow, let alone reverse, this ongoing and now surely irreversible process of demographic transformation.

Misreading Nietzsche 

Why, then, has Nietzsche come to be so misunderstood? How is it that this nineteenth-century German philosopher has come to be claimed as a precursor by everyone from Fascists and libertarians to leftist postmodernists? 

The fault, in my view, lies largely with Nietzsche himself, in particular his obscure, cryptic, esoteric writing style, especially in his infamously indecipherable Thus Spake Zarathustra, but to some extent throughout his entire corpus. 

Indeed, Nietzsche, perhaps to his credit, not so much admits as proudly declares his deliberately impenetrable prose style in one parenthesis from Beyond Good and Evil that has been variously translated as: 

“I obviously do everything to be ‘hard to understand’ myself”

Or: 

“I do everything to be difficultly understood myself” (Beyond Good and Evil: II, 27).

Admittedly, here, the wording, or at least the various English renderings, is itself not entirely clear in its meaning. However, the fact that even this single seemingly simple sentence lends itself to somewhat different interpretations only illustrates the scale of the problem. 

In my view, as I have written previously, philosophers who adopt an aphoristic style of writing generally substitute bad poetry for good arguments. 

Thus, in one sense at least, leftist postmodernists are right to claim Nietzsche as a philosophical precursor: he, like them, delights in pretentious obfuscation and obscurantism.

The best writers, in my view, generally present their ideas in the clearest and simplest language that the complexity of their ideas permits. 

Indeed, the most profound thinkers generally have no need to increase the complexity of ideas that are already inherently complex through deliberately obscure or impenetrable language. 

In contrast, it is only those with banal and unoriginal ideas who adopt deliberately complex and confusing language in order to conceal the banality and unoriginality of their thinking. 

Thus, Richard Dawkins’ First Law of the Conservation of Difficulty states: 

“Obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity.”

What applies to an academic subject applies equally to individual writers – namely, as a general rule, the greater the abstruseness of the prose style, the less the substance and insight. 

Yet, unlike the postmodernists, poststructuralists, deconstructionists, contemporary continental philosophers and other assorted ‘professional damned fools’ who so often claim him as a precursor, Nietzsche is indeed, in my view, an important, profound and original thinker, albeit not quite as brilliant and profound as he evidently regarded himself. 

Moreover, far from replacing good philosophy with bad poetry, Nietzsche is, despite his sometimes abstruse style, also a magnificent prose stylist, the brilliance of whose writing shines through even in translation. 

Conclusion – Was Nietzsche a Nazi? 

The Nazis, we are repeatedly reassured by leftists, misunderstood Nietzsche. Either that or they deliberately misrepresented and misappropriated him. At any rate, one thing is clear – they were wrong. 

This argument is largely correct – as far as it goes. 

The Nazis did indeed engage in a disingenuous and highly selective reading of Nietzsche’s work, selectively quoting his words out of context, and conveniently ignoring, or even suppressing, those passages of his writing where he explicitly condemns both anti-Semitism and German nationalism.

The problem with this view is not that it is wrong – but rather with what it leaves out. 

Nietzsche may not have been a Nazi, but he was certainly an elitist and anti-egalitarian, opposed to socialism, liberalism, democracy and pretty much the entire liberal democratic political and social worldview of the contemporary west.

Indeed, although, today, in America at least, atheism tends to be associated with leftist, or at least liberal, views, and Christianity with conservatism and the right, Nietzsche opposed socialism precisely because he saw it as an inheritance of the very Judeo-Christian ‘slave morality’ to which his philosophy stood in opposition, albeit divested of the very religious foundation which provided this moral system with its ultimate justification and basis.

Thus, in The Will to Power, he observes that “socialists appeal to the Christian instincts” and bewails “the socialistic ideal” as merely “the residue of Christianity and of Rousseau in the de-Christianised world” (The Will to Power: III, 765; IV, 1017). Likewise, he laments of the English in Twilight of the Idols:

“They are rid of the Christian God and therefore think it all the more incumbent upon them to hold tight to Christian morality” (Twilight of the Idols: IX, 5).

While Nietzsche would certainly have disapproved of many aspects of Nazi ideology, it is not at all clear that he would have considered our own twenty-first century western culture as any better. Indeed he may well have considered it considerably worse.

It must be emphasized that Nietzsche’s anti-egalitarianism led him to reject, not only socialism, but also democracy itself, Nietzsche lamenting that even our ostensible ‘rulers’ (i.e. politicians) are themselves so infected by ‘slave morality’ and ‘herd instinct’ that they often come to regard themselves as ruled by, and servants of, the people whom they ostensibly rule. Yet, he nevertheless rejoices:

“In spite of all, what a blessing, what a deliverance from a weight becoming unendurable, is the appearance of an absolute ruler for these gregarious Europeans—of this fact the effect of the appearance of Napoleon was the last great proof” (Beyond Good and Evil: V, 199).

Yet, today, of course, Napoleon no longer stands as the “the last great proof” of this fact. For, since that time, other absolute tyrants—Hitler, Stalin, Mussolini—have emerged in his place, and each, despite (or indeed perhaps because of) their ruthless suppression of their respective peoples, nevertheless enjoyed huge popular support among these very same peoples, far surpassing that of most, if not all, elected democratic and constitutional rulers in Europe during the same time period.

Thus, it is indeed true that Nietzsche was no National Socialist, but neither was he, by any means, a socialist of any other type, nor indeed any other variety of leftist or liberal, and his views on such matters as hierarchy, inequality and democracy, or indeed the role of Jewish people in western history, were far from politically correct by modern standards. 

Indeed, the worldview of this most elitist and anti-egalitarian of thinkers is arguably even less reconcilable with contemporary left-liberal notions of social justice than is that of the Nazis themselves.  

Thus, if the Nazis did indeed misappropriate Nietzsche’s philosophy, then this misappropriation was as nothing compared to the attempt of some leftists, post-modernists, post-structuralists and other such ‘professional damned fools’ to claim this most anti-egalitarian and elitist of thinkers on behalf of the left.

Endnotes

[1] The claim that the foreign policies of governmental regimes of all ideological persuasions are governed less by their ideology than by power politics is, of course, a central tenet, indeed perhaps the central tenet, of the realist school of international relations theory. Indeed, Hitler himself provided a good example of this when, despite his ideological opposition to ‘Judeo-Bolshevism’, his desire for lebensraum in the East, and his disparaging racial attitude to the Slavic peoples, he nevertheless, rebuffed in his efforts to come to an understanding with Britain and France, or to form an alliance with Poland, sent Ribbentrop to negotiate a non-aggression pact with the Soviet Union. It can even be argued that it was Hitler’s abandonment of pragmatic realpolitik in favour of ideological imperative, when he later invaded the Soviet Union, that led to his own, and his regime’s, demise.

[2] Curiously missing from all such lists of philosophical influences on Hitler and Nazism is Nietzsche’s own early idol, Arthur Schopenhauer. Yet it was Schopenhauer’s The World as Will and Representation that Hitler claimed to have carried with him in his knapsack in the trenches throughout the First World War, and Schopenhauer even has the dubious distinction of having his antisemitic comments regarding Jews favourably quoted by Hitler in Mein Kampf. Indeed, according to the recollections of filmmaker Leni Riefenstahl, Hitler professed to prefer Schopenhauer over Nietzsche, the Führer being quoted by her as observing: 

“I can’t really do much with Nietzsche… He is more an artist than a philosopher; he doesn’t have the crystal-clear understanding of Schopenhauer. Of course, I value Nietzsche as a genius. He writes possibly the most beautiful language that German literature has to offer us today, but he is not my guide” (quoted: Hitler’s Private Library: p107). 

Somewhat disconcertingly, this assessment of Nietzsche – namely, as “more… artist than philosopher” and far from “crystal-clear” in his writing style, but nevertheless a brilliant prose stylist, the beauty of whose writing shines through even in English translation – actually rather reflects my own judgement. Moreover, I too am an admirer of Schopenhauer’s writings, albeit not so much his philosophy, let alone his almost mystical metaphysics, as his proto-Darwinian biologism and theory of human behaviour and psychology.
Yet, on reflection, Schopenhauer is surely rightly omitted from lists of the philosophical influences on Nazism. Save for the antisemitic remarks quoted in Mein Kampf, which are hardly an integral part of Schopenhauer’s philosophy, there is little in Schopenhauer’s body of writing, let alone in his philosophical writings, that can be seen to jibe with National Socialism policy or ideology.
Indeed, Schopenhauer’s philosophy, to the extent it is prescriptive at all, advocates an ascetic withdrawal from worldly affairs, including politics, and championed art as a form of escapism. This hardly provides a basis for state policy of any kind.
Admittedly, it is true that Hitler’s lifestyle, in some ways, did indeed accord with the ascetic abstinence advised by Schopenhauer. Thus, in many respects, even as dictator, the Führer nevertheless lived a frugal, spartan life, being, in later life, reportedly a vegetarian who also abstained from alcohol. He also, for most of his adult life, seems to have had little in the way of an active sex life. In further accord with Schopenhauer’s teaching, he was an art lover who seemingly found escapism in both movies and the operas of Wagner, the latter himself a disciple of Schopenhauer.
However, the NSDAP programme, like all political programmes, necessarily involved active engagement with the world in order to, as they saw it, improve things, something Schopenhauer did not generally advocate, and would, I suspect, have dismissed as largely futile.
Thus, modern left-liberal apologists for Nietzsche often attempt to characterize Nietzsche as a largely apolitical thinker. This is, of course, deluded apologetics. However, as applied to Schopenhauer, the claim is indeed largely valid.
Indeed, Hitler himself aptly summarized why Schopenhauer’s philosophy could never be a basis for any type of active political programme, let alone the radical programme of the NSDAP, in a comment quoted by Hanfstaengl, where he bemoans Schopenhauer’s influence on his former mentor Eckart, remarking: 

“Schopenhauer has done Eckart no good. He has made him a doubting Thomas, who only looks forward to a Nirvana. Where would I get if I listened to all his [Schopenhauer’s] transcendental talk? A nice ultimate wisdom that: To reduce on[e]self to a minimum of desire and will. Once will is gone all is gone. This life is War” (quoted in: Hitler’s Philosophers: p24). 

Thus, while the quotation attributed to Hitler by Riefenstahl, and quoted in this endnote a few paragraphs above, in which he professed to prefer the philosophy of Schopenhauer over that of Nietzsche, may indeed be an authentic recollection, it nevertheless appears that, over time, the German Führer was to revise that opinion. Thus, many years later, in 1944, Hitler was recorded as concluding:

“Schopenhauer’s pessimism, which springs partly, I think, from his own line of philosophical thought and partly from subjective feeling and the experiences of his own personal life, has been far surpassed by Nietzsche” (Table Talk: p720).

[3] Hicks does not mention the figure who was, in my perhaps eccentric view, the greatest thinker associated with the NSDAP, namely Nobel Prize winning ethologist Konrad Lorenz, perhaps because, unlike the other thinkers whom he does discuss, Lorenz only joined the NSDAP several years after they had come to power, and his association with the NSDAP could therefore be dismissed as purely opportunistic. Alternatively, Hicks may have overlooked Lorenz simply because Lorenz was a biologist rather than a philosopher, though it should be noted that Lorenz also made important contributions to philosophy as well, in particular his pioneering work in evolutionary epistemology.

[4] It is true that Nietzsche does not actually envisage or advocate a return to the ‘master morality’ of an earlier age, but rather the construction of a new morality, the outline of which could, at the time he wrote, only be foreseen in rough outline. Nevertheless, it is clear he favoured this ‘master morality’ over the ‘slave morality’ that he associated with Christianity and our own post-Christian ethics, and also that he viewed the coming morality of the Übermensch as having much more in common with the ‘master morality’ of old than with the Christian ‘slave morality’ he so disparages. 

[5] Hitler exerted a direct impact on world history from 1933 until his death in 1945. Yet Hitler, or at least the spectre of Hitler, continues to exert an indirect but not insubstantial impact on contemporary world politics to this day, as a kind of ‘bogeyman’, against whom we define our views, and whom we invoke as a kind of threat or form of guilt-by-association. This is most obvious in the familiar ‘reductio ad Hitlerum’.
Of course, in considering the question of whether Hitler may indeed qualify as a ‘great man’, we are not using the word ‘great’ in a moral or acclamatory sense. Rather, we are employing the term in the older sense, meaning ‘large in size’. This exculpatory clarification we might aptly term the Farrakhan defence.

[6] Collectivists are, almost by definition, authoritarian, since collectivism necessarily demands that individual rights and freedoms be curtailed, restricted or abrogated for the benefit of the collective, and this invariably requires coercion because people have evolved to selfishly promote their own inclusive fitness at the expense of that of rivals and competitors. However, authoritarianism can also be justified on non-collectivist grounds. Nietzsche’s proposed restrictions of the individual liberty of the ‘herd animal’ and ‘Chandala’ seem to me to be justified, not by reference to the individual or collective interests of such ‘Chandala’, but rather by reference to the interests of the superior man and of the higher evolution of mankind.

[7] The second of these is a pair of interviews that were supposedly conducted with Hitler by German journalist Richard Breiting in 1931, to which Hicks sources several supposed quotations from Hitler (p117; p122; p124; p125; p133). Unfortunately, however, the interviews, only published in 1968 by Yugoslavian journalist Edouard Calic several decades after they were supposedly conducted, contain anachronistic material and are hence almost certainly post-war forgeries. Richard Evans, for example, described them as having obviously been in large part, if not completely, made up by Calic himself (Evans 2014).
The other is Hermann Rauschning’s The Voice of Destruction, published in Britain under the title Hitler Speaks, to which Hicks sources several quotations from Hitler (p120; p125; p126; p134). This is now widely recognised as a fraudulent work of wartime propaganda. Historians now believe that Rauschning actually met with Hitler on only a few occasions, that he was certainly not a close confidant, and that most, if not all, of the conversations with Hitler recounted in The Voice of Destruction are pure inventions.
Thus, for example, Ian Kershaw in the first volume of his Hitler biography, Hitler, 1889–1936: Hubris, makes sure to emphasize in his preface: 

“I have on no single occasion cited Hermann Rauschning’s Hitler Speaks [the title under which The Voice of Destruction was published in Britain], a work now regarded to have so little authenticity that it is best to disregard it altogether” (Hitler, 1889–1936: Hubris: pxvi). 

Similarly, Richard Evans definitively concludes:

“Nothing was genuine in Rauschning’s book: his ‘conversations with Hitler’ had no more taken place than his conversations with Göring. He had been put up to writing the book by Winston Churchill’s literary agent, Emery Reeves, who was also responsible for another highly dubious set of memoirs, the industrialist Fritz Thyssen’s I Paid Hitler” (Evans 2014).

Admittedly, Rauschning’s work was once taken seriously by mainstream historians, and The Voice of Destruction is cited repeatedly in such early but still-celebrated works as Trevor-Roper’s The Last Days of Hitler, first published in 1947, and Bullock’s Hitler: A Study in Tyranny, first published in 1952. However, Hicks’s own book was published in 2006, by which time Rauschning’s work had long since been discredited as a historical source. 
Indeed, it is something of an indictment of the standards, not to mention the politicized and moralistic tenor, of what we might call ‘Hitler historiography’ that this work was ever taken seriously by historians in the first place. First published in the USA in 1940, it was clearly a work of anti-Nazi wartime propaganda and much of the material is quite fantastic in content.
For example, there are bizarre passages about Hitler having “long been in bondage to a magic which might well have been described, not only in metaphor but in literal fact, as that of evil spirits” and about Hitler “wak[ing] at night with convulsive shrieks”, and one such passage describes how Hitler: 

“Stood swaying in his room, looking wildly about him. ‘He! He! He’s been here!’ he gasped. His lips were blue. Sweat streamed down his face. Suddenly he began to reel off figures, and odd words and broken phrases, entirely devoid of sense. It sounded horrible. He used strangely composed and entirely un-German word-formations. Then he stood quite still, only his lips moving. He was massaged and offered something to drink. Then he suddenly broke out – ‘There, there! In the corner! Who’s that?’ He stamped and shrieked in the familiar way. He was shown that there was nothing out of the ordinary in the room, and then he gradually grew calm” (The Voice of Destruction: p256).

Yet, oddly, the first doubts regarding the authenticity of the conversations reported in The Voice of Destruction were raised, not by mainstream historians studying the Third Reich, but rather by an obscure Swiss researcher, Wolfgang Haenel, who first presented his thesis at a conference organized by a research institute widely associated with so-called ‘holocaust denial’. Moreover, other self-styled ‘holocaust revisionists’ were among the first to endorse Haenel’s critique. Yet his conclusions are now belatedly accepted by virtually all mainstream scholars in the field. This perhaps suggests that such ‘revisionist’ research is not always without value.

[8] It must be acknowledged here that the question of Hitler’s religious views is a matter of some controversy. It is sometimes suggested that the hostile view of Christianity expressed in Hitler’s Table Talk reflects less the opinions of Hitler and more those of Hitler’s private secretary, Martin Bormann, who was responsible for transcribing much of this material. Bormann is indeed known to have been hostile to Christianity, and Speer, who disliked Bormann, indeed remarks in his memoirs that:

“If in the course of such a monologue Hitler had pronounced a more negative judgment upon the church, Bormann would undoubtedly have taken from his jacket pocket one of the white cards he always carried with him. For he noted down all Hitler’s remarks that seemed to him important; and there was hardly anything he wrote down more eagerly than deprecating comments on the church” (Inside the Third Reich: p95). 

However, it is important to note that Speer does not deny that Hitler himself did indeed make such remarks. Indeed, it is hardly likely that Bormann, a faithful, if not obsequious, acolyte of the Führer, would ever dare to falsely attribute to Hitler remarks which the latter had never uttered, or views to which he did not subscribe. At any rate, the views attributed to Hitler in Table Talk are amply corroborated in other sources, such as Goebbels’s diaries and indeed Speer’s memoirs, both of which I have also quoted above.
It is also true that, elsewhere in Table Talk, Hitler talks approvingly of Jesus as “most certainly not a Jew”, and as fighting “against the materialism of His age, and, therefore, against the Jews”. This is, of course, a very odd and eccentric, not to mention historically unsupported, perspective on the historical Jesus.
However, it is interesting to note that, despite his disdain for Christianity, Nietzsche too, for all his more orthodox view of the historical Jesus, nevertheless professes to admire Jesus in The Antichrist. Indeed, in repeatedly placing the blame for Christianity not on Jesus himself, but rather on Paul of Tarsus, whom he accuses of transforming Christianity into “a rallying point for slaves of all kinds against the élite, the masters and those in dominant authority” (Table Talk: p722), Hitler is again following Nietzsche, who, in The Antichrist, similarly condemns Paul as the true founder of modern Christianity and of the Christian slave morality that infected western man.
Just to clarify, I am not here suggesting that Hitler’s views with respect to Christianity are identical to those of Nietzsche. On the contrary, they clearly differ in several respects, not least in their differing perspectives on the historical Jesus.
Nevertheless, Hitler’s religious views, as expressed in his Table Talk, clearly mirror those of Nietzsche in certain key respects, not least in seeing Christianity as the greatest tragedy to befall humanity, as inimical to life itself, and as a malign invention of or inheritance from Jews and Judaism. Given these parallels, it seems almost certain that the German Führer had read the works of Nietzsche and, to some extent, been influenced by his ideas.
Interestingly, elsewhere in his Table Talk, Hitler also condemns atheism, describing it as “a return to the state of the animal”, and argues that “the notion of divinity gives most men the opportunity to concretise the feeling they have of supernatural” (Table Talk: p123; p61). Hitler also often referred to God, and especially providence, in a metaphoric sense. Indeed, he himself even professes a belief in a God, albeit of a decidedly non-Christian, pantheistic form, defining God as “the dominion of natural laws throughout the whole universe” (Table Talk: p6).
However, this only demonstrates that there are other forms of theism, and deism, besides Christianity, and that one can be opposed to Christianity without being opposed to all religion. Thus, Goebbels declares in his Diary: 

“The Fuhrer is deeply religious, though completely anti-Christian” (The Goebbels diaries, 1939-1941: p77). 

The general impression from Table Talk is that Hitler sees himself, perhaps surprisingly, as a scientific materialist, albeit one who, like, it must be said, no few modern scientific materialists, actually often knows embarrassingly little about science. (For example, in Table Talk, Hitler repeatedly endorses Hörbiger’s World Ice Theory, comparing Hörbiger to Copernicus in his impact on cosmology, and even proposing opposing the “pseudo-science of the Catholic Church” with the ‘science’ of Ptolemy, Copernicus and, yes, Hörbiger: Table Talk: p249; p324; p445.)

[9] After all, socialists already have the horrors of Mao, Stalin, Pol Pot and communist North Korea, among many others, on their hands. To be associated with National Socialism in Germany as well would effectively make socialism responsible for, or at least associated with, virtually all of the great atrocities of the twentieth century, rather than merely the vast majority of them. 

[10] Interestingly, although dictionary definitions available on the internet vary considerably, most definitions of ‘socialism’ tend to be much narrower than my definition, emphasizing, in particular, common or public ownership of the means of production. Partly, this reflects, I suspect, the different connotations of the word in British and American English. Thus, in America, where, until recently, socialism was widely seen as anathema, the term was associated with, and indeed barely distinguished from, communism or Marxism. In Britain, however, where the Labour Party, one of the two main parties of the post-war era, traditionally styled itself ‘socialist’, despite generally advocating and pursuing policies closer to what would, on continental Europe, be called ‘social democracy’, the word has much less radical connotations.

[11] Admittedly, reducing unemployment also seems to have been a further objective of some of the large public works projects undertaken under the Nazis (e.g. the construction of the autobahns), and this can indeed be seen as a socialist objective. However, socialists are, of course, not alone in seeing job creation as desirable and high rates of unemployment as undesirable. On the contrary, the desirability of job creation and of reducing unemployment is widely accepted across the political spectrum. Politicians differ primarily on the best way to achieve this goal. Those on the left are more likely to favour increasing public sector employment, including through the sorts of public works projects employed by the Nazis. Neo-liberals are more likely to favour cutting taxes, in order to increase spending and investment, which they theorize will increase private sector employment.

[12] It is possible Hitler’s own views evolved over time, and he too may initially have been more sympathetic to socialist policies. Thus, still largely unexplained is the full story of Hitler’s apparent involvement with the short-lived revolutionary socialist and communist regimes that ruled Munich in 1919, the first of them led by the Jewish socialist Kurt Eisner. Ron Rosenbaum writes:

“One piece of evidence adduced for this view documents Hitler’s successful candidacy for a position on the soldier’s council in a regiment that remained loyal to the short-lived Bolshevik regime that ruled Munich for a few weeks in April 1919. Another is a piece of faded, scratchy newsreel footage showing the February 1919 funeral procession for Kurt Eisner, the assassinated Jewish leader of the socialist regime then in power. Slowed down and studied, the funeral footage shows a figure who looks remarkably like Hitler marching in a detachment of soldiers, all wearing armbands on their uniforms in tribute to Eisner and the socialist regime that preceded the Bolshevik one” (Explaining Hitler: pxxxvii). 

If Hitler was indeed briefly a supporter of the People’s State of Bavaria, which remains far from proven, and this reflected more than mere opportunism and a desire for self-advancement, then it remains to be established when his later anti-Semitic and anti-Marxist views became crystallized. It is clear that, by the time he joined the nascent DAP, Hitler was already a confirmed anti-Semite. However, perhaps he still remained something of a socialist at this time. Indeed, this might explain why he ever joined the German Workers’ Party, which, at that early time, indeed seems to have had a broadly socialist, as well as nationalist, orientation. 

[13] In fact, Nietzsche is wrong to credit the Jews as the first to perform this transvaluation of values that elevated asceticism, poverty and abstinence from worldly pleasures into a positive value. On the contrary, similar and analogous notions of asceticism seem to have had an entirely independent, and apparently prior, origin in the Indian subcontinent, in the form of both Buddhism and especially Jainism.

[14] The supposed proof of this theory is to be found in the state of Israel, where Jews find themselves a majority, and where, far from embodying the ideals of multiculturalism and tolerance that Jews have typically been associated with championing in the west, there is an apartheid state, the persecution of the country’s Palestinian minority, an immigration policy that overtly discriminates against non-Jews, not to mention increasing levels of conservatism and religiosity – proving, so the theory goes, that Jewish subversive iconoclasm is intended only for external Gentile consumption. 

[15] This is, for example, an integral part of the influential definition of fascism espoused by historian and political theorist Roger Griffin in his book, The Nature of Fascism.

[16] In fact, whether Nietzsche indeed envisaged the Übermensch in this way – namely, as a real-world coming savior promising a new transvaluation of values and revitalization of society and civilization that would restore the warrior ethos of the ancients – is not at all clear. The concept of the Übermensch is mentioned quite infrequently in his writings, largely in Thus Spake Zarathustra and Ecce Homo, and is neither fully developed nor clearly explained. It has even been suggested that the importance of this concept in Nietzsche’s thought has been exaggerated, partly on account of its use in the title of George Bernard Shaw’s famous play, Man and Superman, which explores Nietzschean themes.
Elsewhere in his writing, Nietzsche is seemingly resolutely ‘blackpilled’ regarding the inevitability of moral and spiritual decline and the impossibility of any recovery. Thus, in Twilight of the Idols, he reproaches the conservatives for attempting to turn back the clock, declaring that an arrest, let alone a reverse, in the degeneration of mankind and civilization is an impossibility:

“It cannot be helped: we must go forward,—that is to say step by step further and further into decadence (—this is my definition of modern ‘progress’). We can hinder this development, and by so doing dam up and accumulate degeneration itself and render it more convulsive, more volcanic: we cannot do more” (Twilight of the Idols: VIII, 43).

In other words, not only is God indeed dead (as are Zeus, Jupiter, Thor and Wotan), but, unlike Jesus in the Gospels, he can never be resurrected.

[17] Of course, another difference between Nietzsche and the Nazis is that the contemporary German cultures that each regarded as decadent were separated from one another by several decades. Thus, while Hitler may have despised the German culture of the 1920s as decadent, he nevertheless admired in many respects the German culture of Nietzsche’s time and certainly regarded this Germany as superior to the Weimar-era Germany in which he found himself after the First World War. 
Nevertheless, Hitler did not regard the Germany of Nietzsche’s own time as any kind of ‘golden age’ or ‘lost Eden’. On the contrary, he would have deplored the Germany of Nietzsche’s day both for its alleged domination by Jews and the fact that, even after Bismarck’s supposed unification of Germany, Hitler’s own native Austria remained outside the German Reich.
Thus, neither Nietzsche nor Hitler was a mere reactionary nostalgically looking to turn back the clock. On the contrary, Nietzsche considers this an impossibility, as indicated in the passage from Twilight of the Idols quoted in the immediately preceding endnote.
Thus, just as Nietzsche does not yearn for a return to the master morality or paganism of pre-Christian Europe and classical antiquity, but rather for the coming Übermensch and new transvaluation of values that he would deliver, so Hitler’s own ‘golden age’ was to be found, not in the nineteenth century, nor even in classical antiquity, but rather in the new and utopian thousand year Reich he envisaged and sought to construct.

[18] Other English translations render the German as the “blond Teutonic beast [emphasis added]”. At any rate, regardless of the precise translation, it is clear that a reference to the ancient Germanic peoples is intended. 

[19] The influence of such occult ideas on the Nazi leadership is much exaggerated in some popular, sensationalist histories (or pseudohistories), television documentaries and works of fiction dealing with the Nazis. However, the influence of Völkisch occultism on the development of the National Socialist movement is not entirely a myth, and is evident, not only in the name of the Thule Society, which birthed the NSDAP, but also, for example, in the movement’s adoption of the swastika symbol as an emblem and later a flag. Indeed, although generally regarded as dismissive of such bizarre esoteric notions, and wary of their influence on some of his followers (notably Himmler and Hess) who did not share his skepticism, even Hitler himself professed belief in World Ice Theory in his Table Talk (p249; p324; p445).

[20] Nietzsche has an odd attitude to Darwinism and social Darwinism. On the one hand, he frequently disparages Darwin and Darwinism. On the other hand, his moral philosophy directly parallels that of the social Darwinists, albeit bereft of the Darwinian theory that provides the ostensible justification and basis for this prescriptive ethics.
Interestingly, Hitler too has an ambiguous, and, in some respects, similar, relationship with both Darwinism and social Darwinism. On the one hand, Hitler, like Nietzsche, frequently espouses views that read very much like social Darwinism. For example, in Mein Kampf, Hitler writes:

“Those who want to live, let them fight, and those who do not want to fight in this world of eternal struggle do not deserve to live” (Mein Kampf).

Similarly, in his Table Talk, Hitler is quoted as declaring:

“By means of the struggle, the elites are continually renewed. The law of selection justifies this incessant struggle, by allowing the survival of the fittest” (Hitler’s Table Talk: p33).

Both these quotations definitely sound like social Darwinism. Yet, interestingly, Hitler never actually mentions Darwin or Darwinism, his reference to the “law of selection” being the closest he comes to referencing the theory of evolution, and even this is ambiguous, at least in the English rendering. Moreover, in a different passage from Table Talk, Hitler seemingly emphatically rejects the theory of evolution, demanding: 

“Where do we acquire the right to believe that man has not always been what he is now? The study of nature teaches us that, in the animal kingdom just as much as in the vegetable kingdom, variations have occurred. They’ve occurred within the species, but none of these variations has an importance comparable with that which separates man from the monkey — assuming that this transformation really took place” (Hitler’s Table Talk: p248). 

What are we to make of this? Clearly, Hitler often contradicted himself, expressing seemingly inconsistent views on many subjects.
Moreover, neither Hitler nor Nietzsche really understood Darwin’s theory of evolution. Thus, Nietzsche suggested that the struggle between individuals concerns, not mere survival, but rather power (e.g. Twilight of the Idols: xiii:14). In fact, it concerns neither survival nor power as such – but rather reproductive success (which tends to correlate with power, especially among men, which is why men, in particular, are known to seek power). Thus, Spencer’s phrase, ‘survival of the fittest’, is useful only once we recognise that the ‘survival’ promoted by selection is the survival of genes rather than of individual organisms themselves.
But we must recognize that it is possible, and quite logically consistent, to espouse something very similar in content to a social Darwinist moral framework without actually justifying this moral framework by reference to Darwinism.
In short, both Nietzsche and Hitler seem to be advocating something akin to ‘social Darwinism without the Darwinism’.

[21] If Hitler was influenced by Chamberlain, then Chamberlain himself was a disciple of Arthur de Gobineau. The latter, though considered by many as the ultimate progenitor of Nazi race theory, was, far from anti-Semitic, actually positively effusive in his praise for and admiration of the Jewish people. Even Chamberlain, though widely regarded as an anti-Semite, at least with respect to the Ashkenazim, nevertheless professed to admire Sephardi Jews, not least on account of their supposed ‘racial purity’, in particular their refusal to intermingle and intermarry with the Ashkenazim.

[22] The exact connotations of this passage may depend on the translation. The version I have quoted comes from the Manheim edition. However, a different translation renders the passage, not as “The mightiest counterpart to the Aryan is represented by the Jew”, but rather “The Jew offers the most striking contrast to the Aryan”. This alternative translation has rather different, and less flattering, connotations, given that Hitler famously extolled Aryans as the master race. 

The Biology of Beauty

Nancy Etcoff, Survival of the Prettiest: The Science of Beauty (New York: Anchor Books 2000) 

Beauty is in the eye of the beholder.  

This much is true by definition. After all, the Oxford English Dictionary defines beauty as: 

‘A combination of qualities, such as shape, colour, or form, that pleases the aesthetic senses, especially the sight’. 


Thus, beauty is defined as that which is pleasing to an external observer. It therefore presupposes the existence of an external observer, separate from the person or thing that is credited with beauty, from whose perspective the thing or individual is credited with beauty.[1]

Moreover, perceptions of beauty do indeed differ.  

To some extent, preferences differ between individuals, and between different races and cultures. More obviously, and to a far greater extent, they also differ between species.  

Thus, a male chimpanzee would presumably consider a female chimpanzee as more beautiful than a woman. The average human male, however, would likely disagree – though it might depend on the woman. 

As William James wrote in 1890: 

“To the lion it is the lioness which is made to be loved; to the bear, the she-bear. To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her” (Principles of Psychology (vol 2): p387). 

Beauty is therefore not an intrinsic property of the person or object that is described as beautiful, but rather a quality attributed to that person or object by a third-party in accordance with their own subjective tastes. 

However, if beauty is indeed a subjective assessment, that does not mean it is an entirely arbitrary one. 

On the contrary, if beauty is indeed in the ‘eye of the beholder’ then it must be remembered that the ‘eye of the beholder’—and, more importantly, the brain to which that eye is attached—has been shaped by a process of both natural and sexual selection. 

In other words, we have evolved to find some things beautiful, and others ugly, because doing so enhanced the reproductive success of our ancestors. 

Thus, just as we have evolved to find the sight of excrement, blood and disease disgusting, because each was a potential source of infection, and the sight of snakes, lions and spiders fear-inducing, because each likewise represented a potential threat to our survival when encountered in the ancestral environment in which we evolved, so we have evolved to find the sight of certain things pleasing on the eye. 

Of course, not only people can be beautiful. Landscapes, skylines, works of art, flowers and birds can all be described as ‘beautiful’. 

Just as we have evolved to find individuals of the opposite sex attractive for reasons of reproduction, so these other aspects of aesthetic preference may also have been shaped by natural selection. 

Thus, some research has suggested that our perception of certain landscapes as beautiful may reflect psychological adaptations that evolved in the context of habitat selection (Orians & Heerwagen 1992).  

However, Nancy Etcoff does not discuss such research. Instead, in ‘Survival of the Prettiest’, her focus is almost exclusively on what we might term ‘sexual beauty’. 

Yet, if beauty is indeed in the ‘in the eye of the beholder’, then sexiness is surely located in a different part of the male anatomy, but equally subjective in nature. 

Indeed, as I shall discuss below, even in the context of mate preferences, ‘sexiness’ and ‘beauty’ are hardly synonyms. As an illustration, Etcoff quotes that infamous but occasionally insightful pseudo-scientist and all-round charlatan, Sigmund Freud, as observing:  

“The genitals themselves, the sight of which is always exciting, are nevertheless hardly ever judged to be beautiful; the quality of beauty seems, instead, to attach to certain secondary sexual characters” (p19: quoted from Civilization and its Discontents). 

Empirical Research 

A common complaint about the many books that have been written about the evolutionary psychology of sexual attraction (and I say this as someone who has read, at one time or another, a good number of them) is that they are full of untested, or even untestable, speculation – what that other infamous scientific charlatan Stephen Jay Gould famously dismissed as ‘just so stories’.

This is not a criticism that could ever be levelled at Nancy Etcoff’s ‘Survival of the Prettiest’. On the contrary, as befits Etcoff’s background as a working scientist (not a mere journalist or popularizer), it is, from start to finish, full of data from published studies, demonstrating, among other things, the correlates of physical attractiveness, as well as the real-world payoffs associated with physical attractiveness (what is sometimes popularly referred to as ‘lookism’). 

Indeed, in contrast to other scientific works dealing with a similar subject-matter, one of my main criticisms of this otherwise excellent work would be that, while rich in data, it is actually somewhat deficient in theory. 

Youthfulness, Fertility, Reproductive Value and Attractiveness 

A good example of this deficiency in theory is provided by Etcoff’s discussion of the relationship between age and attractiveness. One of the main and recurrent themes of ‘Survival of the Prettiest’ is that, among women, sexual attractiveness is consistently associated with indicators of youth. Thus, she writes: 

“Physical beauty is like athletic skill: it peaks young. Extreme beauty is rare and almost always found, if at all, in people before they reach the age of thirty-five” (p63). 

Yet Etcoff addresses only briefly the question of why it is that youthful women or girls are perceived as more attractive – or, to put the matter more accurately, why it is that males are sexually and romantically attracted to females of youthful appearance. 

Etcoff’s answer is: fertility

Female fertility rapidly declines with age, before ceasing altogether with menopause. 

There is, therefore, in Darwinian terms, no benefit in a male being sexually attracted to an older, post-menopausal female, since any mating effort expended would be wasted, as any resulting sexual union could not produce offspring. 

As for the menopause itself, this, Etcoff speculates, citing scientific polymath, popularizer and part-time sociobiologist Jared Diamond, evolved because human offspring enjoy a long period of helpless dependence on their mother, without whom they cannot survive. 

Therefore, after a certain age, it pays women to focus on caring for existing offspring, or even grandchildren, rather than producing new offspring whom, given their own mortality, they will likely not be around long enough to raise to maturity (p73).[2]

This theory has sometimes been termed the grandmother hypothesis.

However, the decline in female fertility with age is perhaps not sufficient to explain the male preference for youth. 

After all, women’s fertility is said to peak in their early- to mid-twenties.[3]

However, men’s (and boys’) sexual interest seems to peak in respect of females somewhat younger than this, namely in their late-teens (Kenrick & Keefe 1992). 

To explain this, Douglas Kenrick and Richard Keefe propose, following a suggestion of Donald Symons, that this is because girls at this age, while less fertile, have higher reproductive value, a concept drawn from ecology, population genetics and demography, which refers to an individual’s expected future reproductive output given their current age (Kenrick & Keefe 1992). 

Reproductive value in human females (and in males too) peaks just after puberty, when a girl first becomes capable of bearing offspring. 

Before then, there is always the risk she will die before reaching sexual maturity; after, her reproductive value declines with each passing year as she approaches menopause. 
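This age-profile of reproductive value can be made concrete with Fisher’s standard demographic formula: in a stationary population, the reproductive value of an individual of age a is her expected remaining offspring, weighted by her chances of surviving to each future age. The sketch below is purely illustrative – the toy life-table numbers are invented for the example, not drawn from Etcoff or from Kenrick and Keefe:

```python
# Fisher's reproductive value for a stationary (zero-growth) population:
#   v(a) = (1 / l(a)) * sum over x >= a of l(x) * m(x)
# where l(x) is the probability of surviving from birth to age x,
# and m(x) is the expected number of offspring produced at age x.
# The life-table numbers below are invented, purely for illustration.

def reproductive_value(survivorship, fecundity, age):
    """Expected future reproductive output of an individual now aged `age`."""
    future = sum(survivorship[x] * fecundity[x]
                 for x in range(age, len(fecundity)))
    return future / survivorship[age]

# Toy schedule over five age-classes (think of each step as roughly a decade):
l = [1.0, 0.9, 0.85, 0.8, 0.7]   # survivorship to each age-class
m = [0.0, 1.5, 1.2, 0.5, 0.0]    # expected offspring at each age-class

rv = [reproductive_value(l, m, a) for a in range(5)]

# Reproductive value peaks at the onset of the reproductive years (age-class 1),
# not at birth, because by then pre-reproductive mortality has been survived;
# thereafter it declines with each passing age-class, reaching zero at 'menopause'.
```

Even with invented numbers, the shape of the curve is the point: it rises until reproduction begins, then falls monotonically, which is precisely why Kenrick and Keefe predict male long-term preferences to track the just-post-pubertal peak rather than the later fertility peak.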

Thus, Kenrick and Keefe, like Symons before them, argue that, since most human reproduction occurs within long-term pair-bonds, it is to the evolutionary advantage of males to form long-term pair-bonds with females of maximal reproductive value (i.e. mid to late teens), so that, by so doing, they can monopolize the entirety of that woman’s reproductive output over the coming years. 

Yet the closest Etcoff gets to discussing this is a single sentence where she writes: 

“Men often prefer the physical signs of a woman below peak fertility (under age twenty). It’s like signing a contract a year before you want to start the job” (p72). 

Yet indicators of youth as a correlate of female attractiveness are a major theme of her book. 

Thus, Etcoff reports that, in a survey of traditional cultures: 

“The highest frequency of brides was in the twelve to fifteen years of age category… Girls at this age are preternaturally beautiful” (p57). 

It is perhaps true that “girls at this age are preternaturally beautiful” – and Etcoff, being female, can perhaps get away with saying this without being accused of being a pervert or ‘paedophile’ for suggesting such a thing. 

Nevertheless, this age of “twelve to fifteen” seems rather younger than most men’s, and even most teenage boys’, ideal sexual partners, at least in western societies. 

Thus, for example, Kenrick and Keefe inferred from their data that around eighteen was the preferred age of sexual partner for most males, even those somewhat younger than this themselves.[4]

Of course, in primitive, non-western cultures, women may lose their looks more quickly, due to inferior health and nutrition, the relative unavailability of beauty treatments, and the toll taken on their health and bodies by repeated childbirth from puberty onward. 

On the other hand, obesity, which decreases sexual attractiveness and increases with age, is more prevalent in the West. 

Moreover, girls in the west now reach puberty somewhat earlier than in previous centuries, and perhaps earlier than in the developing world, probably due to improved nutrition and health. This suggests that females in the west develop the secondary sexual characteristics (e.g. large hips and breasts) that are perceived as attractive because they are indicators of fertility rather earlier than do females in premodern or primitive cultures. 

Perhaps Etcoff is right that girls “in the twelve to fifteen years of age category… are preternaturally beautiful” – though this is surely an overgeneralization and does not apply to every girl of this age. 

However, if ‘beauty’ peaks very early, I suspect ‘sexiness’ peaks rather later, perhaps late-teens into early or even mid-twenties. 

Thus, the latter is dependent on secondary sexual characteristics that develop only in late puberty, namely larger breasts, buttocks and hips. 

Thus, Etcoff reports, rather disturbingly, that: 

“When [the] facial proportions [of magazine cover girls] are fed into a computer, it guesstimates their age to be between six and seven years of age” (p151; citing Jones 1995). 

But, of course, as Etcoff is at pains to emphasize in the next sentence, the women pictured do not actually look as if they are of this age, in their faces, let alone their bodies. 

Instead, she cites Douglas Jones, the author of the study upon which this claim is based, as arguing that the neural network’s estimate of their age can be explained by their display of “supernormal stimuli”, which she defines as “attractive features… exaggerated beyond proportions normally found in nature (at least in adults)” (p151). 

Yet much the same could be said of the unrealistically large, surgically-enhanced breasts favoured among, for example, glamour models. These abnormally large breasts are likewise an example of “supernormal stimuli” that may never be found naturally, as suggested by Doyle & Pazhoohi (2012). 

But large breasts are indicators of sexual maturity that are rarely present in girls before their late-teens. 

In other words, if the beauty of girls’ faces peaks at a very young age, the sexiness of their bodies peaks rather later. 

Perhaps this distinction between what we can term ‘beauty’ and ‘sexiness’ can be made sense of in terms of a distinction between what David Buss calls short-term and long-term mating strategies

Thus, if fertility peaks in the mid-twenties, then, in respect of short-term mating (i.e. one-night stands, casual sex, hook-ups and other one-off sexual encounters), men should presumably prefer somewhat older partners than they prefer as long-term mates – i.e. females of maximal fertility rather than maximal reproductive value – since, in the case of short-term mating, there is no question of monopolizing a woman’s future reproductive output. 

In contrast, cues of beauty, as evinced by relatively younger females, might trigger a greater willingness for males to invest in a long-term relationship. 

This ironically suggests that, contrary to contemporary popular perception, males’ sexual or romantic interest in relatively younger women and girls (i.e. those still in their teens) would tend to reflect more ‘honourable intentions’ (i.e. a focus on marriage or a long-term relationship rather than mere casual sex) than does their interest in older women. 

However, as far as I am aware, no study has ever demonstrated differences in men’s preferences regarding the age-range of their casual sex partners as compared to their preferences in respect of longer-term partners. This is perhaps because, since commitment-free casual sex is almost invariably a win-win situation for men, and most men’s opportunities in this arena are likely to be few and far between, there has been little selection on men to discriminate much at all in their choice of short-term partners. 

Are There Sex Differences in Sexiness? 

Another major theme of ‘Survival of the Prettiest’ is that the payoffs for good-looks are greater for women than for men. 

Beauty is most obviously advantageous in a mating context. But women convert this advantage into an economic one through marriage. Thus, Etcoff reports: 

“The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (p65; see also Udry & Eckland 1984; Hamermesh & Biddle 1994). 

However, there is no such advantage accruing to better-looking male students. 

On the other hand, according to Catherine Hakim in her book Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom (which I have reviewed here, here and here), the wage premium associated with being better looking in the workplace is actually, perhaps surprisingly, greater for men than for women. 

For Hakim herself: 

This is clear evidence of sex discrimination… as all studies show women score higher than men on attractiveness” (Money, Honey: p246). 

However, as I explain in my review of her book, the better view is that, since beauty opens up so many other avenues to social advancement for women, notably through marriage, relatively more beautiful women correspondingly reduce their work-effort in the workplace, since they have no need to pursue social advancement through their careers when they can far more easily achieve it through marriage. 

After all, why bother to earn money when you can simply marry it instead? 

According to Etcoff, there is only one sphere where being more beautiful is actually disadvantageous for women, namely in respect of same-sex friendships: 

Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (p50; citing Krebs & Adinolfi 1975). 

She does not speculate as to why this is so. An obvious explanation is envy and dislike of the sexual competition that beautiful women represent. 

However, an alternative explanation is perhaps that beautiful women do indeed come to have less likeable personalities. Perhaps, having grown used to receiving preferential treatment from and being fawned over by men, beautiful women become entitled and spoilt. 

Men might overlook these flaws on account of their looks, but other women, immune to their charms, may be a different story altogether.[5]

All this, of course, raises the question of why the payoffs for good looks are so much greater for women than for men. 

Etcoff does not address this, but, from a Darwinian perspective, it is actually something of a paradox, which I have discussed previously. 

After all, among other species, it is males for whom beauty affords a greater payoff in terms of the ultimate currency of natural selection – i.e. reproductive success. 

It is therefore male birds who usually evolve more beautiful plumage, while females of the same species are often quite drab, the classic example being the peacock and peahen. 

The ultimate evolutionary explanation for this pattern is called Bateman’s principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972). 

The basis of this theory is that females must make a greater minimal investment in offspring in order to reproduce successfully. Among humans, for example, a female must commit herself to nine months of pregnancy, plus breastfeeding, whereas a male must contribute, at minimum, only a single ejaculate. Females therefore represent the limiting factor in mammalian reproduction, for access to whom males compete. 

One way in which they compete is by display (e.g. lekking). Hence the evolution of the elaborate tail of the peacock. 

Yet, among humans, it is females who seem more concerned with using their beauty to attract mates. 

Of course, women use makeup and clothing to attract men rather than growing or evolving long tails. 

However, behavior is no less subject to selection than morphology, so the paradox remains.[6]

Indeed, the most promising example of a morphological trait in humans that may have evolved primarily for attracting members of the opposite sex (i.e. a ‘peacock’s tail’) is, again, a female trait – namely, breasts. 

This is, of course, the argument that was, to my knowledge, first developed by ethologist Desmond Morris in his book The Naked Ape, which I have reviewed here, and which I discuss in greater depth here. 

As Etcoff herself writes: 

Female breasts are like no others in the mammalian world. Humans are the only mammals who develop rounded breasts at puberty and keep them whether or not they are producing milk… In humans, breast size is not related to the amount or quality of milk that the breast produces” (p187).[7]

Instead, human breasts are, save during pregnancy and lactation, composed predominantly not of milk but of fat. 

This is in stark contrast to the situation among other mammals, who develop breasts only during pregnancy. 

Breasts are not sex symbols to other mammals, anything but, since they indicate a pregnant or lactating and infertile female. To chimps, gorillas and orangutans, breasts are sexual turn-offs” (p187). 

Why then does sexual selection seem, at least on this evidence, to have acted more strongly on women than men? 

Richard Dawkins, in The Selfish Gene (which I have reviewed here), was among the first to allude to this anomaly, lamenting: 

What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (The Selfish Gene: p165). 

Yet this is surely not the case with regard to casual sex (i.e. hook-ups and one-night stands). Here, it is very much men who ardently pursue and women who are sought after. 

For example, in one study on a university campus, 72% of male students agreed to go to bed with a female stranger who propositioned them to this effect, yet not a single one of the 96 females approached agreed to the same request from a male stranger (Clark and Hatfield 1989). 

(What percentage of the students sued the university for sexual harassment was not revealed.) 

Indeed, patterns of everything from prostitution to pornography consumption confirm this – see The Evolution of Human Sexuality (which I have reviewed here). 

Yet humans are unusual among mammals in also forming long-term pair-bonds where male parental investment is the norm. Here, men have every incentive to be as selective as females in their choice of partner. 

In particular, in Western societies practising what Richard Alexander called socially-imposed monogamy (i.e. where there exist large differentials in male resource holdings, but polygynous marriage is unlawful), competition among women for exclusive rights to resource-abundant alpha males may be intense (Gaulin and Boster 1990). 

In short, the advantage to a woman in becoming the sole wife of a multi-millionaire is substantial. 

This, then, may explain the unusual intensity of sexual selection among human females. 

Why, though, is there not evidence of similar sexual selection operating among males? 

Perhaps the answer is that, since, in most cultures, arranged marriages are the norm, female choice actually played little role in human evolution. 

As Darwin himself observed in The Descent of Man, by way of explanation as to why intersexual selection seems, unlike among most other species, to have operated more strongly on human females than on human males:

Man is more powerful in body and mind than woman, and in the savage state he keeps her in a far more abject state of bondage than does the male of any other animal; therefore it is not surprising that he should have gained the power of selection” (The Descent of Man).

Instead, male mating success may have depended less upon what Darwin called intersexual selection and more upon intrasexual selection – i.e. less upon female choice and more upon male-male fighting ability (see Puts 2010). 

Male Attractiveness and Fighting Ability 

Paradoxically, this is reflected even in the very traits that women find attractive in men. 

Thus, although Etcoff’s book is titled ‘Survival of the Prettiest’, and ‘prettiness’ is an adjective usually applied to women (when applied to men, it is, perhaps tellingly, rarely a compliment), Etcoff does discuss male attractiveness too.  

However, Etcoff acknowledges that male attractiveness is a more complex matter than female attractiveness: 

We have a clearer idea of what is going on with female beauty. A handsome male turns out to be a bit harder to describe, although people reach consensus almost as easily when they see him” (p155).[8]

Yet what is notable about the factors that Etcoff describes as attractive among men is that they all seem to be related to fighting ability. 

This is most obviously true of height (p172-176) and muscularity (p176-80). 

Indeed, in a section titled “No Pecs, No Sex”, though she focuses on the role of pectoral muscles in determining attractiveness, Etcoff nevertheless acknowledges: 

Pectoral muscles are the human male’s antlers. Their weapons of war” (p177). 

Thus, height and muscularity have obvious functional utility. 

This is in stark contrast to traits such as the peacock’s tail, which are often a positive handicap to their owner. Indeed, one influential theory of sexual selection, the handicap principle, contends that such ornaments evolved as sexually-selected fitness indicators precisely because they represent a handicap: only a genetically superior male is capable of bearing the cost of such an unwieldy ornament, and hence its possession is, paradoxically, an honest signal of health. 

Yet, if men’s bodies have evolved more for fighting than attracting mates, the same is perhaps less obviously true of their faces. 

Thus, anthropologist David Puts proposes: 

Even [male] facial structure may be designed for fighting: heavy brow ridges protect eyes from blows, and robust mandibles lessen the risk of catastrophic jaw fractures” (Puts 2010: p168). 

Indeed, looking at the facial features of a highly dominant, masculine male face, like that of Mike Tyson, for example, one gets the distinct impression that, if you were foolish enough to try punching it, it would likely do more damage to your hand than to his face. 

Thus, if some faces are, as cliché contends, highly ‘punchable’, then others are presumably at the opposite end of this spectrum. 

This also explains some male secondary sexual characteristics that otherwise seem anomalous, for example, beards. These have actually been found in some studies “to decrease attractiveness to women, yet have strong positive effects on men’s appearance of dominance” (Puts 2010: p166). 

David Puts concludes: 

Men’s traits look designed to make men appear threatening, or enable them to inflict real harm. Men’s beards and deep voices seem designed specifically to increase apparent size and dominance” (Puts 2010: p168). 

Interestingly, Etcoff herself anticipates this theory, writing: 

Beautiful ornaments [in males] develop not just to charm the opposite sex with bright colors and lovely songs, but to intimidate rivals and win the intrasex competition—think of huge antlers. When evolutionists talk about the beauty of human males, they often refer more to their weapons of war than their charms, to their antlers rather than their bright colors. In other words, male beauty is thought to have evolved at least partly in response to male appraisal” (p74). 

Of course, these same traits are also often attractive to females. 

After all, if a tall, muscular man has higher reproductive success because he is better at fighting, then it pays women to preferentially mate with tall, muscular men, so that their male offspring will inherit these traits, and hence themselves enjoy high reproductive success, spreading the woman’s own genes by piggybacking on the superior male’s genes.  

This is a version of sexy son theory. 

In addition, males with fighting prowess are better able to protect and provision their mates. 

However, this attractiveness to females is obviously secondary to the primary role in male-male fighting. 

Moreover, Etcoff admits, highly masculine faces are not always attractive. 

Thus, unlike the “supernormal” or “hyperfeminine” female faces that men find most attractive in women, women rated “hypermasculine” faces as less attractive (p158). This, she speculates, is because they are perceived as overaggressive and unlikely to invest in offspring. 

As to whether such men are indeed less willing to invest in offspring, Etcoff does not discuss this, and there appears to be little evidence on the topic. But the association of testosterone with both physiological and psychological masculinization suggests that the hypothesis is at least plausible. 

Etcoff concludes: 

For men, the trick is to look masculine but not exaggeratedly masculine, which results in a ‘Neanderthal’ look suggesting coldness or cruelty” (p159). 

Examples of males with overly masculine faces are perhaps certain boxers, who tend to have highly masculine facial morphology (e.g. heavy brow ridges, deep-set eyes, wide muscular jaws), but are rarely described as handsome. 

For example, I doubt anyone would ever call Mike Tyson handsome. But, then, no one would ever call him exactly ugly either – at least not to his face. 

An extreme example might be the Russian boxer Nikolai Valuev, whose extreme Neanderthal-like physiognomy was much remarked upon. 

Another example that springs to mind is the footballer Wayne Rooney (also, perhaps not coincidentally, said to have been a talented boxer) who, when he first became famous, was immediately tagged by the newspapers, media and comedians as ugly, despite – or indeed because of – his highly masculine, indeed thuggish, facial physiognomy. 

Likewise, Etcoff reports that large eyes are perceived as attractive in men, even though these are a neotenous trait, associated both with infants and with female beauty (p158). 

This odd finding Etcoff attributes to the fact that large eyes, as an infantile trait, evoke women’s nurturance, a response that evolved in the context of parental investment rather than mate choice. 

Yet this is contrary to the general principle in evolutionary psychology of the modularity of mind and the domain-specificity of psychological adaptations, whereby it is assumed that psychological adaptations for mate choice and for parental investment represent domain-specific modules with little or no overlap. 

Clearly, for psychological adaptations in one of these domains to be applied in the other would result in highly maladaptive behaviours, such as sexual attraction to infants and to your own close biological relatives.[9]

In addition to being more complex and less easy to make sense of than female beauty, male physical attractiveness is also of less importance in determining female mate choice than female beauty is in determining male mate choice. 

In particular, she acknowledges that male status often trumps handsomeness. Thus, she quotes a delightfully cynical, not especially poetic, line from the ancient Roman poet Ovid, who wrote: 

Girls praise a poem, but go for expensive presents. Any illiterate oaf can catch their eye, provided he’s rich” (quoted: p75). 

A perhaps more memorable formulation of the same idea is quoted on the same page from a less illustrious source, namely boxing promoter, numbers racketeer and convicted killer Don King, remarking on a subject I have already discussed, namely the handsomeness (or otherwise) of Mike Tyson: 

Any man with forty two million looks exactly like Clark Gable” (quoted: p75). 

Endnotes

[1] I perhaps belabor this rather obvious point only because one prominent evolutionary psychologist, Satoshi Kanazawa, argues that, since many aspects of beauty standards are cross-culturally universal, beauty standards are not ‘in the eye of the beholder’. I agree with Kanazawa on the substantive issue that beauty standards are indeed mostly cross-culturally universal among humans (albeit not entirely so). However, I nevertheless argue, perhaps somewhat pedantically, that beauty remains strictly in the ‘eye of the beholder’; it is simply that the ‘eye of the beholder’ (and the brain to which it is attached) has been shaped by a process of natural selection so as to make different humans share the same beauty standards. 

[2] While Jared Diamond has indeed made many original contributions to many fields, this idea does not in fact originate with him, even though Etcoff oddly cites him as a source. Indeed, as far as I am aware, it is not even especially associated with Diamond. Instead, it may actually originate with another, lesser known, but arguably even more brilliant evolutionary biologist, namely George C Williams (Williams 1957). 

[3] Actually, pregnancy rates peak surprisingly young, perhaps even disturbingly young, with girls in their mid- to late-teens being most likely to become pregnant from any single act of sexual intercourse, all else being equal. However, the high pregnancy rates of teenage girls are said to be partially offset by their greater risk of birth complications. Therefore, female fertility is said to peak among women in their early- to mid-twenties.

[4] This Kenrick and Keefe inferred from, among other evidence, an analysis of lonely hearts advertisements, wherein, although the age of the female sexual/romantic partner sought was related to the advertised age of the man placing the ad (which Kenrick and Keefe inferred was a reflection of the fact that their own age delimited the age-range of the sexual partners whom they would be able to attract, and whom it would be socially acceptable for them to seek out) nevertheless the older the man, the greater the age-difference he sought in a partner. In addition, they reported evidence of surveys suggesting that, in contrast to older men, younger teenage boys, in an ideal world, actually preferred somewhat older sexual partners, suggesting that the ideal age of sexual partner for males of any age was around eighteen years of age (Kenrick & Keefe 1992).

[5] Etcoff also does not discuss whether the same is true of exceptionally handsome men – i.e. do exceptionally handsome men, like beautiful women, also have problems maintaining same-sex friendships? I suspect that this is not so, since male status and self-esteem are not usually based on handsomeness as such – though they may be based on things related to handsomeness, such as height, athleticism, earnings, and perceived ‘success with women’. Interestingly, however, French novelist Michel Houellebecq argues otherwise in his novel Whatever, in which, after describing the jealousy of one of the main characters, the short, ugly Raphael Tisserand, towards a particularly handsome male colleague, he writes: 

Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63). 

[6] Thus, in other non-human species, behaviour is often subject to sexual selection, in, for example, mating displays, or the remarkable, elaborate and often beautiful, but non-functional, nests built by male bowerbirds, which Geoffrey Miller sees as analogous to human art. 

[7] An alternative theory for the evolution of human breasts is that they evolved, not as a sexually selected ornament, but rather as a storehouse of nutrients, analogous to the camel’s hump, upon which women can draw during pregnancy. On this view, the sexual dimorphism of their presentation (i.e. the fact that, although men do have breasts, they are usually much less developed than those of women) reflects, not sexual selection, but rather the caloric demands of pregnancy. 
However, these two alternative hypotheses are not mutually incompatible. On the contrary, they may be mutually reinforcing. Thus, Etcoff herself mentions the possibility that breasts are attractive precisely because: 

Breasts honestly advertise the presence of fat reserves needed to sustain a pregnancy” (p178). 

On this view, men see fatty breasts as attractive in a sex partner precisely because only women with sufficient reserves of fat to grow large breasts are likely to be capable of successfully gestating an infant for nine months. 

[8] Personally, as a heterosexual male, I have always had difficulty recognizing ‘handsomeness’ in men, and I found this part of Etcoff’s book especially interesting for this reason. In my defence, this is, I suspect, partly because many rich and famous male celebrities are celebrated as ‘sex symbols’ and described as ‘handsome’ even though their status as ‘sex symbols’ owes more to the fact that they are rich and famous than to their actual looks. Thus, male celebrities sometimes become sex symbols despite their looks, rather than because of them. Many famous rock stars, for example, are not especially handsome but nevertheless succeed in becoming highly promiscuous and much sought after by women and girls as sexual and romantic partners. In contrast, men did not suddenly start idealizing fat or physically unattractive female celebrities as sexy and beautiful simply because they became rich and famous.
Add to this the fact that much of what passes for good looks in both sexes is, ironically, normalness – i.e. a lack of abnormalities and averageness – and identifying which men women consider ‘handsome’ had, before reading Etcoff’s book, always escaped me.
However, Etcoff, for her part, might well call me deluded. Men, she reports, only claim they cannot tell which men are handsome and which are not, perhaps to avoid being accused of homosexuality:

Although men think they cannot judge another man’s beauty, they agree among themselves and with women about which men are the handsomest” (p138). 

Nevertheless, there is indeed some evidence that judging male handsomeness is not as clear cut as Etcoff suggests. Thus, it has been found that, not only do men claim to have difficulty telling handsome men from ugly men, but women themselves are also more likely to disagree among themselves about the physical attractiveness of members of the opposite sex than are men (Wood & Brumbaugh 2009; Wake Forest University 2009). 
Indeed, not only do women not always agree with one another regarding the attractiveness of men, sometimes they can’t even agree with themselves. Thus, Etcoff reports: 

A woman makes her evaluations of men more slowly, and if another woman offers a different opinion, she may change her mind” (p76). 

This indecisiveness, for Etcoff, actually makes good evolutionary sense:

If women take a second look, compare notes with other women, or change their minds after more thought, it is not out of indecisiveness but out of wisdom. Mate choice is not just about fertility—most men are fertile most or all of their lives—but about finding a helpmate to bring up the baby” (p77). 

Another possible reason why women may consult other women as to whether a given man is attractive or not is sexy son theory.
On this view, it pays for women to mate with men who are perceived as attractive by other women, because any offspring whom they bear by these men will likely inherit the very traits that made the father attractive to women, and hence themselves be attractive to women, and hence be successful in spreading the woman’s own genes to subsequent generations. 
In other words, being attractive to other women is itself an attractive trait in a male. However, sexy son theory is not discussed by Etcoff.

[9] Another study discussed by Etcoff also reported anomalous results, finding that women actually preferred somewhat feminized male faces over both masculinized and average male faces (Perrett et al 1998). However, Etcoff cautions that: 

The Perrett study is the only empirical evidence to date that some degree of feminization may be attractive in a man’s face” (p159). 

Other studies concur that male faces that are somewhat, but not excessively, masculinized as compared to the average male face are preferred by women. 
However, one study, published just after the first edition of ‘Survival of the Prettiest’ was written, holds out the possibility of reconciling these conflicting findings. This study reported cyclical changes in female preferences, with women preferring more masculinized faces only when in the most fertile phase of their cycle, and at other times preferring more feminine features (Penton-Voak & Perrett 2000). 
This, together with other evidence, has been controversially interpreted as suggesting that human females practice a so-called dual mating strategy, preferring males with more feminine faces, supposedly a marker for a greater willingness to invest in offspring, as social partners, while surreptitiously attempting to cuckold these ‘beta providers’ with DNA from high-T alpha males, by preferentially mating with the latter when they are most likely to be ovulating (see also Penton-Voak et al 1999; Bellis & Baker 1990). 
However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology. 
While the intensity of women’s sex drive does indeed seem to fluctuate cyclically, the evidence for more fine-grained changes in female mate preferences should be treated with caution. 

References 

Bateman (1948) Intra-sexual selection in Drosophila. Heredity 2(3): 349–368. 
Bellis & Baker (1990) Do females promote sperm competition? Data for humans. Animal Behaviour 40: 997–999. 
Clark & Hatfield (1989) Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality 2(1): 39–55. 
Doyle & Pazhoohi (2012) Natural and augmented breasts: Is what is not natural most attractive? Human Ethology Bulletin 27(4): 4–14. 
Gaulin & Boster (1990) Dowry as female competition. American Anthropologist 92(4): 994–1005. 
Gildersleeve et al (2014) Do women’s mate preferences change across the ovulatory cycle? A meta-analytic review. Psychological Bulletin 140(5): 1205–1259. 
Hamermesh & Biddle (1994) Beauty and the labor market. American Economic Review 84(5): 1174–1194.
Jones (1995) Sexual selection, physical attractiveness, and facial neoteny: Cross-cultural evidence and implications. Current Anthropology 36(5): 723–748. 
Kenrick & Keefe (1992) Age preferences in mates reflect sex differences in mating strategies. Behavioral and Brain Sciences 15(1): 75–133. 
Orians & Heerwagen (1992) Evolved responses to landscapes. In Barkow, Cosmides & Tooby (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 555–579). Oxford University Press. 
Penton-Voak et al (1999) Menstrual cycle alters face preference. Nature 399: 741–742. 
Penton-Voak & Perrett (2000) Female preference for male faces changes cyclically: Further evidence. Evolution and Human Behavior 21(1): 39–48. 
Perrett et al (1998) Effects of sexual dimorphism on facial attractiveness. Nature 394(6696): 884–887. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans. Evolution and Human Behavior 31(3): 157–175. 
Trivers (1972) Parental investment and sexual selection. In Campbell (Ed.), Sexual Selection and the Descent of Man (pp. 136–179). Aldine, Chicago. 
Udry & Eckland (1984) Benefits of being attractive: Differential payoffs for men and women. Psychological Reports 54(1): 47–56.
Wake Forest University (2009) Rating attractiveness: Consensus among men, not women, study finds. ScienceDaily, 27 June 2009. 
Williams (1957) Pleiotropy, natural selection, and the evolution of senescence. Evolution 11(4): 398–411. 
Wood & Brumbaugh (2009) Using revealed mate preferences to evaluate market force and differential preference explanations for mate selection. Journal of Personality and Social Psychology 96(6): 1226–1244.
Wood et al (2014) Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review 6(3): 229–249.  

Selwyn Raab’s ‘Five Families’: A History of the New York Mafia, Heavily Slanted Towards Recent Times

Selwyn Raab, Five Families: The Rise, Decline and Resurgence of America’s Most Powerful Mafia Empires (London: Robson Books 2006) 

With Italian-American organized crime now surely in terminal decline, the time is ripe for a definitive history of the New York Mafia. Unfortunately, Selwyn Raab’s ‘Five Families: The Rise, Decline, and Resurgence of America’s Most Powerful Mafia Empires’ is not it.[1]

Focus on Late-Twentieth Century

Its first failing as a history of the New York Mafia is that, despite its length, the book gives only cursory coverage to the early history of the New York Mafia. 

Instead, it is heavily weighted towards the recent history of the five families. 

This is perhaps unsurprising. After all, the author, Selwyn Raab, is, by background, a journalist, not a historian. 

Indeed, it is surely no coincidence that Raab’s history only starts to become in-depth at about the time he began covering the activities of the New York mob in real time as a reporter for The New York Times in 1974.

To give an idea of this bias I will cite page numbers. 

The book comprises over 700 pages, plus title pages, ‘Prologue’, ‘Introduction’, ‘Afterword’, ‘Epilogue’, two appendices, ‘Bibliography’ and ‘Index’, themselves comprising a further 100 or so pages. 

The first two chapters are introductory, and mostly cite examples of Mafia activities from the mid- to late twentieth century. 

The chronological narrative begins in Chapter 3, titled ‘Roots’, which purports to cover both the origin of the New York Mafia and its prehistory. 

In doing so, Raab repeats uncritically the Sicilian Mafia’s own romantic foundation myth, claiming that the Mafia began during Sicily’s long history of foreign occupation as a form of “self-preservation against perceived corrupt oppressors” (p14). 

Indeed, even his supposedly “less romantic and more likely” etymology for the word ‘Mafia’ is that it derives from “a combined Sicilian-Arabic slang expression that means acting as a protector against the arrogance of the powerful” (p14). 

Actually, according to historian John Dickie, rather than protecting the common people against corrupt oppression by outsiders, the Sicilian Mafia was itself corrupt, exploitative and oppressive from the very beginning (see Dickie’s books, Cosa Nostra and Blood Brotherhoods). 

Raab is vague on the precise origins of the Sicilian Mafia, but does insist that mafia cosche evolved “over hundreds of years” (p14).

This is, again, likely a Mafia myth. The Mafia, like the Freemasons (from whom its initiation rituals are, at least according to Dickie, likely borrowed), exaggerates its age to enhance its venerability and mystique.[2]

Of course, Raab’s text is a history of the New York Mafia. One can therefore overlook his inadequate treatment of its Sicilian prehistory. 

Unfortunately, his treatment of early Mafia activity in New York itself is barely better. 

Early turn-of-the-century New York Mafiosi like Giuseppe Morello and Lupo the Wolf are not even mentioned. Nor are their successors, the Terranova brothers. Neither is there any mention of the barrel murders, counterfeiting trial or Mafia-Camorra War

Even their nemesis, the Italian-born NYPD officer Joe Petrosino, murdered in Sicily while investigating the backgrounds of transplanted Mafiosi with a view to deportation, merits only a cursory two and a bit pages – something almost as derisory as the “bare, benchless concrete slab serv[ing] as a road divider and pedestrian-safety island” that ostensibly commemorates him in Lower Manhattan today (p19-21). 

There are just nineteen pages in Raab’s chapter on the New York Mafia’s ‘Roots’. The next chapter is titled ‘The Castellammarese War’, and focuses upon the gang war of that name, which began in 1930, although the chapter begins with a discussion of the effects of the national Prohibition law that came into force in 1920. 

Therefore, since the Morello Family seems to have had its roots in the 1890s, that’s over twenty years of New York Mafia history (not to mention, according to Raab, several centuries of Sicilian Mafia history) passed over in less than twenty pages. 

Readers interested in the origins of the five families, and indeed how there came to be five families in the first place, should look elsewhere. I would recommend instead Mike Dash’s The First Family, which uncovers the formerly forgotten history of the first New York Mafia family, the Morello family, the ancestor of today’s Genovese Family, arguably still the most powerful mafia family in America. 

Although I have yet to read it, James Jacobs’ The Mob and the City also comes highly recommended in many quarters. 

If Raab’s account of the first few decades of American Mafia history is particularly inadequate, his coverage of the next few decades of organized crime history is barely better. 

Here, we get the familiar potted history of the New York Mafia, with each of the usual suspects – Luciano, Anastasia, Costello, Genovese – successively assuming center stage. 

Moreover, despite his ostensible focus on Italian-American organized crime, unlike in the Mafia itself (which, though it has survived countless RICO prosecutions, would surely never survive a class-action lawsuit for racial discrimination), non-Italians are not arbitrarily excluded from Raab’s account.

On the contrary, each makes their usual, almost obligatory, cameo: Bugsy Siegel assassinated in the Los Angeles home of Virginia Hill, Abe ‘Kid Twist’ Reles ‘accidentally’ falling from a sixth-floor window, and, of course, the shadowy and much-mythologized Meyer Lansky always lurking in the background like a familiar anti-Semitic conspiracy theory.

It is not that Raab actually misses anything out, but rather that he doesn’t really add much. 

Instead, we get another regurgitation of the familiar Mafia history with which anyone who has had the misfortune of reading any of the countless earlier popular histories of the American Mafia will be all too familiar. 

Then, after just 100 pages, we are already at the Apalachin meeting of 1957. 

That’s over fifty years of twentieth-century American Mafia history condensed into less than 100 pages. More to the point, it’s over half the entire period of American Mafia history covered by Raab’s book (which was published in 2005), dealt with in less than a seventh of the total text. 

After a brief diversion, namely two chapters discussing supposed Mafia involvement in the Kennedy assassination, we are into the 1970s, and now Raab’s coverage suddenly becomes in-depth and authoritative. 

Is All Publicity Bad Publicity?

The period of New York Mafia history upon which Raab’s text focusses (namely from the 1970s until the turn of the century) may indeed have marked the high point of Mafia mystique, with blockbuster movies like the overrated ‘Godfather’ trilogy glamourizing Italian-American organized crime like never before.

However, it arguably also marked the beginning of the New York Mafia’s decline.[3]

Indeed, the Mafia’s notoriety during this period may even have been a factor in its decline. After all, publicity and media infamy are, for a criminal organization, at best a mixed blessing.  

True, a media-cultivated aura of power and untouchability may discourage victims from running to the police, and also deter rival criminals from attempting to challenge mafia hegemony. 

However, criminal conspiracies operate best when they are outside the public eye, let alone the scrutinizing glare of journalists, movie-makers, government and law enforcement. 

There is, after all, a reason why the Mafia is a secret society whose very existence is, at least in theory, a closely-guarded secret.

It is no accident, then, that those crime bosses who openly courted the limelight and revelled in their own notoriety did not enjoy long and successful careers, Al Capone and John Gotti representing the two best-known American cases of organized crime bosses who made the mistake of courting media attention.[4]

Thus, John Gotti inevitably takes up more than his share of chapters in Raab’s book, just as, during his lifetime, he enjoyed more than his share of headlines in Raab’s own New York Times. In short, the so-called ‘Dapper Don’ invariably made for good copy. 

However, courting the media is rarely a sensible way to run a crime empire. 

A famous adage of the marketing industry supposedly has it that all publicity is good publicity.

This may be true, or at least close to being true, in, say, the realm of rock or rap music, where controversy is often a principal selling point.

However, in the world of organized crime, almost the exact opposite could be said to be true. 

Thus, much of the press coverage of Gotti may indeed have been flattering, even fawning, or at least perceived by Gotti as such. Certainly he himself often seemed to revel in his own infamy and also became something of a folk hero to some sections of the public. 

However, the more he became a folk hero by thumbing his nose at the authorities, the more of a threat he posed to those authorities, in part precisely because he had become something of a folk hero.

The result was that, although the press initially dubbed him ‘The Teflon Don’, because, supposedly, no charges would ever stick, Gotti actually enjoyed less than a decade of freedom as Gambino family boss before being convicted and imprisoned. 

By courting the limelight, he also invited the attention of, not just the media, but also of law enforcement and thereby ensured that his fifteen minutes of fame would be followed by a lifetime of incarceration. 

A rather more sensible approach was perhaps that adopted by a lesser-known contemporary of, and rival to, Gotti, namely Genovese family boss Vincent ‘The Chin’ Gigante, who, far from courting publicity like Gotti, let ‘front boss’ Fat Tony Salerno take the bulk of law enforcement heat, while himself attempting, initially quite successfully, to pass under the radar. 

While fictional Mafia boss Tony Soprano spent the bulk of the television series in which he was the central character attempting to conceal his visits to a psychiatrist from his Mafia colleagues, Gigante made sure his own (supposed) mental health difficulties were as public as possible, feigning mental illness for decades in order to avoid law enforcement attention. 

Nicknamed ‘The Oddfather’ by the press for his bizarre antics, he was regularly pictured walking the streets of Greenwich Village in a bathrobe and was said to regularly check into a local psychiatric hospital whenever law enforcement heat was getting too much.[5]

Wary of phone taps and bugs, Gigante also insisted that other members of the crime family of which he was head never mention him by name, but rather, if they had to refer to him, simply to point towards their chin or curl their fingers into the shape of a letter ‘c’. 

These precautions fooled law enforcement for years, and it was long believed in law enforcement circles that Gigante was retired and that the real boss was indeed front boss Tony Salerno.

Largely as a result, Gigante enjoyed at least a decade and a half as Genovese boss before he too belatedly joined his erstwhile rival John Gotti behind bars. 

A History of the Mafia – or of Law Enforcement Efforts to Destroy Them?

Of course, the secrecy with which mafiosi like Gigante took pains to veil their affairs presents a challenge, not just to law enforcement, but also to the historian. 

After all, criminals are, almost by definition, dishonest.[6]

Even those mafiosi who did break ranks, and the code of omertà, by providing testimony to the authorities, or sometimes publishing memoirs and giving interviews on television (or, most recently, even starting their own YouTube channels), are notoriously unreliable sources of information, being prone to exaggerate their own role and importance in events, while also (rather contradictorily) minimizing their role in any serious prosecutable offences for which they have yet to serve time. 

Perhaps a more trustworthy source of information—or so one would hope—is law enforcement.  

Yet, relying on the latter as a source, Raab’s account inevitably ends up being as much a history of law enforcement efforts to bring the Mafiosi to justice as it is of the Mafia itself. 

Thus, for example, a whole chapter, entitled ‘The Birth of RICO’, is devoted to the development and passage into law of the Racketeer Influenced and Corrupt Organizations Act, or RICO Act, of 1970.

Indeed, amusingly, but perhaps not especially plausibly, Raab even suggests that the name of this act, or rather the acronym by which the Act, and prosecutions under it, became known, may have been inspired by the once-famous final line of the seminal 1930s Warner Brothers gangster movie, Little Caesar, Raab reporting that George Robert Blakey, the lawyer largely responsible for the drafting of the Act: 

“Refuses to explain the reason for the RICO acronym. But he is a crime-film buff and admits that one of his favorite movies is Little Caesar, a 1931 production loosely modeled on Al Capone’s life… Dying in an alley after a gun battle with the police, Little Caesar [Caesar Enrico ‘Rico’ Bandello] gasps one of Hollywood’s famous closing lines—also Blakey’s implied message to the Mob: ‘Mother of Mercy—is this the end of Rico?’” (p177). 

Of course, the passage into law of the RICO statute, as it turned out, was indeed a seminal event in American Mafia history, facilitating, as it did, the successful prosecution and incarceration of countless Mafia bosses and other organized crime figures.

Nevertheless, in this chapter, and indeed elsewhere in the book, the five families themselves inevitably fade very much into the background, and Raab concentrates instead on the tactics of and conflicts among law enforcement themselves. 

Yet, in Raab’s defence, such material is often no less interesting than the stories of mafiosi themselves. 

Indeed, one thing to emerge from portions of Raab’s narrative is that conflicts and turf wars between different branches, levels and layers of law enforcement—local, state and federal—were often as fiercely, if less bloodily, fought as were territorial disputes among mafiosi themselves. 

After all, mafiosi rarely take the trouble to confine their crimes to the jurisdiction of a single police precinct. The jurisdictions of different branches and levels of law enforcement therefore frequently overlapped.  

Yet, such was the fear of police corruption and mafia infiltration that different branches of law enforcement rarely trusted one another enough to share intelligence, lest a confidential source, informant, undercover agent, phone tap, bug or wire be thereby compromised, let alone to allow a rival branch to take the lion’s share of the credit, and newspaper headlines, for bringing a high-profile mafia scalp to justice. 

A ‘Pax Mafiosa’ in New York?

In contrast, territorial disputes between crime families actually seem to have been surprisingly muted, and were usually ironed out through ‘sit-downs’ (i.e. effectively an appeal to arbitration by a higher authority) rather than resort to violence. 

Thus, despite its familiarity as a formulaic cliché of mafia movies from The Godfather onwards, there appears never actually to have been another war between rival Mafia families in New York after the Castellammarese War ended in 1931. 

Mafia wars did occasionally occur—e.g. the Banana War and the First, Second and Third Colombo Wars. However, these were all intra-family affairs, involving control over a single family, rather than conflict between different families, though families did sometimes attempt to sponsor ‘regime change’ in other families.[7]

The Castellammarese War therefore stands as the New York Mafia equivalent of the First World War, with each of the nascent five family factions joined together in two grand coalitions, just as, before and during World War One, the great powers (and a host of lesser powers) joined together in two grand alliances.

However, whereas the First World War only promised to be the war to end all wars, the Castellammarese War has some claim to actually delivering on this promise, with the independent sovereignty of each of the five families thenceforth mutually respected in a sort of Westphalian Peace, or Pax Mafiosa, that lasted for the better part of a century. 

In The Godfather (the novel, not the film), Michael Corleone quotes his father as claiming that, had “the [five] Families been running the State Department there would never have been World War II”, because the rival powers would have been smart enough to iron out their problems without resort to unnecessary bloodshed and economic expense. 

On the evidence of New York Mafia history as recounted by Raab in ‘The Five Families’, Don Corleone may, perhaps surprisingly, have had a point. 

Perhaps, then, our world leaders and statesmen could indeed learn something from lowlife criminals about the importance of avoiding the unnecessary bloodshed and expense of war. 

Honor Among Thieves – and Among Men of Honor? 

Another general conclusion that can be drawn from Raab’s history is that, if there is, as cliché contends, but little honor among thieves, there is seemingly scarcely any more honor even among self-styled ‘men of honor’. 

This is even true of the most influential figure in American Mafia history, Charles ‘Lucky’ Luciano, described by Raab in one photo caption as “the visionary godfather and designer of the modern Mafia”, and elsewhere as “the Mafia’s visionary criminal genius”, who is even credited, in some tellings, with creating the Commission and even the five families themselves.[8]

Yet Luciano was a serial traitor. 

First, he betrayed his ostensible ally, Joe ‘The Boss’ Masseria, in the Castellammarese War, setting him up for assassination by his rival Salvatore Maranzano. Then, just a few months later, he betrayed and arranged the murder of Maranzano as well, leaving Luciano himself free to take the position of, if not capo di tutti capi, then at least the most powerful mafioso in New York, and probably in America, if not the world. 

In this series of betrayals, Luciano set the pattern for the twentieth century mob. 

The key is to make sure that you betray what turns out to be the losing side, if only on account of your betrayal.

The powerful Gambino crime family provides a particularly good illustration of this. Indeed, for much of the twentieth century, staging an internal coup or arranging the assassination of the current incumbent seems to have been almost the accepted means of securing the succession.

Thus, John Gotti famously became boss of the family by arranging the murder of his own boss, Paul Castellano, just as Castellano’s own predecessor, the eponymous Carlo Gambino, had himself allegedly been complicit in the murder of his own former boss, Albert Anastasia, who was himself the main suspect in the murder of his own predecessor, Vincent Mangano.

However, such treachery was by no means restricted to the Gambinos. On the contrary, Joe Colombo became boss of the crime family now renamed in his honor by betraying his own boss, Joe Magliocco (and Bonanno boss Joe Bonanno), to the bosses of the three other families whom he had been ordered to kill. 

Meanwhile, one of Colombo’s successors, Carmine ‘The Snake’ Persico, had also been at war with his own boss, Joe Profaci, in the First Colombo War, but then, in a further betrayal, switched allegiances, setting up his former allies, the Gallo brothers, for assassination by the Profaci leadership. For his trouble, Persico earned himself the perhaps unflattering sobriquet of ‘The Snake’, but also ultimately the leadership of the crime family.

As for Luciano himself, not only was he a serial traitor, he was also guilty of what was, in Mafia eyes, an even more egregious and unpardonable transgression—namely, he was a police informer.

Thus, during his trial for prostitution offences, Raab reveals: 

“The most embarrassing moment for the proud Mafia don was Dewey’s disclosure that in 1923, when he was twenty-five, Luciano had evaded a narcotics arrest by informing on a dealer with a larger cache of drugs. 
‘You’re just a stool pigeon,’ Dewey belittled him. ‘Isn’t that it?’ 
‘I told them what I knew,’ a downcast Luciano replied” (p55). 

In this, Luciano was again to set a pattern that, somewhat later in the century, many other mafiosi would eagerly follow. 

Indeed, by the end of the century, the fabled Mafia code of omertà seems to have been, rather like its earlier supposed ban on drug-dealing, almost as often honored in the breach as actually complied with, at least for mafiosi otherwise facing long spells of incarceration with little prospect of release.

Indeed, since Abe ‘Kid Twist’ Reles (who, being non-Italian, was not, of course, a ‘made man’, and who, at any rate, died under mysterious circumstances), no informant, to my recollection, has ever paid the ultimate price for his betrayal. 

Instead, the main consequence of their breaking the code of omertà seems to have been reduced sentences, government protection under the witness protection program and a premature end to their Mafia careers.

However, an end to their mafia careers rarely meant an end to their criminal careers, and few turncoat mafiosi seem to have gone straight, let alone been genuinely repentant.

The most famous case is that of Gambino underboss, and Gotti nemesis, Sammy ‘The Bull’ Gravano, then the highest-ranking New York mafioso ever to become a cooperating witness, who helped put John Gotti and a score of other leading mafiosi behind bars with his testimony.

In return for this testimony, Gravano was to serve less than five years in prison, despite admitting involvement in as many as nineteen murders.

In defence of this exceptionally lenient sentence, Leo Glasser, the judge responsible for sentencing both Gravano and Gotti, naïvely insisted that Gravano’s craven treachery was “the bravest thing I have ever seen” and declared “there has never been a defendant of his stature in organized crime who has made the leap he has made from one social planet to another” (p449). 

However, just a few years after his release, Gravano was convicted of masterminding a multi-million-dollar ecstasy ring in Arizona, where the authorities had relocated him for his own protection. 

His status as a notorious mafia stoolie seems to have impeded his reentry into the crime world hardly at all. 

On the contrary, it seems to have been precisely his status as a famed former Gambino family underboss that recommended him to the starstruck young ecstasy trafficking crew who, having befriended his son, were only too happy to allow the infamous New York crime boss Sammy Gravano to assume leadership of the crime ring which they themselves had established and built up. 

By the end of the century, only the secretive and close-knit Bonanno Family, long the only New York family still to restrict membership to those of full-Sicilian (not just Southern Italian) ancestry, could brag that they were, perhaps for this reason, the only New York family never to have had a fully-inducted member become a cooperating government witness.  

Yet even this claim, though technically true, was largely disingenuous. 

Indeed, the Bonannos had actually been expelled from the Commission for reportedly being on the verge of inducting undercover FBI agent Joe Pistone (alias ‘Donnie Brasco’) into the family just before his status as an undercover FBI agent and infiltrator had been revealed by the authorities.

Nevertheless, this did not stop Bonanno boss Joe Massino:

“Proudly inform[ing] the new soldiers of the family’s unique record among all of the nation’s borgatas as the only American clan that had never spawned a stool pigeon or cooperative government witness” (p640).

It is therefore somewhat ironic that, in 2004, it would be Massino himself who became the first ever actual boss of a New York family to turn cooperating witness. 

Mafia Decline 

Besides its inadequate treatment of early New York Mafia history (see above), the other main reason that Raab’s ‘Five Families’ cannot be regarded as the definitive history of the New York Mafia is that Raab himself evidently doesn’t believe the story is over. On the contrary, in his subtitle, he predicts, and, in his Afterword, reports a ‘resurgence’.

The reason Raab wrongly predicts a Mafia revival is that he fails to understand the ultimate reason behind mafia malaise, attributing it primarily to law enforcement success: 

“The combined federal and state campaigns were arguably the most successful anticrime expedition in American history. Over a span of two decades, twenty-four Mob families, once the best-organized and most affluent criminal associations in the nation, were virtually eliminated or seriously undermined” (p689). 

The real reason for Mafia decline is demographic. 

Italian-Americans no longer live in close-knit urban ghettos. Indeed, outside of Staten Island, few even live in New York City proper (i.e. the five boroughs). 

Italian Harlem long ago transformed into Spanish Harlem and, beyond the tourist-trap Italian restaurants and the annual parade, there is now little of Italy left in what little remains of Manhattan’s Little Italy.

Even Bensonhurst, perhaps the last neighborhood in New York to be strongly associated with Italian-Americans, was never really an urban ghetto, being neither deprived nor monoethnic, and is now majority nonwhite.[9]

Italian-Americans are now often middle-class, and the smart, ambitious ones aspire to be professionals and legitimate businessmen rather than criminals.

Indeed, I would argue that Italian-Americans no longer even still exist as a distinct demographic. They are now fully integrated into the American mainstream. 

Indeed, I suspect that, as with the infamous plastic paddy phenomenon with respect to Irish ancestry, few self-styled ‘Italian-Americans’ are even of 100% Italian ancestry. Thus, as far back as 1985, the New York Times reported: 

“8 percent of Americans of Italian descent born before 1920 had mixed ancestry, but 70 percent of them born after 1970 were the children of intermarriage… Among Americans of Italian descent under the age of 30, 72 percent of men and 64 percent of women married someone with no Italian background” (Collins, The Family: A new look at intermarriage in the US, New York Times, Feb 11 1985). 

Thus, almost of necessity, the five families long since relaxed their traditional requirement for inductees to be of full-Italian ancestry, since otherwise so few Americans would be eligible.

The Gambinos seem to have been the first to relax this requirement, inducting, and eventually promoting to acting-boss, John Gotti’s son, Gotti Junior, at the behest of his father, despite the (part-) Russian, or possibly Russian-Jewish, ancestry of his mother (p462). 

Recently, Raab reports, in an attempt to restore discipline, the earlier requirement of full-Italian ancestry has been reimposed.  

However, in the absence of a fresh infusion of zips fresh off the boat from Sicily (which Raab also anticipates: p703), this will only further dry up the supply of potential recruits, since so few native-born Americans now qualify as 100% Italian in ancestry.

Raab reports that the supposed Mafia revival has resulted from a reduction in FBI scrutiny, owing to: 

1) The perception that the Mafia threat is extinguished;

2) A change in FBI priorities post-9/11, with the FBI increasingly focusing on domestic terror at the expense of Mafia investigation.  

The lower public profile of the five families in recent years, Raab believes, only shows that Mafiosi have been slipping below the radar, quietly returning to their roots:  

“Gambling and loan-sharking—the Mafia’s symbiotic bread-and-butter staples—appear to be unstoppable” (p692).[10]

But, in the aftermath of the Supreme Court decision in Murphy v. National Collegiate Athletic Association, sports betting is now legal throughout the New York Metropolitan area (i.e. in New York, New Jersey and Connecticut), and indeed most of the US, and one of these two staples is now likely off the menu for the foreseeable future. 

Moreover, the big money is increasingly in narcotics, and, as Raab concedes, in contrast with their success in taking down the Mafia, the FBI’s “more costly half-century campaign against the narcotics scourge remains a Sisyphean failure” (p689). 

This has meant that non-Italian criminals have increasingly taken over the drug-trade, especially Latin-American cartels, who have taken over importation and wholesale, and black and Latino street gangs, who control most distribution at the street-level. 

Yet, in truth, the replacement of Italian-Americans in organized crime is only the latest in an ongoing process of ethnic succession—in New York, the Italians had themselves replaced Jewish-American crime gangs, who had dominated organized crime in New York in the early twentieth century into the prohibition era, and who had themselves replaced the Irish gangs and political bosses of the nineteenth century (see Ianni, Black Mafia: Ethnic Succession in Organized Crime). 

The future likely belongs to blacks and Hispanics. The belief that the latter are somehow incapable of operating with the same level of organization and sophistication as the Mafia is not only racist, but also likely wrong. 

Indeed, the fact that, prior to recent times, the Mafia in particular, not organized crime in general, was a major FBI priority may even have acted as a form of racially-based ‘affirmative action’ for black and Hispanic criminals. 

Raab may be right that the shift in FBI priorities post-9/11 has permitted a resurgence of organized crime. Indeed, in truth, organized crime, like the drug problem that fuels it, never really went away.

However, there is no reason to anticipate any resurgence will come with an Italian surname.

Endnotes

[1] Indeed, since Italian-American crime is in terminal decline – not just in New York – the time is also ripe for a definitive history of Italian-American organized crime in general. Of course, Raab’s book does not purport to be a history of Italian-American organized crime in general. It is a history only of the famed ‘five families’ operating in the New York metropolitan area, and hence only of Italian-American organized crime in this city. 
However, it does purport, in its subtitle, to be a history of ‘America’s Most Powerful Mafia Empires’. Probably the only Italian-American crime syndicate (or at least predominantly Italian-American crime syndicate) outside of New York which had a claim to qualifying as one of ‘America’s Most Powerful Mafia Empires’ during most of the twentieth century is the Chicago Outfit.
Of course, New York is a much bigger city than Chicago, especially today. However, for most of the twentieth century, until it was eclipsed by Los Angeles in the 1980s, Chicago was known as America’s ‘Second City’. Moreover, whereas in New York there were famously five families competing for power and influence, in Chicago, from the time of the St Valentine’s day massacre in 1929 until the late-twentieth century, the Chicago Outfit was said to enjoy almost unchallenged criminal hegemony.
Raab extends his gaze beyond the New York families to Mafia families based in other cities only during an extended, and probably misguided, discussion of the supposed role of the Mafia, in particular Florida boss, Santo Trafficante Jr., and New Orleans boss, Carlos ‘The Little Man’ Marcello, in the assassination of John F Kennedy.
However, even here, the Chicago Outfit receives short shrift, with infamous Chicago boss, Sam ‘Momo’ Giancana, receiving only passing mention by Raab, even though he features as prominently in JFK conspiracy theories as either Trafficante or Marcello.

[2] Of course, most mafiosi themselves likely believe this myth, just as many Freemasons probably themselves believe the exaggerated tales of their own venerability and spurious historical links to groups such as the Knights Templar. They are, in short, very much in thrall to their own mystique. This is among the reasons they are led to join the mafia in the first place. If claims of ancient origins were originally a myth cynically invented by mafiosi themselves, rather than presumed by outsiders, then modern mafiosi have certainly come to very much fall for their own propaganda.

[3] This is certainly the suggestion of Francis Ianni in Black Mafia: Ethnic Succession in Organized Crime, who argues that the American Mafia was already ceding power to black and Hispanic organized crime by at least the 1970s. This view seems to have some substance. 
Early to mid-twentieth century black Harlem crime boss Bumpy Johnson, for all his infamy, was said to be very much subservient to the Italian mafia families. Indeed, in the 1920s, a white criminal like Owney Madden was able to run the famous Cotton Club, initially with a whites-only door policy, in the heart of black Harlem.
However, by the 1970s, Harlem was mostly a no-go area for whites, Italian-Americans very much included. Therefore, even if the Mafia had the upper-hand in any negotiations, they nevertheless had to delegate to blacks any criminal activities in black areas of the city.
Thus, Nicky Barnes, the major heroin distributor in Harlem, was said to buy his heroin from mafia importers and wholesalers, especially ‘Crazy’ Joe Gallo, with whom he reportedly formed a relationship while they were in prison together. Similarly, contrary to his portrayal in the movie American Gangster, Frank Lucas also seems to have bought his heroin primarily through mafia wholesalers. However, he may also have had an indirect link to the Golden Triangle through his associate Ike Atkinson, a serving soldier in the Vietnam War.
However, both Lucas and Barnes necessarily had their own crew of black dealers to distribute the drugs on the street. The first black criminal in New York to supposedly operate entirely independently of the Mafia in New York was said to have been Frank Matthews, who disappeared under mysterious circumstances while on parole.

[4] Intriguingly, Professor of Criminal Justice, Howard Abadinsky, in his textbook on organized crime, links the higher public profile adopted by Capone and Gotti to the fact that both trace their ancestry, not to Sicily, but rather to Naples, where the local Camorra have long cultivated a higher public profile, and typically adopted a flashier style of dress and demeanor, than their Sicilian Mafia equivalents (Organized Crime, 4th Edition: p18).
Thus, historian John Dickie refers to a “longstanding difference between the public images of the two crime fraternities”: 

“The soberly dressed Sicilian Mafioso has traditionally had a much lower public profile than the Camorrista. Mafiosi are so used to infiltrating the state and the ruling elite that they prefer to blend into the background rather than strike poses of defiance against the authorities. The authorities, after all, were often on their side. Camorista, by contrast, often played to an audience” (Mafia Republic: p248). 

Abadinsky concurs that: 

“While even a capomafioso exuded an air of modesty in both dress and manner of speaking, the Camorrista was a flamboyant actor whose manner of walking and style of dress clearly marked him out as a member of the società” (Organized Crime, 4th Edition: p18). 

Abadinsky therefore tentatively observes: 

“In the United States the public image of Italian-American organized crime figures with Neapolitan heritage has tended towards Camorra, while their Sicilian counterparts have usually been more subdued. Al Capone, for example, and, in more recent years, John Gotti, are both of Neapolitan heritage” (Organized Crime, 4th Edition: p18). 

However, while true, I cannot see how this could be anything other than a coincidence, since both Capone and Gotti were born and spent their entire lives in the USA, Gotti being fully two generations removed from the old country, and neither seems to have had parents or other close relatives who were involved in crime and could somehow have passed on this cultural influence from Naples – unless perhaps Abadinsky is proposing some sort of innate, heritable, racial difference between Neapolitans and Sicilians, which seems even more unlikely.

[5] Gigante is not the only organized crime boss accused of malingering. Neapolitan Camorra boss, Raffaele Cutolo, alias ‘The Professor’, also stood accused of faking mental illness. However, whereas Gigante did so in order to avoid prison, Cutolo, apart from eighteen months living on the run from the authorities after escaping, spent virtually the entirety of his career as a crime boss locked up, being periodically shuttled between psychiatric hospitals and prisons. 

[6] Actually, not all crimes necessarily involve dishonesty – e.g. crimes of passion, some crimes of violence. However, any mafioso necessarily has to be dishonest, since otherwise he would admit his crimes to the authorities and hence not enjoy a long career. Indeed, the very code of omertà, though conceptualized as a code of honour, demands dishonesty, at least in one’s dealings with the authorities, since it forbids both informing to the authorities regarding the crimes of others, and admitting the existence of, or one’s membership of, the criminal fraternity itself. 

[7] Thus, if there was never outright war between families after the Castellammarese War, nevertheless bosses of some families did often attempt to sponsor ‘regime change’ in other families, by deposing other bosses, both in New York and beyond. For example, as discussed above, Bonanno family boss Joe Bonanno, acting in concert with Joe Magliocco, the then-boss of what was then known as the Profaci family, supposedly conspired to assassinate the bosses of the other three New York families, only to have their scheme betrayed by the assigned assassin, Joe Colombo, who was then himself rewarded for his betrayal by being appointed as boss of the family that thenceforth came to be named after him.
Similarly, Genovese boss Vincent ‘The Chin’ Gigante and Lucchese boss Tony ‘Ducks’ Corallo together attempted unsuccessfully to assassinate Gambino boss John Gotti as revenge for Gotti’s own unauthorised assassination of his predecessor, Paul Castellano, which they saw as a violation of Mafia rules, whereby the assassination of a boss was, at least in theory, only permissible with the prior consent and authority of the Commission. The attempted assassination, carried out by Vittorio ‘Little Vic’ Amuso and Anthony ‘Gaspipe’ Casso, themselves later to become boss and underboss of the Luccheses, resulted in the death of Gambino underboss Frank DeCicco in a car bomb, but not Gotti himself.

[8] In truth, Luciano seems to have invented neither the five families nor the commission. According to Mike Dash in his excellent The First Family, the Commission, under the earlier name ‘the Council’, actually existed long before Luciano came to prominence. 
As for the five families, if Luciano, or indeed Maranzano before him (as other versions relate), were to invent afresh the structure of the New York Mafia in a ‘top down’ process, they would surely have created a more unitary, centralized structure in order to maximize their own power and control as overall boss of bosses, rather than devolving power to the bosses of the individual families, who themselves issued orders to capos and soldiers.
As I have discussed previously, the power of the so-called National Commission was, to draw an analogy with international relations, largely intergovernmental rather than federal, let alone unitary or centralized, in its powers. Its power lay in its perceived ‘legitimacy’ among mafiosi. As Stalin is said to have contemptuously remarked of the Pope, the Commission commanded no divisions (nor any ‘crews’, capos or soldiers) of its own.
In reality, Maranzano and Luciano surely at most merely gave formal recognition to factions which long predated the Castellammarese War and its aftermath and whose independent power demanded recognition. Indeed, the Commission was even initially said to have included non-Italians such as Dutch Schultz, if only because the power of the ‘Bronx Beer Baron’ simply demanded his inclusion if the Commission were to be at all effective in regulating organized crime in New York.

[9] Raab, for his part, anticipates that Mafia rackets will increasingly, like Italian-Americans themselves, migrate to the suburbs: 

A strategic shift could be exploiting new territories. Although big cities continue to be glittering attractions, there are signs that the Mafia, following demographic trends, is deploying more vigorously in suburbs. There, the families might encounter police less prepared to resist them than federal and big-city investigators. ‘Organized crime goes where the money is, and there’s money and increasing opportunities in the suburbs,’ Howard Abadinsky, the historian, observes. Strong suburban fiefs have already been established by the New York, Chicago, and Detroit families” (p707). 

However, organized crime tends to thrive in poor, close-knit communities in deprived areas that lack trust in the police and authorities and are hence unwilling to turn to the latter for protection. If the Mafia attempts to make inroads into the suburbs, it will likely come up against assimilated, middle-class Americans only too willing to turn to the police for protection. In short, there is a reason why organized crime has largely been absent from middle-class suburbia.

[10] Although he wrote ‘Five Families’ several years before the legalization of sports betting in most of America, New York City included, Raab seems to anticipate that legalization will have little if any effect on Mafia revenue from illegal sports books, writing: 

Sensible gamblers will always prefer wagering with the Mob rather than with state-authorized Off-Track Betting parlors and lotteries. Bets on baseball, football, and basketball games placed with a bookie have a 50 percent chance of winning, without the penalty of being taxed, while the typical state lottery is considered a pipe dream because the chance of winning is infinitesimal” (p694). 

It is, of course, true that lotteries, almost by definition, involve long odds and little realistic chance of winning. However, the same was also true of the illegal numbers rackets that were a highly lucrative source of income for predominantly black ‘policy kings’ (and queens) in early twentieth century America. Indeed, this racket was so lucrative that eventually major white organized crime figures like Dutch Schultz in New York and Sam Giancana in Chicago sought to take it over.
Yet, if winning a state lottery is indeed a ‘pipe dream’, the same is not true of legalized sports betting. On the contrary, here, the odds are as good as in illegal Mafia-controlled sports betting, and, given the legal regulation, prospective gamblers will probably be more confident that they are not likely to be ripped off by the bookies.
Thus, in most jurisdictions where off-track sports betting is legal and subject to few legal restrictions, there is little if any market for illegal sports betting. Hence the legalization of sports betting in most of America will likely mean that sports betting is no longer controlled by organized crime, let alone the Mafia, just as the end of Prohibition in 1933 similarly led to the decline of the market for moonshine and bootleg alcohol.

In Defence of Physiognomy

Edward Dutton, How to Judge People by What they Look Like (Wrocław: Thomas Edward Press, 2018) 

‘Never judge a book by its cover’ – or so a famous proverb advises. 

However, given that Edward Dutton’s ‘How to Judge People by What they Look Like’, represents, from its provocative title onward, a spirited polemic against this received wisdom, one is tempted, in the name of irony, to review his book entirely on the basis of its cover. 

I will resist this temptation. However, two initial points are apparent, if not from the book’s cover alone, then at least from its external appearance. These are: 

1) It is rather cheaply produced and apparently self-published; and

2) It is very short – a pamphlet rather than a book.[1]

Both these facts are probably excusable by reference to the controversial and politically-incorrect nature of the book’s title, theme and content.

Thus, on the one hand, the notion that we can, with some degree of accuracy, judge people by appearances alone is a very politically-incorrect idea and hence one that many publishers would be reluctant to associate themselves with or put their name to.

On the other hand, the fact that the topic is so controversial may also explain why the book is so short. After all, relatively little research has been conducted on this topic for precisely this reason.

Moreover, even such research as has been conducted is often difficult to track down. 

After all, physiognomy, the field of research which Dutton purports to review, is no longer a recognized science. On the contrary, most people today dismiss it as a discredited pseudoscience.

Therefore, there is no ‘International Journal of Physiognomy’ available at the click of a mouse on ScienceDirect. 

Neither are there any Departments of Physiognomy or Professors of Physiognomy at major universities, nor any recent undergraduate- or graduate-level textbook on physiognomy collating all important research on the subject. Indeed, the closest thing we have to such a textbook is Dutton’s own thin, meagre pamphlet. 

Therefore, not only has relatively little research been conducted in this area, at least in recent years, but also such research as has been conducted is spread across different fields, different journals and different researchers, and hence not always easy to track down. 

Moreover, such research rarely actually refers to itself as ‘physiognomy’, in part precisely because physiognomy is widely regarded as a pseudoscience and hence something with which researchers, even those directly researching correlations between morphology and behaviour, are reluctant to associate themselves.[2]

Therefore, conducting a key word search for the term ‘physiognomy’ in one or more of the many available databases of scientific papers would not assist the reader much, if at all, in tracking down relevant research.[3]

It is therefore not surprising that Dutton’s book is quite short. 

For this same reason, it is perhaps also excusable that Dutton has evidently failed to track down some interesting studies relevant to his theme. 

For example, a couple of interesting studies not cited by Dutton purported to uncover an association between behavioural inhibition and iris pigmentation in young children (Rosenberg & Kagan 1987; Rosenberg & Kagan 1989). 

Another interesting study not mentioned by Dutton presents data apparently showing that subjects are able to distinguish criminals from non-criminals at better than chance levels merely from looking at photographs of their faces (Valla, Ceci & Williams 2011).[4]

Such omissions are inevitable and excusable. More problematically, however, Dutton also seems to have omitted at least one entire area of research relevant to his subject-matter – namely, research on so-called minor physical anomalies, or MPAs.

These are certain physiological traits, interpreted as minor abnormalities, probably reflecting developmental instability and mutational load, which have been found in several studies to be associated with various psychiatric and developmental conditions, as well as being a correlate of criminal behaviour (see below).

Defining the Field 

Yet Dutton not only misses out on several studies relevant to the subject-matter of his book; he is also not entirely consistent in identifying just what the precise subject-matter of his book actually is. 

It is true that, at many points in his book, he talks about physiognomy. 

This term is usually defined as the science (or, according to many people, the pseudoscience) of using a person’s morphology in order to determine their character, personality and likely behaviour. 

However, the title of Dutton’s book, ‘How to Judge People by What They Look Like’, is potentially much broader. 

After all, what people look like includes not just their morphology, but also, for example, how they dress and what clothes they wear.

For example, we might assess a person’s job from their uniform, or, more generally, their socioeconomic status and income level from the style and quality of their clothing, or the designer labels and brand names adorning it. 

More specifically, we might even determine their gang allegiance from the colour of their bandana, and their sexuality and fetishes from the colour and positioning of their handkerchief. 

We also make assessments of character from clothing style. For example, a person who is sloppily dressed and is hence perceived not to take care over his or her appearance (e.g. whose shirt is unironed or unclean) might be interpreted as lacking in self-worth and likely to produce similarly sloppy work in whatever job s/he is employed at. On the other hand, a person always kitted out in the latest designer fashions might be thought shallow and materialistic. 

In addition, certain styles of dress are associated with specific youth subcultures, which are often connected, not only to taste in music, but also with lifestyle (e.g. criminality, drug-use, political views).[5]

Dutton does not discuss the significance of clothing choice in assessments of character. However, consistent with this broader interpretation of his book’s title, Dutton does indeed sometimes venture beyond physiognomy in the strict sense. 

For example, he discusses tattoos (p46-8) and beards (p60-1). 

I suppose the decision to get tattooed or grow a beard reflects both genetic predispositions and environmental influence, just as all aspects of phenotype, including morphology, reflect the interaction between genes and environment. 

However, this is also true of clothing choice, which, as I have already mentioned, Dutton does not discuss.  

On the other hand, both tattoos and, given that they take time to grow, even beards are more permanent than whatever clothes we happen to be wearing at any given time. 

However, Dutton also discusses the significance of what he terms a “blank look” or “glassy eyes” (p57-9). But this is a mere facial expression, and hence even more transitory than clothing. 

Yet Dutton omits discussion of other facial expressions which, unlike his wholly anecdotal discussion of “glassy eyes”, have been researched by ethologists at least since Charles Darwin’s seminal The Expression of the Emotions in Man and Animals was published in 1872. 

Thus, Paul Ekman famously demonstrated that the meanings associated with at least some facial expressions are cross-culturally universal (e.g. smiling being associated with happiness). 

Indeed, some human facial expressions even appear to be homologues of behaviour patterns among non-human primates. For example, it has been suggested that the human smile is homologous with an appeasement gesture, namely the baring of clenched teeth (aka a ‘fear grin’), among chimpanzees. 

Of particular relevance to the question posed in Dutton’s book title, namely ‘How to Judge People by What They Look Like’, it is suggested some facial expressions lie partly outside of conscious control – e.g. blushing when embarrassed, going pale when shocked or fearful.  

Indeed, even a fake smile is said to be distinguishable from a Duchenne smile. 

This then explains the importance of reading facial expressions when playing poker or interrogating suspects, as people often inadvertently give away their true feelings through their facial expressions, behaviour and other mannerisms (e.g. so-called microexpressions). 

Somatotypes and Physique 

Dutton begins his book with a remarkable attempt to resurrect William Sheldon’s theory that certain types of physiques (or, as Sheldon called them, somatotypes) are associated with particular types of personality (or as Sheldon called them, constitutions). 

Although the three dimensions by which Sheldon classified physiques – endomorphy, ectomorphy and mesomorphy – have proven useful as dimensions for classifying body-type, Sheldon’s attempt to equate these ideal types with personality is now widely dismissed as pseudoscience. 

Dutton, however, argues that physique is indeed associated with character, and moreover provides what was conspicuously lacking in Sheldon’s own exposition – namely, compelling theoretical reasons for the postulated associations. 

Yet, interestingly, the associations suggested by Dutton do indeed to some extent mirror those first posited by William Sheldon over half a century previously.

Whereas, elsewhere, Dutton draws on previously published research, here, Dutton’s reasoning is, to my knowledge, largely original to himself, though, as I show below, psychometric studies do support the existence of at least some of the associations he postulates. 

This part of Dutton’s book represents, in my view, the most important and convincing original contribution in the book. 

Endomorphy/Obesity, Self-Control and Conscientiousness

First, he discusses what Sheldon called endomorphy – namely, a body-type that can roughly be equated with what we would today call fatness or obesity. 

Dutton points out that, at least in contemporary Western societies, where there is a superabundance of food, and starvation is all but unknown even among the relatively less well-off, obesity tends to correlate with personality. 

In short, people who lack self-control and willpower will likely also lack the self-control and willpower to diet effectively. 

Endomorphy (i.e. obesity) is therefore a reliable correlate of the personality factor known to psychometricians as conscientiousness (p31-2).  

Although Dutton himself cites no data or published studies in support of this conclusion, nevertheless several published studies confirm an association between BMI and conscientiousness (Bagenjuk et al 2019; Jokela et al 2012; Sutin et al 2011). 

Obesity is also, Dutton claims, inversely correlated with intelligence. 

This is, first, because IQ is, according to Dutton, correlated with time-preference – i.e. a person’s willingness to defer gratification by making sacrifices in the short-term in return for a greater long-term pay-off. 

Therefore, low-IQ people, Dutton claims: 

Are less able to forego the immediate pleasure of ice cream for the future positive of not being overweight and diabetic” (p31). 

However, far from being associated with a short time-preference, some evidence, not discussed by Dutton, suggests that intelligence is actually inversely correlated with conscientiousness, such that more intelligent people are actually, on average, less conscientious (e.g. Rammstedt et al 2016; cf. Murray et al 2014). 

This would suggest that low IQ people might, all else being equal, actually be more successful at dieting than their high IQ counterparts. 

However, according to Dutton, there is a second reason that low-IQ people are more likely to be fat, namely: 

They are likely to understand less about healthy eating and simply possess less knowledge of what constitutes healthy food or a reasonable portion” (p31). 

This may be true. 

However, while there are some borderline cases (e.g. foods misleadingly marketed by advertisers as healthy), I suspect that virtually everyone knows that, say, eating lots of cake is unhealthy. Yet resisting the temptation to eat another slice is often easier said than done. 

I therefore suspect conscientiousness is a better predictor of weight than is intelligence. 

Interestingly, a few studies have investigated the association between IQ and the prevalence of obesity. However, curiously, most seem to be premised on the notion that, rather than low intelligence causing obesity, obesity somehow contributes to cognitive decline, especially in children (e.g. Martin et al 2015) and the elderly (e.g. Elias et al 2012). 

In fact, however, longitudinal studies confirm that, as contended by Dutton, it is low IQ that causes obesity rather than the other way around (Kanazawa 2014). 

At any rate, people lacking in intelligence and self-control also likely lack the intelligence and self-discipline to excel in school and gain promotions into high-income jobs, since both earnings and socioeconomic status correlate with both intelligence and conscientiousness.[6]

One can also, then, make better than chance assessments of a person’s socioeconomic status and income from their physique. 

In other words, whereas in the past (and perhaps still in the developing world) the poor were more likely to starve or suffer from malnutrition and only the rich could afford to be fat, in the affluent west today it is the relatively less well-off who are, if anything, more likely to suffer from obesity and diseases of affluence such as diabetes and heart disease. 

This, then, all rather confirms the contemporary stereotype of the fat, lazy slob. 

However, Dutton also provides a let-off clause for offended fatties. Obesity is associated, not only with conscientiousness, but also with the factor of personality known as extraversion. This refers to the tendency to be outgoing, friendly and talkative, traits that are generally viewed positively. 

Several studies, again not cited by Dutton, do indeed suggest an association between extraversion and BMI (Bagenjuk et al 2019; Sutin et al 2011). Dutton, for his part, explains it this way: 

Extraverts simply enjoy everything positive more, and this includes tasty (and thus unhealthy) food” (p32). 

Dutton therefore provides theoretical support to the familiar stereotype of, not only the fat, lazy slob, but also the jolly and gregarious fat man, and the ‘bubbly’ fat woman.[7]

Mesomorphy/Muscularity and Testosterone

Mesomorphs were another of Sheldon’s supposed body-types. Mesomorphy can roughly be equated with muscularity. 

Here, Dutton concludes that: 

Sheldon’s theory… actually fits quite well with what we know about testosterone” (p33). 

Thus, mesomorphy is associated with muscularity, and muscularity with testosterone. 

Yet testosterone, as well as masculinizing the body, also masculinizes brain and behaviour. 

This is why anabolic steroids, not only increase muscularity, but are also said to be associated with roid rage.[8]

Testosterone, at least during development, may also be associated, not only with muscularity, but also with certain aspects of facial morphology, such as a wide and well-defined jawline, prominent brow ridges, deep-set eyes and facial width.  

I therefore wonder if this might go some way towards explaining the finding, not mentioned by Dutton (but clearly relevant to his subject-matter), that observers are apparently able to identify convicted criminals at better than chance levels from a facial photograph alone (Valla, Ceci & Williams 2011).[9]

Testosterone and Autism 

Further exploring the effects of testosterone on both psychology and morphology, Dutton also proposes: 

We would also expect the more masculine-looking person to have higher levels of autism traits” (p34). 

This idea seems to be based on Simon Baron-Cohen’s extreme male brain theory of autism. 

However, the relationship between, on the one hand, levels of androgens such as testosterone and, on the other, degree of masculinization in respect of a given sexually-dimorphic trait may be neither one-dimensional nor linear

Thus, interestingly, Kingsley Browne in his excellent Biology at Work: Rethinking Sexual Equality (which I have reviewed here) reports: 

The relationship between spatial ability and [circulating] testosterone levels is described by an inverted U-shaped curve… Spatial ability is lowest in those with the very lowest and the very highest testosterone levels, with the optimal testosterone level lying in the lower end of the normal male range. Thus, males with testosterone in the low-normal range have the highest spatial ability” (Biology at Work: p115; Gouchie & Kimura 1991). 

Similarly, leading intelligence researcher Arthur Jensen reports, in The g Factor: The Science of Mental Ability, that:

Within each sex there is a nonlinear (inverted-U) relationship between an individual’s position on the estrogen/testosterone continuum and the individual’s level of spatial ability, with the optimal level of testosterone above the female mean and below the male mean. Generally, females with markedly above-average testosterone levels (for females) and males with below-average levels of testosterone (for males) tend to have higher levels of spatial ability, relative to the average spatial ability for their own sex” (The g Factor: p534).

In contrast, however, Dutton claims: 

There is evidence that testosterone level in healthy males is positively associated with spatial ability” (p36). 

However, the only study he cites in support of this assertion was, according to its methodology section and indeed its very title, conducted among “older males”, aged between 60 and 75 (Janowsky et al 1994). 

Therefore, since testosterone levels are known to decline with age, this finding is not necessarily inconsistent with the relationship between testosterone and spatial ability described by Browne (see Moffat & Hampson 1996). 

This, of course, accords with the anecdotal observation that math nerds and autistic males are rarely athletic, square-jawed ‘alpha male’-types.[10]

Testosterone and Baldness 

Another trait associated with testosterone levels, according to Dutton, is male pattern baldness. Thus, Dutton contends: 

Baldness is yet another reflection of high testosterone… [B]aldness in males, known as androgenic alopecia, is positively associated with levels of testosterone” (p55). 

As evidence, he cites both a review (Batrinos 2014) and some indirect anecdotal evidence: 

It is widely known among doctors – I base this on my own discussions with doctors – that males who come to them in their 60s complaining of impotence tend to have full heads of hair or only very limited hair loss” (p55).[11]

If male pattern baldness is indeed associated with testosterone levels then this is somewhat surprising, because our perceptions regarding men suffering from male pattern baldness seem to be that they are, if anything, less masculine than other males. 

Thus, Nancy Etcoff, in Survival of the Prettiest (which I have reviewed here), reports that one study found that: 

Both sexes assumed that balding men were weaker and found them less attractive” (Survival of the Prettiest: p121; Cash 1990).[12]

Yet, if the main message of Dutton’s book is that individual differences in morphology and appearance do indeed predict individual differences in behaviour, psychology and personality, then a second implicit theme seems also to be that our intuitions and stereotypes regarding the association between appearance and behaviour are often correct.  

True, it is likely that few people notice, say, digit ratios, or make judgements about people based on them either consciously or unconsciously. However, elsewhere, Dutton cites studies showing that subjects are able to estimate the IQ of male students at better than chance levels simply by viewing a photograph of their faces (Kleisner et al 2014; discussed at p50), and to distinguish homosexual from heterosexual men at better than chance levels from a facial photograph alone (Kosinski & Wang 2017; discussed at p66). 

Yet, according to Etcoff and Cash, perceptions regarding the personalities of balding men are almost the opposite of what would be expected if male pattern balding were indeed a reflection of high testosterone levels, as suggested by Dutton. 

In fact, however, although a certain level of testosterone is indeed a necessary condition for male pattern hair loss (this is why neither women nor castrated eunuchs experience the condition, though their hair does thin with age), this seems to be a threshold effect, and, among non-castrated males with testosterone levels within the normal range, levels of circulating testosterone do not seem to significantly predict either the occurrence, or severity, of male pattern baldness. 

Thus, Healthline reports: 

It’s not the amount of testosterone or DHT that causes baldness; it’s the sensitivity of your hair follicles. That sensitivity is determined by genetics. The AR gene makes the receptor on hair follicles that interact with testosterone and DHT. If your receptors are particularly sensitive, they are more easily triggered by even small amounts of DHT, and hair loss occurs more easily as a result. 

In other words, male pattern baldness is yet another trait that is indeed related to testosterone, but does not evince a simple linear relationship. 

2D:4D Ratio

Another presumed correlate of prenatal androgens is 2D:4D ratio (aka digit ratio). 

Over the last two decades, a huge body of research has reported correlations between 2D:4D ratio and a variety of psychiatric conditions and behavioural propensities, including autism (Manning et al 2001), ADHD (Martel et al 2008; Buru 2020; Işık 2020), psychopathy (Blanchard & Lyons 2010), aggressive behaviours (Bailey & Hurd 2005; Benderlioglu & Nelson 2005), sports and athletic performance (Manning & Taylor 2001; Hönekopp & Urban 2010; Griffin et al 2012; Keshavarz et al 2017), criminal behaviour (Ellis & Hoskin 2015; Hoskin & Ellis 2014) and homosexuality (Williams et al 2000; Lippa 2003; Kangassalo et al 2011; Li et al 2016; Xu & Zheng 2016). 
 
Unfortunately, and slightly embarrassingly, Dutton apparently misunderstands what 2D:4D ratio actually measures. Thus, he writes: 

If the profile of someone’s fingers is smoother, more like a shovel, then it implies high testosterone. If, by contrast, the little finger is significantly smaller than the middle finger, which is highly prevalent among women, then it implies lower testosterone exposure” (p69). 

Actually, however, both the little finger and middle finger are irrelevant to 2D:4D ratio.

Indeed, for virtually everyone, “the little finger is significantly smaller than the middle finger”. This is, of course, why the former is called “the little finger”.

Actually, 2D:4D ratio concerns the ratio between the index finger and the ring finger – i.e. the two fingers on either side of the middle finger. 

These fingers are, of course, the second and fourth digit, respectively, if you begin counting from your thumb outwards, hence the name ‘2D:4D ratio’. 

Since Dutton has evidently misnumbered his digits, I can only conclude that he began counting at the correct end, but missed out his thumb. 

At any rate, the evidence for any association between digit ratios and measures of behaviour and psychology is, at best, mixed. 

Skimming the literature on the subject, one finds many conflicting findings – for example, sometimes significant effects are found only for one sex, while other studies find the same correlations limited to the other sex (e.g. Bailey & Hurd 2005; Benderlioglu & Nelson 2005; see also Hilgard et al 2019), and also many failures to replicate earlier reported associations (e.g. Voracek et al 2011; Fossen et al 2022; Kyselicová et al 2021). 

Likewise, meta-analyses of published studies have generally found, at best, only small and inconsistent associations (e.g. Voracek et al 2011; Pratt et al 2016). Thus, 2D:4D ratio has been a major victim of the recent so-called replication crisis in psychology. 

Indeed, it is not entirely clear that 2D:4D ratio represents a useful measure of prenatal androgens in the first place (Hollier et al 2015), and even the universality of the sex difference that originally led researchers to posit such a link has been called into question (Apicella 2015; Lolli et al 2017).  

In short, the usefulness of digit ratio as a measure of exposure to prenatal androgens, let alone an important correlate of behaviour, psychology, personality or athletic performance, is questionable. 

Testosterone and Height 

The examples of male pattern baldness and spatial ability demonstrate that the effect of testosterone on some sexually-dimorphic traits is not necessarily always linear. Instead, it can be quite complex. 

Therefore, just because men are, on average, higher than women for a given trait, and this difference is ultimately a consequence of androgens such as testosterone, it does not necessarily follow that men with relatively higher levels of testosterone are higher for this trait than men with relatively lower levels of testosterone. 

Indeed, Dutton himself provides another example of such a trait – namely height.

Thus, although men, in general, are taller than women, nevertheless, according to Dutton: 

“Men who are high in testosterone… tend to be of shorter stature than those who are low in it. High levels of testosterone at a relatively early age have been shown to reduce stature” (p34).[13]

In evolutionary terms, Dutton explains this in terms of the controversial Life History Theory of Philippe Rushton, of whom Dutton seems to be, with some reservations, something of a disciple (p22-4). 

If true, this might explain why eunuchs who were castrated before entering puberty are said to grow taller, on average, than other men. 

Further corroboration is provided by the fact that, in the Netherlands, whose population is among the tallest in the world, excessively tall boys are sometimes treated with testosterone in order to prevent them growing any taller (de Waal et al 1995).[14]

This is said to occur because additional testosterone speeds up puberty: it produces a growth spurt, but also brings that growth spurt to an earlier end, whereupon height stabilizes and we cease to grow any taller. This is discussed in Carole Hooven’s book Testosterone: The Story of the Hormone that Dominates and Divides Us.

‘Short Man Syndrome’?

Interestingly, although Dutton does not explore the idea, the association between testosterone levels and height among males may even explain the supposed phenomenon of short man syndrome (also referred to, by reference to the supposed diminutive stature of the French emperor Napoleon, as a Napoleon complex), whereby short men are said to be especially aggressive and domineering. 

This is something that is usually attributed to a psychological need among shorter men to compensate for their diminutive stature. However, if Dutton is right, then the supposed aggressive predilections of short men might simply reflect differences between shorter and taller men in testosterone levels during adolescence. 

Actually, however, so-called short man syndrome is likely a myth – and yet another way society in general demeans and belittles short men. Certainly, it is very much a folk-psychiatric diagnosis with no real evidential basis beyond the merely anecdotal.  

Indeed, far from short men being, on average, more aggressive and domineering than taller men, one study commissioned by the BBC actually found that short men were less likely to respond aggressively when provoked.

Given that tall men have an advantage in combat, it would actually make sense for relatively shorter men to avoid potentially violent confrontations with other men where possible, since, all else being equal, they would be more likely to come off worse in any such altercation.  

Consistent with this, some studies have found a link between increased stature and anti-social personality disorder, which is associated with aggressive behaviours (e.g. Ishikawa et al 2001; Salas-Wright & Vaughn 2016), while another study found a positive association between height and dominance, especially among males (Malamed 1992).[15]

Height and Intelligence 

Height is also, Dutton reports, correlated with intelligence, with taller people having, on average, slightly higher IQs than shorter people.  

The association between height and IQ is, like most if not all of those discussed by Dutton in this book, modest in magnitude or effect size.[16]

However, unlike many other associations reported by Dutton, many of which are based on just a single published study, or sometimes on purely theoretical arguments alone, the association between height and intelligence is robust and well-established.[17] Indeed, there is even a Wikipedia page on the topic.

Dutton’s explanation for this phenomenon is that intelligence and height “have been sexually selected for as a kind of bundle” (p46). 

“Females have sexually selected for intelligent men (because intelligence predicts social status and they have been specifically selected for this) but they have also selected for taller men, realising that taller men will be better able to protect them. This predilection for tall but intelligent men has led to the two characteristics being associated with one another” (p46). 

Actually, as I see it, this explanation would only work, or at least would work much better, if both men and women had a preference for partners who are both tall and intelligent.

This is indeed Arthur Jensen’s explanation for the association between height and IQ:

“Probably represents a simple genetic correlation resulting from cross-assortative mating for the two traits. Both height and ‘intelligence’ are highly valued in western culture. There is also evidence for cross-assortative mating for height and IQ. There is some trade-off between them in mate selection. When short and tall women are matched on IQ, educational level and social class of origin, for example, it is found that taller women tend to marry men of higher socioeconomic status… than do shorter women” (The G Factor: The Science of Mental Ability: p146). 

An alternative explanation might be that both height and intelligence reflect developmental stability and a lack of deleterious mutations. On this view, both height and intelligence might represent indices of genetic quality and lack of mutational load. 

However, this alternative explanation is inconsistent with the finding that there is no ‘within-family’ correlation between height and intelligence. In other words, when one looks at, say, full-siblings from the same family, there is no tendency for the taller sibling to have a higher IQ (Mackintosh, IQ and Human Intelligence: p6). 

This suggests that the genes that cause greater height are different from those that cause greater intelligence, but that they have come to be found in the same individuals through assortative mating, as suggested by Jensen and Dutton.[18]

Height and Earnings 

Although not discussed by Dutton, there is also a correlation between height and earnings. Thus, economist Steven Landsburg reports that: 

“In general, an extra inch of height adds roughly an extra $1,000 a year in wages, after controlling for education and experience. That makes height as important as race or gender as a determinant of wages” (More Sex is Safer Sex: p53). 

This correlation could be mediated by the association between height and intelligence, since intelligence is known to be correlated with earnings (Case & Paxson 2009). 

However, one interesting study found that it was actually height during adolescence that accounted for the association, and that, once this was controlled for, adult height had little or no effect on earnings (Persico, Postlewaite & Silverman 2004). 

“Controlling for teen height essentially eliminates the effect of adult height on wages for white males. The teen height premium is not explained by differences in resources or endowments” (Persico, Postlewaite & Silverman 2004). 

Thus, Landsburg reports: 

“Tall men who were short in high school earn like short men, while short men who were tall (for their age) in high school earn like tall men” (More Sex is Safer Sex: p54). 

This suggests that it is height during a key formative period (a ‘critical period’) in adolescence that increases self-confidence, a self-confidence that continues into adulthood and ultimately contributes to the higher adult earnings of men who were relatively taller as adolescents. 

On the other hand, however, Case and Paxson report that, in addition to being associated with adult height, intelligence is also associated with an earlier growth spurt. This leads them to conclude that adolescent height might be a better marker for cognitive ability than adult height, thereby providing an alternative explanation for Persico et al’s finding (Case & Paxson 2009). 

Head Size and Intelligence 

Dutton also discusses the finding that there is an association between intelligence and head-size. This is indeed true, and is a topic I have written about elsewhere.

However, Dutton’s illustration of this phenomenon seems to me rather unhelpful. Thus, he writes: 

“Intelligent people have big heads in comparison to the size of their bodies. This association is obvious at the extremes. People who suffer from a variety of conditions that reduce their intelligence, including fetal alcohol syndrome or the zika virus, have noticeably very small heads” (p56). 

However, to me, this seems to be the wrong way to think about it. 

While it is indeed true that microcephaly (i.e. a smaller than usual head size) is usually associated with lower than normal intelligence levels, the reverse is not true. Thus, although head-size is indeed correlated with IQ, people suffering from macrocephaly (i.e. abnormally large heads) do not generally have exceptionally high IQs. On the contrary, macrocephaly is often associated with impaired cognitive function, probably because, like microcephaly, it reflects a malfunction in brain development.

Neither do people afflicted with forms of disproportionate dwarfism, such as achondroplasia, have higher than average IQs even though their heads are larger relative to their body-size than are those of ordinary-sized people.  

In short, rather than being, as Dutton puts it “obvious at the extremes”, the association between head-size and intelligence is obvious at only one of the extremes and not at all apparent at the other extreme. 

In general, species, individuals and races with larger brains have higher intelligence because brain tissue is highly metabolically expensive and therefore unlikely to evolve without some compensating advantage (i.e. higher intelligence). 

However, conditions such as achondroplasia and macrocephaly did not evolve through positive selection. On the contrary, they are pathological and maladaptive. Therefore, in these cases, the additional brain tissue may indeed be wasted and hence confer no cognitive advantage. 

Mate Choice 

In evolutionary psychology, there is a large literature on human mate-choice and beauty/attractiveness standards. Much of this depends on the assumption that the physical characteristics favoured as mate-choice criteria represent fitness-indicators, or otherwise correlate with traits desirable in a mate. 

For example, a low waist-to-hip ratio (or ‘WHR’) is said to be perceived as attractive among females because it is supposedly a correlate of both health and fertility. Similarly, low levels of fluctuating asymmetry are thought to be perceived as attractive by members of the opposite sex in both humans and other animals, supposedly because they are indicative of developmental stability and hence indirectly of genetic quality.

Dutton reviews some of this literature. However, an introductory textbook on evolutionary psychology (e.g. David Buss’s Evolutionary Psychology: The New Science of the Mind), or on the evolutionary psychology of mating behaviour in particular (e.g. David Buss’s The Evolution of Desire), would provide a more comprehensive review. 

Also, some of Dutton’s speculations are rather unconvincing. He claims: 

“Hipsters with their Old Testament beards are showcasing their genetic quality… Beards are a clear advertisement of male health and status. They are a breeding ground for parasites” (p61). 

However, if this is so, then it merely raises the question of why beards have come back into fashion only very recently. Indeed, until the last few years, beards had not, to my knowledge, been in fashion for men in the West since the 1970s.[19]

Moreover, it is not at all clear that beards do increase attractiveness (e.g. Dixson & Vasey 2012). Rather, it seems that beards increase perceptions of male age, dominance, social status and aggressiveness, but not their attractiveness.[20]

This suggests that beards are more likely to have evolved through intrasexual selection (i.e. dominance competition or fighting between males) than by intersexual selection (i.e. female choice). 

This is actually consistent with a recently-emerging consensus among evolutionary psychologists that human male physiology (and behaviour) has been shaped more by intrasexual selection than by intersexual selection (Puts 2010; Kordsmeyer et al 2018). 

Consistent with this, Dutton notes: 

“[Beards] have been found to make men look more aggressive, of higher status, and older… in a context in which females tend to be attracted to slightly older men, with age tending to be associated with status in men” (p61). 

However, this raises the question as to why, today, most men prefer to look younger.[21]

Are Feminine Faces More Prone to Infidelity?

Another interesting idea discussed by Dutton is that mate-choice criteria may vary depending on the sort of relationship sought. For example, he suggests: 

“A highly feminine face is attractive, in particular in terms of a short term relationship… [where] a healthy and fertile partner is all that is needed” (p43). 

In contrast, however, he concludes that for a long-term relationship a less feminine face may be desirable, since he contends “being extremely feminine in terms of secondary sexual characteristics is associated with an r-strategy” and hence supposedly with a greater risk of infidelity (p43).[22]

However, Dutton presents no evidence in favour of the claim that less feminine women are less prone to sexual infidelity. 

Actually, on theoretical grounds, I would contend that the precise opposite relationship is more likely to exist. 

After all, less feminine and more masculine females, having been subjected to higher levels of androgens, would presumably also have a more male-typical sexuality, including a high sex drive and a preference for promiscuous sex with multiple partners.

Indeed, there is data in support of this conclusion from studies of women afflicted with a rare condition, congenital adrenal hyperplasia, which results in exposure to abnormally high levels of masculinizing androgens such as testosterone in the womb, and sometimes in later life. As a consequence, such women exhibit a more male-typical psychology and sexuality than other females. 

Thus, Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here) reports:  

“There is evidence that certain aspects of adult male sexuality result from the effects of prenatal and postpubertal androgens: before the discovery of cortisone therapy women with adrenogenital syndrome [AGS] were exposed to abnormally high levels of androgens throughout their lives, and clinical data on late-treated AGS women indicate clear-cut tendencies toward a male pattern of sexuality” (The Evolution of Human Sexuality: p290). 

Thus, citing the work of, among others, the much-demonized John Money, Symons reports that women suffering from adrenogenital syndrome:

“Tended to exhibit clitoral hypersensitivity and an autonomous, initiatory, appetitive sexuality which investigators have characterized as evidencing a high sex drive or libido” (The Evolution of Human Sexuality: p290). 

This suggests that females with a relatively more masculine appearance, having been subject, on average, to higher levels of masculinizing androgens, will also evidence a more male-typical sexuality, including greater promiscuity and hence presumably a greater proclivity towards infidelity, rather than a lesser tendency as theorized by Dutton. 

Good Looks, Politics and Religion 

Dutton also cites studies showing that conservative politicians, and voters, are more attractive than liberals (Peterson & Palmer 2017; Berggren et al 2017). 

By way of explanation for these findings, Dutton speculates that in ancestral environments: 

“Populations… so low in ethnocentrism as to espouse Multiculturalism and reject religion would simply have died out… Therefore… the espousal of leftist dogmas would partly reflect mutant genes, just as the espousal of atheism does. This elevated mutational load… would be reflected in their bodies as well as their brains” (p76). 

However, this seems unlikely, since atheism and possibly socially liberal political views as well have usually been associated with higher intelligence, which is probably a marker for good genes.[23]

Moreover, although mutations might result in suboptimal levels of both ethnocentrism and religiosity, such suboptimal levels would presumably manifest not only as deficient but also as excessive levels of religiosity and ethnocentrism.

This would suggest that religious fundamentalists and extreme xenophobes and racial supremacists would be just as mutated, and hence just as ugly, as atheists and extreme leftists supposedly are. 

Yet Dutton instead insists that religious fundamentalists, especially Mormons, tend to be highly attractive (Dutton et al 2017). However, he and his co-authors cite little evidence for this claim beyond the merely anecdotal.[24]

The authors of the original paper, Dutton reports, themselves suggested an alternative explanation for the greater attractiveness of conservative politicians, namely: 

“Beautiful people earn more, which makes them less inclined to support redistribution” (p75). 

This, to me, seems both simpler and more plausible. However, in response, Dutton observes: 

“There is far more to being… right-wing… than not supporting redistribution” (p75). 

Here, he is right. The correlation between socioeconomic status/income and political ideology and voting is actually quite modest (see What’s Your Bias). 

However, earnings do still correlate with voting patterns, and this correlation is perhaps enough to explain the modest association between physical attractiveness and political opinions. 

Nevertheless, other factors may also play a role. For example, a couple of studies have found, among men, an association between grip strength and support for policies that benefit oneself economically (Peterson et al 2013; Peterson & Laustsen 2018). 

Grip strength is associated with muscularity, which is generally considered attractive in males.

Since leading politicians mostly come from middle-class, well-to-do, if not elite, backgrounds, this would suggest that conservative male politicians are likely to be, on average, more attractive than liberal or leftist politicians.

Indeed, Noah Carl has even purported to observe, and presents evidence suggesting, a general, and widening, masculinity gap between the political left and right, and some studies have found evidence that more physically formidable males have more conservative and less egalitarian political views (Price et al 2017; Kerry & Murray 2018). 

Since masculinity in general (e.g. not just muscularity, but also square jaws etc.) is associated with attractiveness in males (see discussion here), this might explain at least part of the association between political views and physical attractiveness. 

On the other hand, among females, an opposite process may be at work. 

Among women, leftist politics seem to be strongly associated with feminist views.

Since feminists reject traditional female sex roles, it is likely they would be relatively less ‘feminine’ than other women, perhaps having been, on average, subjected to relatively higher levels of androgens in the womb, masculinizing both their behaviour and appearance. 

Yet it is relatively more feminine women, with feminine, sexually-dimorphic traits such as large breasts, low waist to hip ratios, and neotenous facial features, who are perceived by men as more attractive.

It is therefore unsurprising that feminist women in particular tend to be less attractive than women who are attracted to traditional sex roles.[25]

Developmental Disorders and MPAs

One study cited by Dutton found that observers are able to estimate a male’s IQ from a facial photograph alone at better than chance level (Kleisner 2014). To explain this, Dutton speculates: 

“Having a small nose is associated with Downs [sic] Syndrome and Foetal Alcohol Syndrome and this would have contributed to our assuming that those with smaller noses were less intelligent” (p51). 

Thus, he explains: 

“[Whereas] Downs [sic] Syndrome and Foetal Alcohol Syndrome are major disruptions of developmental pathways and they lead to very low intelligence and a very small nose… even minor disruptions would lead to slightly reduced intelligence and a slightly smaller nose” (p51-2). 

Indeed, foetal alcohol syndrome itself seems to exist on a continuum and is hence a matter of degree. 
 
Indeed, going further than Dutton, I would agree with publisher/blogger Chip Smith, who observes in his blog:

Dutton only mention[s] trisomy 21 (Down syndrome) in passing, but I think that’s a pretty solid place to start if you want to establish the baseline premise that at least some mental traits can be accurately inferred from external appearances.” 

Thus, the specific ‘look’ associated with Down Syndrome is a useful counterexample to cite to anyone who dismisses the idea of physiognomy, and the existence of any association between looks and ability or behaviour, a priori.

Indeed, other developmental disorders and chromosomal abnormalities, not mentioned by Dutton, are also associated with a specific ‘look’ – for example, Williams Syndrome, the distinctive appearance, and personality, associated with which has even been posited as the basis for the elf figure in folklore.[26]

Less obviously, it has even been suggested that there are also subtle facial features that distinguish autistic children from neurotypical children, and which also distinguish boys with relatively more severe forms of autism from those who are likely to be diagnosed as higher functioning (Aldridge et al 2011; Ozgen et al 2011). 

However, Dutton neglects to mention that there is in fact a sizable literature regarding the association between so-called minor physical anomalies (aka MPAs) and several psychiatric conditions including autism (Ozgen et al 2008), schizophrenia (Weinberg et al 2007; Xu et al 2011) and paedophilia (Dyshniku et al 2015). 

MPAs have also been identified in several studies as a correlate of criminal behaviour (Kandel et al 1989; see also Criminology: A Global Perspective: p70-1). 

Yet these MPAs are often the very same traits – the single transverse palmar crease, sandal toe gap, fissured tongue – that are also used to diagnose Down Syndrome in neonates.

The Morality of Making Judgements

But is it not superficial to judge a book by its cover? And, likewise, by extension, isn’t it morally wrong to judge people by their appearance? 

Indeed, is it not only morally wrong to judge people by their appearance but also, worse still, racist? 

After all, skin colour is obviously a part of our appearance, and did not our Lord and Saviour, Dr Martin Luther King, himself advocate for a world in which people would “not be judged by the color of their skin but by the content of their character”? 

Here, Dutton turns from science to morality, and convincingly contends that, at least in certain circumstances, it is indeed morally acceptable to judge people by appearances. 

It is true, he acknowledges, that most of the correlations that he has uncovered or reported are modest in magnitude. However, he is at pains to emphasize, the same is true of almost all correlations that are found throughout psychology and the social sciences. Thus, he exhorts: 

“Let us be consistent. It is very common in psychology to find a correlation between, for example, a certain behaviour and accidents (or health) of 0.15 or 0.2 and thus argue that action should be taken based on the results. These sizes are considered large enough to be meaningful and even for policy to be changed” (p82). 
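To put such numbers in perspective, a correlation r corresponds to a proportion r² of variance shared between the two variables – simple arithmetic worth keeping in mind when weighing claims of this kind:

```python
# Proportion of variance explained (r squared) for some typical effect sizes
# reported in psychology and the social sciences.
for r in (0.15, 0.2, 0.5):
    print(f"r = {r}: r^2 = {round(r ** 2, 4)}")
# r = 0.15: r^2 = 0.0225
# r = 0.2: r^2 = 0.04
# r = 0.5: r^2 = 0.25
```

In other words, a correlation of 0.2 leaves some 96 per cent of the variance unexplained – modest indeed, yet, as Dutton points out, entirely typical of the field.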

However, Dutton also includes a few sensible precautions and caveats to be borne in mind by those readers who might be tempted to apply some of his ideas overenthusiastically. 

First, he warns against making inferences regarding “people from a racial group with which you have relatively limited contact”, where the same cues used with respect to your own group may be inapplicable, or must be applied relative to the averages for the other group, something we may not be adept at doing (p82-3). 

Thus, to give an obvious example, among Caucasians, epicanthic folds (i.e. so-called ‘slanted’ eyes) may be indicative of a developmental disorder such as Down syndrome. However, among East Asians, Southeast Asians and some other racial groups (notably the Khoisan of Southern Africa), such folds are entirely normal and not indicative of any pathology. 

He also cautions regarding people’s ability to disguise their appearance, both by makeup and by plastic surgery. However, he notes that the tendency to wear excessive makeup, or undergo cosmetic surgery, is itself indicative of a certain personality type, and indeed often, Dutton asserts, of psychopathology (p84-5). 

Using physical appearance to make assessments is particularly useful, Dutton observes, “in extreme situations when a quick decision must be made” (p80). 

Thus, to take a deliberately extreme reductio ad absurdum, if we see someone stabbing another person, and this first person then approaches us in an aggressive manner brandishing the knife, then, if we take evasive action, we are, strictly speaking, judging by appearances. The person appears as if they are going to stab us, so we assume they are and act accordingly. However, no one would judge us morally wrong for so doing. 

However, in circumstances where we have access to greater individualizing information, the importance of appearances becomes correspondingly smaller. Here, a Bayesian approach is useful. 
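To make the Bayesian point concrete, here is a minimal sketch (the numbers are entirely hypothetical) of how a weak cue such as physique ought to move our estimate far less than strong individualizing information such as a documented track record:

```python
def update(prior_p: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_p / (1.0 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical base rate: half of all applicants have the conscientiousness
# needed to finish a dissertation. A weak appearance cue (likelihood ratio
# ~1.3) barely moves the estimate; a strong record of completed projects
# (likelihood ratio ~8) moves it substantially.
p = 0.5
print(round(update(p, 1.3), 2))  # weak appearance cue: 0.57
print(round(update(p, 8.0), 2))  # strong individualizing record: 0.89
```

The design point is simply that the more diagnostic the individualizing evidence available, the less work crude cues like appearance should be allowed to do.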

In 2013, evolutionary psychologist Geoffrey Miller caused predictable outrage and hysteria when he tweeted:

Dear obese PhD applicants: if you didn’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation #truth.” 

According to Dutton, as we have seen above, willpower is indeed likely correlated with obesity, because, as Miller argues, people lacking in willpower also likely lack the willpower to diet. 

However, a PhD supervisor surely has access to far more reliable information regarding a person’s personality and intelligence, including their conscientiousness and willpower, in the form of their application and CV, than is obtainable from their physique alone. 

Thus, the outrage that this tweet provoked, though indeed excessive and a reflection of the intolerant climate of so-called ‘cancel culture’ and public shaming in the contemporary West, was not entirely unwarranted. 

Similarly, if geneticist James Watson did indeed say, as he was rather hilariously reported as having said, that “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them”, he was indeed being prejudiced, because, again, an employer has access to more reliable information regarding applicants than their physique, namely, again, their application and CV. 

Obesity may often—perhaps even usually—be indicative of low levels of conscientiousness, willpower and intelligence. But it is not always so. It may instead, as Dutton himself points out, reflect only high extraversion, or indeed an unusual medical condition. 

However, even at job interviews, employers do still, in practice, judge people partly by their appearance. Moreover, we often regard them as well within their rights to do so. 

This is, of course, why we advise applicants to dress smartly for their interviews.

Endnotes

[1] If ‘How to Judge People by What They Look Like’ is indeed a very short book, then, it must be conceded that this is, by comparison, a rather long and detailed book review. While, as will become clear in the remainder of this review, I have many points of disagreement with Dutton (as well as many points of agreement) and there are many areas where I feel he is mistaken, nevertheless the length of this book review is, in itself, testament to the amount of thinking that Dutton’s short pamphlet has inspired in this reader. 

[2] In addition, I suspect few of the researchers whose work Dutton cites ever even regarded themselves as working within, or somehow reviving, the field of physiognomy. On the contrary, despite researching and indeed demonstrating robust associations between morphology and behavior, this idea may never even have occurred to them.
Thus, for example, I was already familiar with some of this literature even before reading Dutton’s book, but it never occurred to me that what I was reading was a burgeoning literature in a revived science of physiognomy. Indeed, despite being familiar with much of this literature, I suspect that, if questioned directly on the matter, I may well have agreed with the general consensus that physiognomy was a discredited pseudoscience.
Thus, one of the chief accomplishments of Dutton’s book is simply to establish that this body of research does indeed represent a revived science of physiognomy, and should be recognized and described as such, even if the researchers themselves rarely if ever use the term.

[3] Instead, it would surely uncover mostly papers in the field of ‘history of science’, documenting the history of physiognomy as a supposedly discredited pseudoscience, along with such other real and supposed pseudosciences as phrenology and eugenics.

[4] The studies mentioned in the two paragraphs that precede this endnote are simply a few that I happen to have stumbled across that are relevant to Dutton’s theme and which I happen to have been able to recall. No doubt, any list of relevant studies that I could compile would be just as non-exhaustive as that of Dutton, and my own list would be longer than Dutton’s only because I have the advantage of having read Dutton’s book beforehand.

[5] Thus, a young person dressed as a hippy in the 60s and 70s was more likely to ascribe to certain (usually rather silly and half-baked) political beliefs, and also more likely to engage in recreational drug-use and live on a commune, while a young man dressed as a teddy boy in Britain in the 1950s, a skinhead in the 1970s and 80s, a football casual in the 1990s, or indeed a chav today, may be perceived as more likely to be involved in violent crime and thuggery. The goth subculture also seems to be associated with a certain personality type, and also with self-harm and suicide.

[6] The association between IQ and socioeconomic status is reviewed in The Bell Curve: Intelligence and Class Structure in American Life (which I have reviewed here). The association between conscientiousness and socioeconomic status is weaker, probably because personality tests are a less reliable measure of conscientiousness than IQ tests are of IQ, since the former rely on self-report. This is the equivalent of an IQ test that, instead of asking test-takers to solve logical puzzles, simply asked them how good they perceived themselves to be at solving logical puzzles. Nevertheless, conscientiousness, as measured in personality tests, does indeed correlate with earnings and career advancement, albeit less strongly than does IQ (Spurk & Abele 2011; Wiersma & Kappe 2016).

[7] If some fat people are low in conscientiousness and intelligence, and others merely high in extraversion, there may, I suspect, also be a third category of people who do have self-control and self-discipline, but simply do not much care about whether they are fat or thin. However, given both the social stigma and health implications of obesity, this group is, I suspect, small. It is also likely young, since health dangers of obesity increase with age, and male, since both the social stigma of fatness, and especially its negative impact on mate value and attractiveness, seems to be greater for females. 

[8] Actually, whether roid rage is a real thing is a matter of some dispute. Although users of anabolic steroids do indeed have higher rates of violent crime, it has been suggested that this may be at least in part because the type of people who choose to use steroids are precisely those already prone to violence. In other words, there is a problem of self-selection bias.
Moreover, the association between testosterone and aggressive behaviours is more complex than this simple analysis assumes. One leading researcher in the field, Allan Mazur, argues that testosterone is not associated with aggression or violence per se, but only with dominance behaviours, which only sometimes manifest themselves through violent aggression. Thus, for example, a leading politician, business tycoon or chief executive of a large company may have high testosterone and be able to exercise dominance without resort to violence. However, a prisoner, being of low status in the legitimate world, is likely only able to assert dominance through violence (see Mazur & Booth 1998; Mazur 2009).

[9] Here, however, it is important to distinguish between the so-called organizing and ‘activating’ effects of testosterone. The latter can be equated with levels of circulating testosterone at any given time. The former, however, involves androgen levels at certain key points during development, especially in utero (i.e. in the womb) and during puberty, which thenceforth have long-term effects on both morphology and behaviour (and a person’s degree of susceptibility to circulating androgens).
Facial bone structure is presumably largely an effect of the ‘organizing’ effects of testosterone during development, though jaw shape is also affected by the size of the jaw muscles, which can be increased, it has been claimed, by regularly chewing gum. Bodily muscularity, on the other hand, is affected both by levels of circulating testosterone (hence the effects of anabolic steroids on muscle growth) and by levels of testosterone during development, not least because high levels of androgens during development increase the number and sensitivity of androgen receptors, which affect the potential for muscular growth.

[10] In this section, I have somewhat conflated spatial ability, mathematical ability and autism traits. However, these are themselves, of course, not the same, though each is probably associated with the others, albeit again not necessarily in a linear relationship.

[11] I have been unable to discover any evidence for this supposed association between lack of balding and impotence in men. On the contrary, googling the terms ‘male pattern baldness’ and ‘impotence’ turns up only a few results, mostly people speculating as to whether there is a positive correlation between balding and impotence in males, if only on the very unpersuasive ground that the two conditions tend to have a similar age of onset (i.e. around middle-age).

[12] In contrast, the shaven-head skinhead-look, or close-cropped military-style induction cut, buzz cut or high and tight is, of course, perceived as a quintessentially masculine, and even thuggish, hairstyle. This is perhaps because, in addition to contrasting with the long hair typically favoured by females, it also, by reducing the apparent size of the upper part of the head, makes the lower part of the face (e.g. the jaw) and the body appear comparatively larger, and large jaws are a masculine trait. Thus, Nancy Etcoff observes:

The absence of hair on the head serves to exaggerate signals of strength. The smaller the head the bigger the look of the neck and body. Bodybuilders often shave or crop their hair, the size contrast between the head and neck and shoulders emphasizing the massiveness of the chest” (Survival of the Prettiest: p126).

[13] The source that Dutton cites for this claim is (Nieschlag & Behr 2013).

[14] In America, it has been suggested, especially tall boys are not treated with testosterone to prevent their growing any taller. Instead, they are encouraged to attempt to make a successful career in professional basketball.

[15] On the other hand, one Swedish study investigating the association between height and violent crime found that the shortest men in Sweden had almost double the rate of convictions for violent crimes as compared to the tallest men in Sweden. However, after controlling for potential confounds (e.g. socioeconomic status and intelligence, both of which positively correlate with height), the association was reversed, with taller men having a somewhat higher likelihood of being convicted of a violent crime (Beckley et al 2014).

[16] According to Dutton, the correlation between height and IQ is only about r = 0.1. This is a modest correlation even by psychology and social science standards.

[17] In other words, although modest in magnitude, the association between height and IQ has been replicated in so many studies with sufficiently large and representative sample sizes that we can be certain that it represents a real association in the population at large, not an artifact of small, unrepresentative or biased sampling in just one or a few studies. 

[18] An alternative explanation for the absence of a within-family correlation between height and intelligence is that some factor that differs as between families causes both increased height and increased intelligence. An obvious candidate would be malnutrition. However, in modern western economies where there is a superabundance of food, starvation is almost unknown and obesity is far more common than undernourishment even among the ostensible poor (indeed, as noted by Dutton, especially among the ostensible poor), it is doubtful that undernourishment is a significant factor in explaining either small stature or low IQs, especially since height is mostly heritable, at least by the time a person reaches adulthood.

[19] The conventional wisdom is that beards went out of fashion during the twentieth century precisely because their role in spreading germs came to be more widely known. Thus, Nancy Etcoff writes:

Facial hair has been less abundant in this century than in centuries past (except in the 1960s) partly because medical opinion turned against them. As people became increasingly aware of the role of germs in spreading diseases, beards came to be seen as repositories of germs. Previously, they had been advised by doctors as a means to protect the throat and filter air to the lungs” (Survival of the Prettiest: p156-7). 

Of course, this is not at all inconsistent with the notion that beards are perceived as attractive by women precisely because they represent a potential vector of infection and hence advertise the health and robustness of the male whom they adorn, as contended by Dutton. On the contrary, the fact that beards are indeed associated with infection is consistent with and supportive of Dutton’s theory.

[20] It would be interesting to discover whether these findings generalize to other, non-western cultures, especially those where beards are universal or the norm (e.g. among Muslims in the Middle East). It would also be interesting to discover whether women’s perceptions regarding the attractiveness of men with beards have changed as beards have gone in and out of fashion.

[21] Perhaps this is because, although age is still associated with status, it is no longer as socially acceptable for older men to marry, or enter sexual relationships with, much younger women or girls as it was in the past, and such relationships are now less common. Indeed, in the last few years, this has become especially socially unacceptable. Therefore, given that most men are maximally attracted to females in this younger age category, they prefer to be thought of as younger so that it is more acceptable for them to seek relationships with younger, more attractive females.
Actually, while older men tend to have higher status on average, I suspect that, after controlling for status, it is younger men who would be perceived as more attractive. Certainly, a young multi-millionaire would surely be considered a more eligible bachelor than an older homeless man. Therefore, age per se is not attractive; only high status is attractive, which happens to correlate with age.

[22] This idea is again based on Philippe Rushton’s Differential K theory, which I have reviewed here and here.

[23] Dutton is apparently aware of this objection. He acknowledges, albeit in a different book, that “Intelligence, in general, is associated with health” (Why Islam Makes You Stupid: p174). However, in this same book, he also claims that: 

Intelligence has been shown to be only weakly associated with mutational load” (Why Islam Makes You Stupid: p169). 

Interestingly, Dutton also claims in this book: 

Very high intelligence predicts autism” (Why Islam Makes You Stupid: p175). 

This claim, namely that exceptionally high intelligence is associated with autism, seems anecdotally plausible. Certainly, autism seems to have a complex and interesting relationship with intelligence.
Unfortunately, however, Dutton does not cite a source for the claim that exceptionally high intelligence is associated with autism. Nevertheless, according to data cited here, there is indeed a greater variance in the IQs of autistic people, with greater proportions of autistic people at both tail-ends of the bell curve, the author even referring to an inverted bell curve for intelligence among autistic people, though, even according to her own cited data, this appears to be an exaggeration. However, this is not a scholarly source, but rather appears to be the website of a not entirely disinterested advocacy group, and it is not entirely clear from where this data derives, the piece referring only to data from the Netherlands collected by the Dutch Autism Register (NAR).

[24] Admittedly, Dutton does cite one study showing that subjects can identify Mormons from facial photographs alone, and that the two groups differed in skin quality (Rule et al 2010). However, this might reflect merely the health advantages resulting from the religiously imposed abstention from the consumption of alcohol, tobacco, tea and coffee.
For what it’s worth, my own subjective and entirely anecdotal impression is almost the opposite of Dutton’s, at least here in secular modern Britain, where anyone who identifies as Christian, let alone a fundamentalist, unless perhaps s/he is elderly, tends to be regarded as a bit odd.
An interesting four-part critique of this theory, along very different lines from my own, is provided by Scott A McGreal at the Psychology Today website, see here, here, here, and here. Dutton responds with a two-part rejoinder here and here.

[25] However, when it comes to actual politicians, I suspect this difference may be attenuated, or even nonexistent, since pursuing a career in politics is, by its very nature, a very untraditional, and unfeminine, career choice, most likely because, in Darwinian terms, political power has a greater reproductive payoff for men than for women. Thus, it is hardly surprising that leading female politicians, even those who theoretically champion traditional sex roles, tend themselves to be quite butch and masculine in appearance and often as unattractive as their leftist opponents (e.g. Ann Widdecombe). Indeed, even Ann Coulter, a relatively attractive woman, at least by the standards of female political figures, has been mocked for her supposedly mannish appearance and pronounced Adam’s apple.
Moreover, most leading politicians are at least middle-aged, and female attractiveness peaks very young, in the mid- to late-teens and early twenties.

[26] Another medical condition associated with a specific look, as well as with mental disability, is cretinism, though, due to medical advances, most people with the condition in western societies develop normally and no longer manifest either the distinctive appearance or the mental disability.

References 

Aldridge et al (2011) Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Molecular Autism 14;2(1):15. 
Apicella et al (2015) Hadza Hunter-Gatherer Men do not Have More Masculine Digit Ratios (2D:4D) American Journal of Physical Anthropology 159(2):223-32. 
Bagenjuk et al (2019) Personality Traits and Obesity, International Journal of Environmental Research and Public Health 16(15): 2675. 
Bailey & Hurd (2005) Finger length ratio (2D:4D) correlates with physical aggression in men but not in women. Biological Psychology 68(3):215-22. 
Batrinos (2014) The endocrinology of baldness. Hormones 13(2): 197–212. 
Beckley et al (2014) Association of height and violent criminality: results from a Swedish total population study. International Journal of Epidemiology 43(3):835-42 
Benderlioglu & Nelson (2005) Digit length ratios predict reactive aggression in women, but not in men Hormones and Behavior 46(5):558-64. 
Berggren et al (2017) The right look: Conservative politicians look better and voters reward it Journal of Public Economics 146:  79-86. 
Blanchard & Lyons (2010) An investigation into the relationship between digit length ratio (2D: 4D) and psychopathy, British Journal of Forensic Practice 12(2):23-31. 
Buru et al (2017) Evaluation of the hand anthropometric measurement in ADHD children and the possible clinical significance of the 2D:4D ratio, Eastern Journal of Medicine 22(4):137-142. 
Case & Paxson (2008) Stature and status: Height, ability, and labor market outcomes, Journal of Political Economy 116(3): 499–532. 
Cash (1990) Losing Hair, Losing Points?: The Effects of Male Pattern Baldness on Social Impression Formation. Journal of Applied Social Psychology 20(2):154-167. 
De Waal et al (1995) High dose testosterone therapy for reduction of final height in constitutionally tall boys: Does it influence testicular function in adulthood? Clinical Endocrinology 43(1):87-95. 
Dixson & Vasey (2012) Beards augment perceptions of men’s age, social status, and aggressiveness, but not attractiveness, Behavioral Ecology 23(3): 481–490. 
Dutton et al (2017) The Mutant Says in His Heart, “There Is No God”: the Rejection of Collective Religiosity Centred Around the Worship of Moral Gods Is Associated with High Mutational Load Evolutionary Psychological Science 4:233–244. 
Dysniku et al (2015) Minor Physical Anomalies as a Window into the Prenatal Origins of Pedophilia, Archives of Sexual Behavior 44:2151–2159. 
Elias et al (2012) Obesity, Cognitive Functioning and Dementia: Back to the Future, Journal of Alzheimer’s Disease 30(s2): S113-S125. 
Ellis & Hoskin (2015) Criminality and the 2D:4D Ratio: Testing the Prenatal Androgen Hypothesis, International Journal of Offender Therapy and Comparative Criminology 59(3):295-312 
Fossen et al (2022) 2D:4D and Self-Employment: A Preregistered Replication Study in a Large General Population Sample Entrepreneurship Theory and Practice 46(1):21-43. 
Gouchie & Kimura (1991) The relationship between testosterone levels and cognitive ability patterns Psychoneuroendocrinology 16(4): 323-334. 
Griffin et al (2012) Varsity athletes have lower 2D:4D ratios than other university students, Journal of Sports Sciences 30(2):135-8. 
Hilgard et al (2019) Null Effects of Game Violence, Game Difficulty, and 2D:4D Digit Ratio on Aggressive Behavior, Psychological Science 30(1):095679761982968 
Hollier et al (2015) Adult digit ratio (2D:4D) is not related to umbilical cord androgen or estrogen concentrations, their ratios or net bioactivity, Early Human Development 91(2):111-7 
Hönekopp & Urban (2010) A meta-analysis on 2D:4D and athletic prowess: Substantial relationships but neither hand out-predicts the other, Personality and Individual Differences 48(1):4-10. 
Hoskin & Ellis (2014) Fetal testosterone and criminality: Test of evolutionary neuroandrogenic theory, Criminology 53(1):54-73. 
Ishikawa et al (2001) Increased height and bulk in antisocial personality disorder and its subtypes. Psychiatry Research 105(3):211-219. 
Işık et al (2020) The Relationship between Second-to-Fourth Digit Ratios, Attention-Deficit/Hyperactivity Disorder Symptoms, Aggression, and Intelligence Levels in Boys with Attention-Deficit/Hyperactivity Disorder, Psychiatry Investigation 17(6):596–602. 
Janowski et al (1994) Testosterone influences spatial cognition in older men. Behavioral Neuroscience 108(2):325-32. 
Jokela et al (2012) Association of personality with the development and persistence of obesity: a meta-analysis based on individual–participant data, Etiology and Pathophysiology 14(4): 315-323. 
Kanazawa (2014) Intelligence and obesity: Which way does the causal direction go? Current Opinion in Endocrinology, Diabetes and Obesity (5):339-44. 
Kandel et al (1989) Minor physical anomalies and recidivistic adult violent criminal behavior, Acta Psychiatrica Scandinavica 79(1) 103-107. 
Kangassalo et al (2011) Prenatal Influences on Sexual Orientation: Digit Ratio (2D:4D) and Number of Older Siblings, Evolutionary Psychology 9(4):496-508 
Kerry & Murray (2019) Is Formidability Associated with Political Conservatism?  Evolutionary Psychological Science 5(2): 220–230. 
Keshavarz et al (2017) The Second to Fourth Digit Ratio in Elite and Non-Elite Greco-Roman Wrestlers, Journal of Human Kinetics 60: 145–151. 
Kleisner et al (2014) Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women. PLoS ONE 9(3): e81237. 
Kordsmeyer et al (2018) The relative importance of intra- and intersexual selection on human male sexually dimorphic traits, Evolution and Human Behavior 39(4): 424-436. 
Kosinski & Wang (2018) Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114(2):246-257. 
Kyselicová et al (2021) Autism spectrum disorder and new perspectives on the reliability of second to fourth digit ratio, Developmental Psychobiology 63(6). 
Li et al (2016) The relationship between digit ratio and sexual orientation in a Chinese Yunnan Han population, Personality and Individual Differences 101:26-29. 
Lippa (2003) Are 2D:4D finger-length ratios related to sexual orientation? Yes for men, no for women, Journal of Personality & Social Psychology 85(1):179-8 
Lolli et al (2017) A comprehensive allometric analysis of 2nd digit length to 4th digit length in humans, Proceedings of the Royal Society B: Biological Sciences 284(1857):20170356 
Malamed (1992) Personality correlates of physical height. Personality and Individual Differences 13(12):1349-1350. 
Manning & Taylor (2001) Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans, Evolution & Human Behavior 22(1):61-69. 
Manning et al (2001) The 2nd to 4th digit ratio and autism, Developmental Medicine & Child Neurology 43(3):160-164. 
Martel et al (2008) Masculinized Finger-Length Ratios of Boys, but Not Girls, Are Associated With Attention-Deficit/Hyperactivity Disorder, Behavioral Neuroscience 122(2):273-81. 
Martin et al (2015) Associations between obesity and cognition in the pre-school years, Obesity 24(1) 207-214 
Mazur & Booth (1998) Testosterone and dominance in men. Behavioral and Brain Sciences, 21(3), 353–397. 
Mazur (2009) Testosterone and violence among young men. In Walsh & Beaver (eds) Biosocial Criminology: New Directions in Theory and Research. New York: Routledge. 
Moffat & Hampson (1996) A curvilinear relationship between testosterone and spatial cognition in humans: Possible influence of hand preference. Psychoneuroendocrinology. 21(3):323-37. 
Murray et al (2014) How are conscientiousness and cognitive ability related to one another? A re-examination of the intelligence compensation hypothesis, Personality and Individual Differences, 70, 17–22. 
Nieschlag & Behr (2013) Testosterone Therapy. In Nieschlag & Behr (eds) Andrology: Male Reproductive Health and Dysfunction. New York: Springer. 
Ozgen et al (2010) Minor physical anomalies in autism: a meta-analysis. Molecular Psychiatry 15(3):300–7. 
Ozgen et al (2011) Morphological features in children with autism spectrum disorders: a matched case-control study. Journal of Autism and Developmental Disorders 41(1):23-31. 
Peterson & Palmer (2017) Effects of physical attractiveness on political beliefs. Politics and the Life Sciences 36(02):3-16 
Persico et al (2004) The Effect of Adolescent Experience on Labor Market Outcomes: The Case of Height, Journal of Political Economy 112(5): 1019-1053. 
Pratt et al (2016) Revisiting the criminological consequences of exposure to fetal testosterone: a meta-analysis of the 2d:4d digit ratio, Criminology 54(4):587-620. 
Price et al (2017). Is sociopolitical egalitarianism related to bodily and facial formidability in men? Evolution and Human Behavior, 38, 626-634. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans, Evolution and Human Behavior 31(3):157-175. 
Rammstedt et al (2016) The association between personality and cognitive ability: Going beyond simple effects, Journal of Research in Personality 62: 39-44. 
Rosenberg & Kagan (1987) Iris pigmentation and behavioral inhibition Developmental Psychobiology 20(4):377-92. 
Rosenberg & Kagan (1989) Physical and physiological correlates of behavioral inhibition Developmental Psychobiology 22(8):753-70. 
Rule et al (2010) On the perception of religious group membership from faces. PLoS ONE 5(12):e14241. 
Salas-Wright & Vaughn (2016) Size Matters: Are Physically Large People More Likely to be Violent? Journal of Interpersonal Violence 31(7):1274-92. 
Spurk & Abele (2011) Who Earns More and Why? A Multiple Mediation Model from Personality to Salary, Journal of Business and Psychology 26: 87–103. 
Sutin et al (2011) Personality and Obesity across the Adult Lifespan Journal of Personality and Social Psychology 101(3): 579–592. 
Valla et al (2011). The accuracy of inferences about criminality based on facial appearance. Journal of Social, Evolutionary, and Cultural Psychology, 5(1), 66-91. 
Voracek et al (2011) Digit ratio (2D:4D) and sex-role orientation: Further evidence and meta-analysis, Personality and Individual Differences 51(4): 417-422. 
Weinberg et al (2007) Minor physical anomalies in schizophrenia: A meta-analysis, Schizophrenia Research 89: 72–85. 
Wiersma & Kappe (2015) Selecting for extroversion but rewarding for conscientiousness, European Journal of Work and Organizational Psychology 26(2): 314-323. 
Williams et al (2000) Finger-Length Ratios and Sexual Orientation, Nature 404(6777):455-456. 
Xu et al (2011) Minor physical anomalies in patients with schizophrenia, unaffected first-degree relatives, and healthy controls: a meta-analysis, PLoS One 6(9):e24129. 
Xu & Zheng (2016) The Relationship Between Digit Ratio (2D:4D) and Sexual Orientation in Men from China, Archives of Sexual Behavior 45(3):735-41. 

Desmond Morris’s ‘The Naked Ape’: A Pre-Sociobiological Work of Human Ethology 

Desmond Morris, The Naked Ape: A Zoologist’s Study of the Human Animal (New York: McGraw-Hill Book Company, 1967)

First published in 1967, ‘The Naked Ape’, a popular science classic authored by the already famous British zoologist and TV presenter Desmond Morris, belongs to the pre-sociobiological tradition of human ethology. 

In the most general sense, the approach adopted by the human ethologists, who included not only Morris but also the playwright Robert Ardrey, the anthropologists Lionel Tiger and Robin Fox, and the brilliant Nobel-prize-winning ethologist, naturalist, zoologist, pioneering evolutionary epistemologist and part-time Nazi sympathizer Konrad Lorenz, was correct. 

They sought to study the human species from the perspective of zoology. In other words, they sought to adopt the disinterested perspective, and detachment, of, as Edward O Wilson was later to put it, “zoologists from another planet” (Sociobiology: The New Synthesis: p547). 

Thus, Morris proposed cultivating: 

An attitude of humility that is becoming to proper scientific investigation… by deliberately and rather coyly approaching the human being as if he were another species, a strange form of life on the dissecting table” (p14-5).  

In short, Morris proposed to study humans just as a zoologist would any other species of non-human animal. 

Such an approach was an obvious affront to anthropocentric notions of human exceptionalism – and also a direct challenge to the rather less scientific approach of most sociologists, psychologists, social and cultural anthropologists and other such ‘professional damned fools’, who, at that time, almost all studied human behavior in isolation from, and largely ignorance of, biology, zoology, and the scientific study of the behavior of all animals other than humans. 

As a result, such books inevitably attracted controversy and criticism. Such criticism, however, invariably missed the point. 

The real problem was not that the ethologists sought to study human behavior in just the same way a zoologist would study the behavior of any nonhuman animal, but rather that the study of the behavior of nonhuman animals itself remained, at this time, very much in its infancy. 

Thus, the field of animal behavior was to be revolutionized just a decade or so after the publication of ‘The Naked Ape’ by the approach that came to be known, first, as sociobiology, now more often as behavioral ecology, or, when applied to humans, evolutionary psychology. 

These approaches sought to understand behavior in terms of fitness maximization – in other words, on the basis of the recognition that organisms have evolved to engage in behaviors which tended to maximize their reproductive success in ancestral environments. 

Mathematical models, often drawn from economics and game theory, were increasingly employed. In short, behavioral biology was becoming a mature science. 

In contrast, the earlier ethological tradition was, even at its best, very much a soft science. 

Indeed, much such work, for example Jane Goodall’s rightly-celebrated studies of the chimpanzees of Gombe, was almost pre-scientific in its approach, involving observation, recording and description of behaviors, but rarely the actual testing or falsification of hypotheses. 

Such research was obviously important. Indeed, Goodall’s was positively groundbreaking. 

After all, the observation of the behavior of an organism is almost a prerequisite for the framing of hypotheses about the behavior of that organism, since hypotheses are, in practice, rarely generated in an informational vacuum from pure abstract theory. 

However, such research was hardly characteristic of a mature and rigorous science. 

When hypotheses regarding the evolutionary significance of behavior patterns were formulated by early ethologists, this was done on a rather casual ad hoc basis, involving a kind of ‘armchair adaptationism’, which could perhaps legitimately be dismissed as the spinning of, in Stephen Jay Gould’s famous phrase, just so stories

Thus, a crude group selectionism went largely unchallenged. Yet, as George C Williams was to show, and Richard Dawkins later to forcefully reiterate in The Selfish Gene (reviewed here), behaviors are unlikely to evolve that benefit the group or species if they involve a cost to the inclusive fitness of the individual engaging in the behavior. 

Robert Wright picks out a good example of this crude group selectionism from ‘The Naked Ape’ itself, quoting Morris’s claim that, over the course of human evolution: 

To begin with, the males had to be sure that their females were going to be faithful to them when they left them alone to go hunting. So the females had to develop a pairing tendency” (p64). 

To anyone schooled in the rudiments of Dawkinsian selfish gene theory, the fallacy should be obvious. But, just in case we didn’t spot it, Wright has picked it out for us: 

Stop right there. It was in the reproductive interests of the males for the females to develop a tendency toward fidelity? So natural selection obliged the males by making the necessary changes in the females? Morris never got around to explaining how, exactly, natural selection would perform this generous feat” (The Moral Animal: p56). 

In reality, couples have a conflict of interest here, and the onus is clearly on the male to evolve some mechanism of mate-guarding, though a female might conceivably evolve some way to advertise her fidelity if, by so doing, she secured increased male parental investment and provisioning, hence increasing her own reproductive success.[1]

In short, mating is Machiavellian. A more realistic view of human sexuality, rooted in selfish gene theory, is provided by Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here). 

Unsuccessful Societies? 

The problems with ‘The Naked Ape’ begin in the very first chapter, where Morris announces, rather oddly, that, in studying the human animal, he is largely uninterested in the behavior of contemporary foraging groups or other so-called ‘primitive’ peoples. Thus, he bemoans: 

The earlier anthropologists rushed off to all kinds of unlikely corners of the world… scattering to remote cultural backwaters so atypical and unsuccessful that they are nearly extinct. They then returned with startling facts about the bizarre mating customs, strange kinship systems, or weird ritual procedures of these tribes, and used this material as though it were of central importance to the behaviour of our species as a whole. The work done by these investigators… did not tell us anything about the typical behaviour of typical naked apes. This can only be done by examining the common behaviour patterns that are shared by all the ordinary, successful members of the major cultures-the mainstream specimens who together represent the vast majority. Biologically, this is the only sound approach” (p10).[2]

Thus, today, political correctness has wholly banished the word ‘primitive’ from the anthropological lexicon. It is, modern anthropologists insist, demeaning and pejorative.  

Indeed, post-Boasian cultural anthropologists in America typically reject the very notion that some societies are more advanced than others, championing instead a radical cultural relativism and insisting we have much to learn from the lifestyle and traditions of hunter-gatherers, foragers, savage cannibals and other such ‘indigenous peoples’. 

Morris also rejects the term ‘primitive’ as a useful descriptor for hunter-gatherer and other technologically-backward peoples, but for diametrically opposite reasons. 

Thus, for Morris, to describe foraging groups as ‘primitive’ is to rather give them altogether too much credit: 

The simple tribal groups that are living today are not primitive, they are stultified. Truly primitive tribes have not existed for thousands of years. The naked ape is essentially an exploratory species and any society that has failed to advance has in some sense failed, ‘gone wrong’. Something has happened to it to hold it back, something that is working against the natural tendencies of the species to explore and investigate the world around it” (p10). 

Instead, Morris proposes to focus on contemporary western societies, declaring: 

North America… is biologically a very large and successful culture and can, without undue fear of distortion, be taken as representative of the modern naked ape” (p51) 

It is indeed true that, with the diffusion of American media and consumer goods, American culture is fast becoming ubiquitous. However, this is a very recent development in historical terms, let alone on the evolutionary timescale of most interest to biologists. 

Indeed, viewed historically and cross-culturally, it is we westerners who are the odd, aberrant ones. 

Thus, we have even been termed, in a memorable backronym, WEIRD (Western, Educated, Industrialized, Rich and Democratic), and hence quite aberrant, not only in terms of our lifestyle and prosperity, but also in terms of our psychology and modes of thinking. 

Moreover, while foraging groups, and other pre-modern peoples, may indeed now be tottering on the brink of extinction, this again is a very recent development. 

Indeed, far from being aberrant, this was the lifestyle adopted by all humans throughout most of the time we have existed as a species, including during the period when most of our unique physical and behavioural adaptations evolved.

In short, although we may inhabit western cities today, this is not the environment where we evolved, nor that to which our brains and bodies are primarily adapted.[3]

Therefore, given that theirs was the lifestyle of our ancestors during the period when most of our behavioral and bodily adaptations evolved, primitive peoples must necessarily occupy a special place in any evolutionary theory of human behaviour.[4]

Indeed, Morris himself admits as much just a few pages later, where he acknowledges that: 

“The fundamental patterns of behavior laid down in our early days as hunting apes still shine through all our affairs, no matter how lofty they may be” (p40). 

Indeed, a major theme of ‘The Naked Ape’ is the extent to which the behaviour even of wealthy white westerners is nevertheless fundamentally shaped and dictated by the patterns of foraging set out in our ancient hunter-gatherer past. 

This, of course, anticipates the concept of the environment of evolutionary adaptedness (or EEA) in modern evolutionary psychology.

Thus, Morris suggests that the pattern of men going out to work to financially provision wives and mothers who stay home with dependent offspring reflects the ancient role of men as hunters provisioning their wives and children: 

“Behind the façade of modern city life there is the same old naked ape. Only the names have been changed: for ‘hunting’ read ‘working’, for ‘hunting grounds’ read ‘place of business’, for ‘home base’ read ‘house’, for ‘pair-bond’ read ‘marriage’, for ‘mate’ read ‘wife’, and so on” (p84).[5]

In short, while we must explain the behaviors of contemporary westerners, no less than those of primitive foragers, in the light of Darwinian evolution, nevertheless all such behaviors must be explained ultimately in terms of adaptations that evolved over previous generations under very different conditions. 

Indeed, in the sequel to ‘The Naked Ape’, Morris develops this very point further, arguing that modern cities, in particular, are unnatural environments for humans, and rejecting the then-familiar description of cities as concrete jungles on the grounds that, whereas jungles are the “natural habitat” of wild animals, modern cities are very much an unnatural habitat for humans. 

Instead, he argues, the better analogy for modern cities is a ‘Human Zoo’:

“The comparison we must make is not between the city dweller and the wild animal but between the city dweller and the captive animal. The city dweller is no longer living in conditions natural for his species. Trapped, not by a zoo collector, but by his own brainy brilliance, he has set himself up in a huge restless menagerie where he is in constant danger of cracking under the strain” (The Human Zoo: pvii). 

Nakedness 

Morris adopts what he calls a zoological approach. Thus, unlike modern evolutionary psychologists, he focuses as much on explaining our physiology as our behavior and psychology. Indeed, it is in explaining the peculiarities of human anatomy that Morris’s book is at its best.[6]

This begins, appropriately enough, with the trait that gives him his preferred name for our species, and also furnishes his book with its title – namely our apparent nakedness or hairlessness. 

Having justified calling us ‘The Naked Ape’ on zoological grounds – namely, that this is the first thing a naturalist would notice upon observing our species – Morris then comes close to contradicting himself, admitting that, given the densely concentrated hairs on our heads (as well as the less densely packed hairs on much of the remainder of our bodies), we actually have more hairs on our bodies than do chimpanzees.[7]

However, Morris summarily dispatches this objection: 

“It is like saying that because a blind man has a pair of eyes, he is not blind. Functionally, we are stark naked and our skin is fully exposed” (p42). 

Why then are we so strangely hairless? Neoteny, Morris proposes, provides part of the answer. 

This refers to the tendency of humans to retain into maturity traits that are, in other primates, restricted to juveniles, nakedness among them. 

Neoteny is a major theme in Morris’s book – and indeed in human evolution.

Besides our hairlessness, other human anatomical features that have been explained either partly or wholly in terms of neoteny, whether by Morris or by other evolutionists, include our brain size, growth patterns, inventiveness, upright posture, spinal curvature, smaller jaws and teeth, forward-facing vaginas, lack of a penis bone, the length of our limbs and the retention of the hymen into sexual maturity (see below). Indeed, many of these traits are explicitly discussed by Morris himself as resulting from neoteny.

However, while neoteny may supply the means by which our relative hairlessness evolved, it is not a sufficient explanation for why this development occurred, because, as Morris points out: 

“The process of neoteny is one of the differential retarding of developmental processes” (p43). 

In other words, humans are neotenous in respect of only some of our characters, not all of them. After all, an ape that remained infantile in all respects would never evolve, for the simple reason that it would never reach sexual maturity and hence remain unable to reproduce. 

Instead, only certain specific juvenile or infantile traits are retained into adulthood, and the question then becomes why these specific traits were the ones chosen by natural selection to be retained. 

Thus, Morris concludes: 

“It is hardly likely… that an infantile trait as potentially dangerous as nakedness was going to be allowed to persist simply because other changes were slowing down unless it had some special value to the new species” (p43). 

As to what this “special value” (i.e. selective advantage) might have been, Morris considers, in turn, various candidates.  

One theory considered by Morris relates to our susceptibility to insect parasites.  

Because humans, unlike many other primates, return to a home base to sleep most nights, we are, Morris reports, afflicted with fleas as well as lice (p28-9). Yet fur, Morris observes, is a good breeding ground for such parasites (p38-9). 

Perhaps, then, Morris imagines, we might have evolved hairlessness in order to minimize the problems posed by such parasites. 

However, Morris rejects this as an adequate explanation, since, he observes: 

“Few other den dwelling mammals… have taken this step” (p43). 

An alternative explanation implicates sexual selection in the evolution of human hairlessness.  

Substantial sex differences in hairiness, as well as the retention of pubic hair around the genitalia, suggest that sexual selection may indeed have played a role in the evolution of our relative hairlessness as compared to other mammals.

Interestingly, this was Darwin’s own proposed explanation for the loss of body hair during the course of our evolution, Darwin writing in The Descent of Man that:

“No one supposes that the nakedness of the skin is any direct advantage to man; his body therefore cannot have been divested of hair through natural selection” (The Descent of Man).

Darwin instead proposes:

“Since in all parts of the world women are less hairy than men… we may reasonably suspect that this character has been gained through sexual selection” (The Descent of Man).

Morris, however, rejects this explanation on the grounds that: 

“The loss of bodily insulation would be a high price to pay for a sexy appearance alone” (p46). 

But other species often pay a high price for sexually selected bodily adornments. For example, the peacock sports a huge, brightly coloured and elaborate tail, thought to have evolved through sexual selection (i.e. female choice), which is costly to grow and maintain, impedes his mobility and renders him conspicuous to predators. 

Indeed, according to Amotz Zahavi’s handicap principle, it is precisely the high cost of such sexually-selected adornments that made them reliable fitness indicators and hence attractive to potential mates, because only a highly ‘fit’ male can afford to grow such a costly, inconvenient and otherwise useless appendage. 

Morris also gives unusually respectful consideration to the highly-controversial aquatic ape theory as an explanation for human hairlessness. 

Thus, if humans did indeed pass through an aquatic, or at least amphibious, stage during our evolution, then, Morris agrees, this may indeed explain our hairlessness, since it is indeed true that other aquatic or semiaquatic mammals, such as whales, dolphins and seals, also seem to have jettisoned most of their fur over the course of their evolution. 

This is presumably because fur increases frictional drag while in the water and hence impedes swimming ability, and is among the reasons that elite swimmers also remove their body-hair before competition. 

Indeed, our loss of body hair is among the human anatomical peculiarities that are most often cited by champions of aquatic ape theory in favor of the theory that humans did indeed pass through an aquatic phase during our evolution. 

However, aquatic ape theory is highly controversial, and is rejected by almost all mainstream evolutionists and biological anthropologists.  

As I have said, Morris, for his part, gives respectful consideration to the theory, and, unlike many other anthropologists and evolutionists, does not dismiss it out of hand as entirely preposterous and unworthy even of further consideration.[8]

On the contrary, Morris credits the theory as “ingenious”, acknowledging that, if true, it might explain many otherwise odd features of human anatomy, including not just our relative hairlessness, but also the retention of hairs on our head, the direction of the hairs on our backs, our upright posture, ‘streamlined’ bodies, dexterity of our hands and the thick extra layer of sub-cutaneous fat beneath our skin that is lacking in other primates. 

However, while acknowledging that the theory explains many curious anomalies of human physiology, Morris ultimately rejects ‘aquatic ape theory’ as altogether too speculative given the complete lack of fossil evidence in support of the theory – the same reason that most other evolutionists also reject the theory. 

Thus, he concludes: 

“It demands… the acceptance of a hypothetical major evolutionary phase for which there is no direct evidence” (p45-6). 

Morris also rejects the theory that was, according to Morris himself, the most widely accepted explanation for our hairlessness among other evolutionists at the time he was writing – namely the theory that our hairlessness evolved as a cooling mechanism when our ancestors left the shaded forests for the open African savannah.

The problem with this theory, as Morris explains it, is that:  

“Exposure of the naked skin to the air certainly increases the chances of heat loss, but it also increases heat gain at the same time and risks damage from the sun’s rays” (p47). 

Thus, it is not at all clear that moving into the open savannah would indeed select for hairlessness. Otherwise, as Morris points out, we might expect other carnivorous, predatory mammals such as lions and jackals, who also inhabit the savannah, to have similarly jettisoned most of their fur. 

Ultimately, however, Morris accepts instead a variant on this idea – namely that hairlessness evolved to prevent overheating while chasing prey when hunting. 

However, this fails to explain why it is men’s bodies that are generally much hairier than those of women, even though, cross-culturally, in most foraging societies, it is men who do most, if not all, of the hunting. 

It also raises the question as to why other mammalian carnivores that inhabit the African savannah and similar environments, such as lions and jackals, have not similarly shed their body hair – especially since these species rely more on their speed to catch prey, whereas humans, armed with arrows and javelins as well as hunting dogs, do not always have to catch prey themselves in order to kill it. 

I would tentatively venture an alternative theory, one which evidently did not occur to Morris – namely, perhaps our hairlessness evolved in concert with our invention and use of clothing (e.g. animal hides) – i.e. a case of gene-culture coevolution

Clothing would provide an alternative means of protection from sun and cold alike, but one with the advantage that, unlike bodily fur, it can be discarded (and put back on) on demand. 

This explanation suggests that, paradoxically, we became naked apes at the same time, and indeed precisely because, we had also become clothed apes. 

The Sexiest Primate? 

One factor said to have contributed to the book’s commercial success was the extent to which its thesis chimed with the prevailing spirit of the age during which it was first published, namely the 1960s. 

Thus, as already alluded to, it presented, in many ways, an idealized and romantic version of human nature, with its crude group-selectionism and emphasis on cooperation within groups without a concomitant emphasis on conflict between groups, and its depiction of humans as a naturally monogamous pair-bonding species, without a concomitant emphasis on the prevalence of infidelity, desertion, polygamy, Machiavellian mating strategies and even rape.  

Another element that jibed with the zeitgeist of the sixties was Morris’s emphasis on human sexuality, with Morris famously declaring: 

“The naked ape is the sexiest primate alive” (p64). 

Are humans indeed the ‘sexiest’ of primates? How can we assess this claim? It depends, of course, on precisely how we define ‘sexiness’. 

Obviously, if beauty is in the eye of the beholder, then sexiness is located in a rather different part of the male anatomy, but equally subjective in nature. 

Thus, humans like ourselves find other humans more sexy than other primates because we have evolved to do so. A male chimpanzee, however, would likely disagree and regard a female chimpanzee as sexier. 

However, Morris presumably has something else in mind when he describes humans as the “sexiest” of primates. 

What he seems to mean is that sexuality and sexual behavior permeates the life of humans to a greater degree than for other primates. Thus, for example, he cites as evidence the extended or continuous sexual receptivity of human females, writing: 

“There is much more intense sexual activity in our own species than in any other primates” (p56). 

However, this claim is difficult to maintain once one has studied the behavior of some of our primate cousins. Thus, for example, both chimpanzees and especially bonobos, our closest relatives among extant non-human primates, are far more promiscuous than all but the sluttiest of humans.

Indeed, one might cynically suggest that what Morris had most in mind when he described humans as “the sexiest primate alive” was simply a catchy marketing soundbite that very much tapped into the zeitgeist of the era (i.e. the 1960s) and might help boost sales for his book. 

Penis Size

As further evidence for our species’ alleged “sexiness”, Morris also cites the supposedly unusually large size of the human penis, reporting: 

“The [human] male has the largest penis of any primate. It is not only extremely long when fully erect, but also very thick when compared with the penises of other species” (p80). 

This claim, namely that the human male has an unusually large penis, may originate with Morris, and has certainly since enjoyed wide currency in subsequent decades. 

Thus, competing theories have been formulated to account for the (supposedly) unusual size of our penes.

One idea is that our large penes evolved through sexual selection, more specifically female choice, with females preferring either the appearance, or the internal ‘feel’, of a large penis during coitus, and hence selecting for increased penis size among men (e.g. Mautz et al 2013; The Mating Mind: p234-6).

Of course, one might argue that the internal ‘feel’ of a large penis during intercourse is a bit late for mate choice to operate, since, by this time, the choice in question has already been made. Indeed, in cultures where the genitalia are usually covered with clothing, even exercising mate choice on the basis of the external appearance of the penis, especially an erect penis, might prove difficult, or, at the very least, socially awkward.

However, given that, in humans, most sexual intercourse is non-reproductive (i.e. does not result in conception, let alone in offspring), the idea is not entirely implausible.

This idea, namely that our large penes evolved through sexual selection, dovetails neatly with Richard Dawkins’ tentative suggestion in an endnote appended to later editions of The Selfish Gene (reviewed here) that the capacity to maintain an erection (presumably especially a large erection) without any penis bone may function as an honest signal of health in accordance with Zahavi’s handicap principle, an idea I have previously discussed here (The Selfish Gene: p307-8).

An alternative explanation for the relatively large size of our penes implicates sperm competition. On this view, human penes are designed to remove sperm deposited by rival males in the female reproductive tract by functioning as a “suction piston” during intercourse, as I discuss below (Human Sperm Competition: p170-171; Gallup & Burch 2004; Gallup et al 2004; Goetz et al 2005; Goetz et al 2007). 

Yet, in fact, according to Alan F Dixson, the human penis is not unusually long by primate standards, being roughly the same length as that of the chimpanzee (Sexual Selection and the Origins of Human Mating Systems: p64). 

Instead, Dixson reports: 

“The erect human penis is comparable in length to those of other primates, in relation to body size. Only its circumference is unusual when compared to the penes of other hominids” (Sexual Selection and the Origins of Human Mating Systems: p65). 

The human penis is unusual, then, only in its width or girth. 

As to why our penes are so wide, the answer is quite straightforward, and has little to do with the alleged ‘sexiness’ of the human species, whatever that means. 

Instead, it is a simple, if indirect, reflection of our increased brain-size.

Increased brain-size first selected for changes in the size and shape of female reproductive anatomy. This, in turn, led to changes in male reproductive anatomy. Thus, Bowman suggests: 

“As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008). 

Similarly, Robin Baker and Mark Bellis write: 

The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: Copulation, Masturbation and Infidelity: p171). 

In turn, larger vaginas select for larger penises in order to fill this larger vagina (Bowman 2008).  

Interestingly, this theory directly contradicts the claim of infamous race scientist Philippe Rushton (whose work I have reviewed here and here) that there is an inverse correlation between brain-size and penis-size, a relationship that supposedly explains race differences in brain and genital size. Thus, Rushton was infamously quoted as observing: 

“It’s a trade off, more brains or more penis. You can’t have everything.”[9]

On the contrary, this analysis suggests that, at least as between species (and presumably as between sub-species, i.e. races, as well), there is a positive correlation between brain-size and penis-size.[10]

According to Baker and Bellis, one reason male penis size tracks that of female vagina size (both being relatively large, and especially wide, in humans) is that the penis functions as, in Baker and Bellis’s words, a “suction piston” during intercourse, the repeated thrusting functioning to remove any sperm previously deposited by rival males – a form of sperm competition

Thus, they report:

“In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171). 

Interestingly, this theory – namely that the human penis functions as a sperm displacement device – although seemingly fanciful, actually explains some otherwise puzzling aspects of human coitus, such as its relatively extended duration, the male refractory period and the related Coolidge effect – i.e. why a male cannot recommence intercourse immediately after orgasm, unless perhaps with a new female (though this exception has yet to be experimentally demonstrated in humans) – since to do so would maladaptively remove his own sperm from the female reproductive tract. 

Though seemingly fanciful, this theory even has some empirical support (Gallup & Burch 2004; Goetz et al 2005; Goetz et al 2007), including some delightful experiments involving sex toys of various shapes and sizes (Gallup et al 2004). 

Morris writes:

“[Man] is proud that he has the biggest brain of all the primates, but attempts to conceal the fact that he also has the biggest penis, preferring to accord this honor falsely to the mighty gorilla” (p9). 

Actually, the gorilla, mighty though he indeed may be, has relatively small genitalia. This is on account of his polygynous, but non-polyandrous, mating system, which involves minimal sperm competition.[11]

Moreover, the largeness of our brains, in which, according to Morris, we take such pride, may actually be the cause of the largeness of our penes, for which, according to Morris, we have such shame (here, he speaks for few men). 

Thus, large brains required larger heads which, in turn, required larger vaginas in order to successfully birth larger-headed babies. This in turn selected for larger penises to fill the larger vagina. 

In short, the large size, or rather large girth/width, of our penes has less to do with our being the “sexiest primate” and more to do with our being the brainiest.

Female Breasts

In addition to his discussion of human penis size, Morris also argues that various other features of human anatomy that are not usually associated with sex nevertheless evolved, in part, due to their role in sexual signaling. These include our earlobes (p66-7), everted lips (p68-70) and, tentatively and rather bizarrely, perhaps even our large fleshy noses (p67). 

He makes the most developed and persuasive case, however, in respect of another physiological peculiarity of the human species, and of human females in particular, namely the female breasts.

Thus, Morris argues: 

“For our species, breast design is primarily sexual rather than maternal in function” (p106). 

“The evolution of protruding breasts of a characteristic shape appears to be yet another example of sexual signalling” (p70). 

As evidence, he cites the differences in shape between women’s breasts and both the breasts of other primates and the design of baby bottles (p93). In short, the shape of human breasts does not seem ideally conducive to nursing alone. 

The notion that breasts have a secondary function as sexual advertisements is indeed compelling. In most other mammals, large breasts develop only during pregnancy, but human breasts are permanent, developing at puberty, and, except during pregnancy and lactation, composed predominantly of fat not milk (see Møller et al 1995; Manning et al 1997; Havlíček et al 2016). 

On the other hand, it is difficult to envisage how breasts ever first became co-opted as a sexually-selected ornament. 

After all, the presence of developed breasts on a female would originally, as among other primates, have indicated that the female in question was pregnant, and hence infertile. There would therefore initially have been strong selection pressure among males against ever finding breasts sexually attractive, since it would lead to their pursuing infertile women whom they could not possibly impregnate. As a consequence, there would be strong selection against a female ever developing permanent breasts, since it would result in her being perceived as currently infertile and hence unattractive to males.

How then did breasts ever make the switch to a sexually attractive, sexually-selected ornament? This is what George Francis, at his blog, ‘Anglo Reaction’, terms the breast paradox.[12]

Morris does not address this not insignificant problem. However, he does suggest that two other human traits unique among primates may have facilitated the process. 

Our so-called nakedness (i.e. relative hairlessness as compared to other mammals), the trait that furnished Morris’s book with its title, and Morris himself with his preferred name for our species, is the first of these traits. 

“Swollen breast-patches in a shaggy-coated female would be far less conspicuous as signalling devices, but once the hair has vanished they would stand out clearly” (p70-1). 

Secondly, Morris argues that our bipedalism (i.e. the fact we walk on two legs) and resulting vertical posture, necessarily put the female reproductive organs out of sight underneath a woman when she adopts a standing position, and hence generally out of the sight of potential mates. There was therefore, Morris suggests, a need for some frontal sexual-signaling. 

This, he argues, was further necessitated by what he argues is our species’ natural preference for ventro-ventral (i.e. missionary position) intercourse. 

In particular, Morris argues that human female breasts evolved in order to mimic the appearance of the female buttocks, a form of what he terms ‘self-mimicry’. 

“The protuberant, hemispherical breasts of the female must surely be copies of the fleshy buttocks” (p76). 

Everted Lips 

Interestingly, he makes a similar argument in respect of another trait of humans not shared by other extant primates – namely, our everted lips.

The word ‘everted’ refers to the fact that our lips are turned outwards, as is easily perceived by comparing human lips with the much thinner lips of our closest non-human relatives.

Again, this seems intuitively plausible, since, like female breasts, lips do indeed seem to be a much-sexualized part of the human anatomy, at least in western societies, and in at least some non-western cultures as well, if erotic art is to be taken as evidence.[13]

These everted lips, he argues, evolved to mimic the appearance of the female labia.

As with Morris’s idea that female breasts evolved to mimic the appearance of female buttocks, the idea that our lips, and women’s use of lipstick, is designed to imitate the appearance of the female sexual organs has been much mocked.[14]

However, the similarity in appearance of the labia and human lips can hardly be doubted. After all, it is even attested to in the very etymology of the word ‘labia’.

Of course, everted lips reach their most extreme form, among extant sub-species of hominid, in black Africans. This, Morris argues, is because: 

“If climatic conditions demand a darker skin, then this will work against the visual signalling capacity of the lips by reducing their colour contrast. If they really are important as visual signals, then some kind of compensating development might be expected, and this is precisely what seems to have occurred, the negroid lips maintaining their conspicuousness by becoming larger and more protuberant. What they have lost in colour contrast, they have made up for in size and shape” (p69-70).[15]

Thus, rejecting the politically-incorrect notion that black Africans are, as a race, somehow more primitive than other humans, Morris instead emphasizes the fact that, in respect of this trait (i.e. everted lips), they are actually the most differentiated from non-human primates.  

Thus, all humans, compared to non-human primates, have everted lips, but black African lips are the most everted. Therefore, Morris concludes, using the word ‘primitive’ in its special phylogenetic sense: 

“Anatomically, these negroid characters do not appear to be primitive, but rather represent a positive advance in the specialization of the lip region” (p70).

In other words, whereas whites and Asians may be more advanced than blacks when it comes to intelligence, brain-size, science, technology and building civilizations, when it comes to everted lips, black Africans have us all beaten! 

Female Orgasm

Morris also discusses the function of the female orgasm, a topic which has subsequently been the subject of much speculation and no little controversy among evolutionists.  

Again, Morris suggests that humans’ unusual vertical posture, brought on by our bipedal means of locomotion, may have been central to the evolution of this trait. 

Thus, if a female were to walk off immediately after sexual intercourse had occurred, then: 

“Under the simple influence of gravity the seminal fluid would flow back down the vaginal tract and much of it would be lost” (p79).  

This obviously makes successful impregnation less likely. As a result, Morris concludes: 

“There is therefore a great advantage in any reaction that tends to keep the female horizontal when the male ejaculates and stops copulating” (p79). 

The chief adaptive function of the female orgasm therefore, according to Morris, is the tiredness, and perhaps post-coital tristesse, that immediately follows orgasm, and motivates the female experiencing these emotions to remain in a horizontal position even after intercourse has ended, and hence retain the male ejaculate within her reproductive tract. 

“The violent response of female orgasm, leaving the female sexually satiated and exhausted has precisely this effect” (p79).[16]

However, the main problem with Morris’s theory is that it predicts that female orgasm should be confined to humans, since, at least among extant primates, we represent the only bipedal ape.  

Morris does indeed argue that the female orgasm is, like our nakedness, bipedal locomotion and large brains, an exclusively human trait, describing how, among most, if not all, non-human primates: 

“At the end of a copulation, when the male ejaculates and dismounts, the female monkey shows little sign of emotional upheaval and usually wanders off as if nothing had happened” (p79). 

Unfortunately for Morris’s theory, however, evidence has subsequently accumulated that some non-human (and non-bipedal) female primates do indeed seem to sometimes experience responses seemingly akin to orgasm during copulation. 

Thus, Alan Dixson reports: 

“Female orgasm is not confined to Homo sapiens. Putatively homologous responses [have] been reported in a number of non-human primates, including stump-tail and Japanese Macaques, rhesus monkeys and chimpanzees… Pre-human ancestors of Homo sapiens, such as the australopithecines, probably possessed a capacity to exhibit female orgasm, as do various extant ape and monkey species. The best documented example concerns the stump tailed macaque (Macaca arctoides), in which orgasmic uterine contractions have been recorded during female-female mounts… as well as during copulation… De Waal… estimates that female stump-tails show their distinctive ‘climax face’ (which correlates with the occurrence of uterine contractions) once in every six copulations. Vaginal spasms were noted in two female rhesus monkeys as a result of extended periods of stimulation (using an artificial penis) by an experimenter… Likewise, a female chimpanzee exhibited rhythmical vaginal contractions, clitoral erection, limb spasms, and body tension in response to manual stimulation of its genitalia… Masturbatory behaviour, accompanied by behavioural and physiological responses indicative of orgasm, has also been noted in Japanese macaques… and chimpanzees” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Thus, in relation to Morris’s theory, Dixson concludes that the theory lacks “comparative depth” because: 

Monkeys and apes exhibit female orgasm in association with dorso-ventral copulatory postures and an absence of post-mating rest periods” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Certainly, female orgasm, unlike male orgasm, is hardly a prerequisite for successful impregnation. 

Thus, the American physician Robert Latou Dickinson, in his book Human Sex Anatomy (1933), reports that, in a study of a thousand women who attended his medical practice afflicted with so-called ‘frigidity’ (i.e. they were incapable of orgasmic response during intercourse): 

The frigid were not notably infertile, having the expected quota of living children, and somewhat less than the average incidence of sterility” (Human Sex Anatomy: p92). 

Thus, as argued by Donald Symons in his groundbreaking The Evolution of Human Sexuality (which I have reviewed here), the most parsimonious theory of the evolution of female orgasm is that it represents simply a non-adaptive byproduct of male orgasm, which is, of course, itself adaptive (see Sherman 1989; see also Elisabeth Lloyd’s The Case of the Female Orgasm: Bias in the Science of Evolution).

It thus represents, if you like, the female equivalent of male nipples – only more fun.

Hymen

Interestingly, Morris also hypothesizes regarding the evolutionary function of another peculiarity of human female reproductive anatomy which, in contrast to the controversy regarding the evolutionary function, if any, of the female orgasm and clitoris (and of the female breasts), has received surprisingly scant attention from evolutionists – namely, the hymen. 

In most mammals, Morris reports, “it occurs as an embryonic stage in the development of the urogenital system” (p82). However, only in humans, he reports, is it, when not ruptured, retained into adulthood. 

Regarding the means by which it evolved, the trait is then, Morris concludes, like our large brains, upright posture and hairlessness, “part of the naked ape’s neoteny” (p82). 

However, as with our hairlessness, neoteny is only the means by which this trait was retained into adulthood among humans, not the evolutionary reason for its retention.  

In other words, he suggests, the hymen, like other traits retained into adulthood among humans, must serve some evolutionary function. 

What is this evolutionary function? 

Morris suggests that, by making first intercourse painful for females, it deters young women from engaging in intercourse too early, and hence risking pregnancy, without first entering a relationship (‘pair-bond’) of sufficient stability to ensure that male parental investment, and provisioning, will be forthcoming (p73). 

However, pain experienced during intercourse occurs rather too late to deter first intercourse, because, by the time this pain is experienced, intercourse has already occurred. 

Of course, given our species’ unique capacity for speech and communication, the pain experienced during first intercourse could be communicated to young virginal women through conversation with other non-virginal women who had already experienced first intercourse.  

However, this would be an unreliable method of inducing fear and avoidance regarding first intercourse, especially given the sort of taboos regarding discussion of sexual activities which are common in many cultures. 

At any rate, why would natural, or sexual, selection not instead simply directly select for fear and anxiety regarding first intercourse – i.e. a psychological rather than a physiological adaptation? After all, as evolutionary psychologists and sociobiologists have convincingly demonstrated, our psychology is no less subject to natural selection than is our physiology. 

Although, as already noted, the evolutionary function, if any, of the female hymen has received surprisingly little attention from evolutionists, I can think of at least three rival hypotheses regarding the evolutionary significance of the hymen. 

First, it may have evolved among humans as a means of advertising to prospective suitors a prospective bride’s chastity, and hence reassuring the suitor of the paternity of offspring that subsequently result and encouraging paternal investment in offspring. 

This would, in turn, increase the perceived attractiveness of the female in question, and help secure her a better match with a higher-status male, and hence increase her own reproductive success.

Thus, it is notable that, in many cultures, prospective brides are inspected for virginity, a so-called virginity test, sometimes by the prospective mother-in-law or another older woman, before being considered marriageable and accepted as brides. 

Alternatively, and more prosaically, the hymen may simply function to protect against infection, by preventing dirt and germs from entering a woman’s body by this route. 

This, of course, would raise the question as to why, at least according to Morris, the trait is retained into sexual maturity only among humans.  

Actually, however, as with his claim that the female orgasm is unique to humans, Morris’s claim that only humans retain the hymen into sexual maturity is disputed by other sources. Thus, for example, Catherine Blackledge reports: 

Hymens, or vaginal closure membranes or vaginal constrictions, as they are often referred to, are found in a number of mammals, including llamas, guinea-pigs, elephants, rats, toothed whales, seals, dugongs, and some primates, including some species of galagos, or bushbabys, and the ruffed lemur” (The Story of V: p145). 

Finally, even more prosaically, the hymen may simply represent a nonadaptive vestige of the developmental process, or a nonadaptive by-product of our species’ neoteny. 

This would be consistent with the apparent variation with which the trait presents itself, suggesting that it has not been subject to strong selection pressure that has weeded out suboptimal variations. 

This then would appear to be the most parsimonious explanation. 

Zoological Nomenclature 

The works on human ethology of both Robert Ardrey and Konrad Lorenz attracted much attention and no little controversy in their day. Indeed, they perhaps attracted even more controversy than Morris’s own ‘The Naked Ape’, not least because they tended to place greater emphasis on humankind’s capacity, and alleged innate proclivity, towards violence. 

In contrast, Morris’s own work, placing less emphasis on violence, and more on sex, perhaps jibed better with the zeitgeist of the era, namely the 1960s, with its hippy exhortations to ‘make love not war’. 

Yet, although all these works were first published at around the same time, in the mid- to late-sixties (though Ardrey continued publishing books on this subject into the 1970s), Morris’s ‘The Naked Ape’ seems to be the only one of these books that remains widely read, widely known and still in print to this day. 

Partly, I suspect, this reflects its brilliant and provocative title, which works on several levels, scientific and literary.  

Morris, as we have seen, justifies referring to humans by this perhaps unflattering moniker on zoological grounds.  

Certainly, he acknowledges that humans possess many other exceptional traits that distinguish us from all other extant apes, and indeed all other extant mammals. 

Thus, we walk on two legs, use and make tools, have large brains and communicate via a spoken language. Thus, the zoologist could refer to us by any number of descriptors – “the vertical ape, the tool-making ape, the brainy ape” are a few of Morris’s own suggestions (p41).  

But, he continues, adopting the disinterested detachment of the proverbial alien zoologist: 

These were not the first things we noticed. Regarded simply as a zoological specimen in a museum, it is the nakedness that has the immediate impact” (p41). 

This name has, Morris observes, several advantages, including “bringing [humans] into line with other zoological studies”, emphasizing the zoological approach, and hence challenging human vanity. 

Thus, he cautions: 

The naked ape is in danger of being dazzled by [his own achievements] and forgetting that beneath the surface gloss he is still very much a primate. (‘An ape’s an ape, a varlet’s a varlet, though they be clad in silk or scarlet’). Even a space ape must urinate” (p23). 

Thus, the title works on another, metaphorical level too, which likewise contributes to its power.  

The title ‘Naked Ape’ promises to reveal, if you like, the ‘naked’ truth about humanity—to strip humanity down in order to reveal the naked truth that lies beneath the façade and finery. 

Morris’s title reduces us to a zoological specimen in the laboratory, stripped naked on the laboratory table, for the purposes of zoological classification and dissection. 

Interestingly, humans have historically liked to regard ourselves as superior to other animals in part precisely because we alone clothe ourselves. 

Thus, besides Adam and Eve, it was only primitive tropical savages who went around in nothing but a loincloth, and they were disparaged as uncivilized precisely on this account. 

Yet even tropical savages wore loincloths. Indeed, clothing, in some form, is sometimes claimed to be a human universal

Animals, on the other hand, go completely unclothed – or so we formerly believed. 

But Morris turns this reasoning on its head. In the zoological sense, it is humans who are the naked ones, largely bereft of the hair that covers the bodies of other apes. 

Stripping humanity down in this way, Morris reveals the naked truth that, beneath the finery and façade of civilization, we are indeed an animal – an ape, and a naked one at that. 

The power of Morris’s chosen title ensures that, even if, like all science, his book has quickly dated, the title alone has stood the test of time and will, I suspect, be remembered, and employed as a descriptor of the human species, long after Morris himself, and the books he authored, are forgotten and cease to be read. 

Endnotes

[1] In fact, as I discuss in a later section of this review, it is possible that the female hymen evolved through just such a process, namely as a means of advertising female virginity and premarital chastity (and perhaps implying post-marital fidelity), and hence as a paternity assurance mechanism, which benefited the female by helping secure male parental investment, provisioning and hypergamy.

[2] Morris is certainly right that anthropologists have overemphasized the exotic and unfamiliar (“bizarre mating customs, strange kinship systems, or weird ritual procedures”, as Morris puts it). Partly, this is simply because, when first encountering an alien culture, it is the unfamiliar differences that invariably stand out, whereas the similarities are often the very things which we tend to take for granted.
Thus, for example, on arriving in a foreign country, we are often struck by the fact that everyone speaks a foreign unintelligible language. However, we often take for granted the more remarkable fact that all cultures around the world do indeed have a spoken language, and also that all languages supposedly even share in common a universal grammar.
However, anthropologists have also emphasized the alien and bizarre for other reasons, not least to support theories of radical cultural malleability, sometimes almost to the verge of outright fabrication (e.g. Margaret Mead’s studies in Samoa).

[3] It is true that there has been some significant human evolution since the dawn of agriculture, notably the evolution of lactase persistence in populations with a history of dairy agriculture. Indeed, as Cochran and Harpending emphasize in their book The 10,000 Year Explosion, far from evolution having stopped at the dawn of agriculture or the rise of ‘civilization’, it has in fact sped up, as a natural reflection of the rapid change in environmental conditions that resulted. Thus, as Nicholas Wade concludes in A Troublesome Inheritance, much human evolution has been “recent, copious and regional”, leading to substantial differentiation between populations (i.e. race differences), including in psychological traits such as intelligence. Nevertheless, despite such tinkering, the core adaptations that identify us as a species were undoubtedly molded in ancient prehistory, and are universal across the human species.

[4] However, it is indeed important to recognize that the lifestyle of our own ancestors was not necessarily identical to that of those few extant hunter-gatherer groups that have survived into modern times, not least because the latter tend to be concentrated in marginal and arid environments (e.g. the San people of the Kalahari Desert, Eskimos of the Arctic region, Aboriginals of the Australian outback), with those formerly inhabiting more favorable environments having either themselves transitioned to agriculture or else been displaced or absorbed by more advanced invading agriculturalists with higher population densities and superior weapons and other technologies.

[5] This passage is, of course, sure to annoy feminists (always a good thing), and is likely to be disavowed even by many modern evolutionary psychologists since it relies on a rather crude analogy. However, Morris acknowledges that, since “’hunting’… has now been replaced by ‘working‘”: 

The males who set off on their daily working trips are liable to find themselves in heterosexual groups instead of the old all-male parties. All too often it [the pair bond] collapses under the strain” (p81). 

This factor, Morris suggests, explains the prevalence of marital infidelity. It may also explain the recent hysteria, and accompanying witch-hunts, regarding so-called ‘sexual harassment’ in the workplace.
Relatedly, and also likely to annoy feminists, Morris champions the then-popular ‘man the hunter’ theory of hominid evolution, which posited that the key development in human evolution, and in the development of human intelligence in particular, was the switch from a largely, if not wholly, herbivorous diet and lifestyle to one based largely on hunting and the consumption of meat. On this view, it was the cognitive demands that hunting placed on humans that selected for increased intelligence, while the nutritional value of meat made possible increases in highly metabolically expensive brain tissue.
This theory has since fallen into disfavor, seemingly primarily because it gives the starring role in human evolution to men, who do most of the hunting, and relegates women to a mere supporting role. It hence runs counter to the prevailing feminist zeitgeist.
The main substantive argument given against the ‘man the hunter theory’ is that other carnivorous mammals (e.g. lions, wolves) adapted to carnivory without any similar increase in brain-size or intelligence. Yet Morris actually has an answer to this objection.
Our ancestors, fresh from the forests, were relative latecomers to carnivory. Therefore, Morris contends, had we sought to compete with tigers and wolves by mimicking them (i.e. growing our fangs and claws instead of our brains) we would inevitably have been playing a losing game of evolutionary catch-up. 

Instead, an entirely new approach was made, using artificial weapons instead of natural ones, and it worked” (p22).

However, this theory fails to explain how female intelligence evolved. One possibility is that increases in female intelligence are an epiphenomenal byproduct of selection for male intelligence, rather like the female equivalent of male nipples.
On this view, men would be expected to have higher intelligence than women, just as male nipples (and breasts) are smaller than female nipples, and the male penis is bigger than the female clitoris. That adult men have greater intelligence than adult women is indeed the conclusion of a recent controversial theory, though the difference is very modest (Lynn 1999). There is also evidence that this sexual division of labour between hunting and gathering led to sex differences in spatio-visual intelligence (Eals & Silverman 1994).

[6] Another difference from modern evolutionary psychologists derives from Morris’s ethological approach, which involves a focus on human-typical behaviour patterns. For example, he discusses the significance of body language and facial expressions, such as smiling, which is supposedly homologous with an appeasement gesture (baring clenched teeth, aka a ‘fear grin’) common to many primates, and staring, which represents a form of threat across many species.

[7] Interestingly, however, he acknowledges that this statement does not apply to all human races. Thus, he observes: 

Negroes have undergone a real as well as an apparent hair loss” (p42). 

Thus, it seems blacks, unlike Caucasians, have fewer hairs on their body than do chimpanzees. This fact is further evidence that, contrary to the politically correct orthodoxy, race differences are real and important, though this fact is, of course, played down by Morris and other popular science writers.

[8] Edward O Wilson, for example, in Sociobiology: The New Synthesis (which I have reviewed here) dismisses aquatic ape theory, as then championed by Elaine Morgan in The Descent of Woman, as feminist-inspired pop-science “contain[ing] numerous errors” and as being “far less critical in its handling of the evidence than the earlier popular books”, including, incidentally, that of Morris, who is mentioned by name in the same paragraph (Sociobiology: The New Synthesis: p29).

[9] Actually, I suspect this infamous quotation may be apocryphal, or at best a misconstrued joke. Certainly, while I think Rushton’s theory of race differences (which he calls ‘differential K theory’) is flawed, as I explain in my review of his work, there is nothing in it to suggest a direct trade-off between penis-size and brain-size. Indeed, one problem with Rushton’s theory, or at least his presentation of it, is that he never directly explains how traits such as penis-size actually relate to r/K selection in the first place.
The quotation is usually traced to a hit piece in Rolling Stone, a leftist hippie rag with a reputation for low editorial standards and fake news. However, Jon Entine, in his book on race differences in athletic ability, instead traces it to a supposed interview between Rushton and Geraldo Rivera broadcast on the ‘Geraldo’ show in 1989 (Taboo: Why Black Athletes Dominate Sports: p74).
Interestingly, one study has indeed reported that there is a “demonstrated negative evolutionary relationship”, not between brain-size and penis-size, but rather between brain-size and testicle size, if only on account of the fact that each contains “metabolically expensive tissues” (Pitnick et al 2006).

[10] Interestingly, Baker and Bellis attribute race differences in penis-size, not to race differences in brain-size, but rather to race differences in birth weight. Thus, they conclude:

Racial differences in size of penis (Mongoloid < Caucasoid < Negroid…) reflects racial differences in birth weight… and hence presumably, racial differences in size of vagina” (Human Sperm Competition: p171). 

[11] In other words, a male silverback gorilla may mate with the multiple females in his harem, but each of the females in his harem likely has sex with only one male, namely that silverback. This means that sperm from rival males are rarely simultaneously present in the same female’s oviduct, resulting in minimal levels of sperm competition, which is known to select for larger testicles in particular, and often for more elaborate penes as well.

[12] An alternative theory for the evolution of permanent fatty breasts in women is that they function analogously to camel humps, i.e. as a storehouse of nutrients to guard against, and provide reserves in the event of, future scarcity or famine. On this view, the sexually dimorphic presentation (i.e. the fact that fatty breasts are largely restricted to women) might reflect the caloric demands of pregnancy. Indeed, this might explain why women have higher levels of fat throughout their bodies. (For a recent review of rival theories for human breast evolution see Pawłowski & Żelaźniewicz 2021.)

[13] However, to be pedantic, this phraseology is perhaps problematic, since to say that breasts and lips are ‘sexualized’ in western, and at least some non-western, cultures implicitly presupposes that they are not already inherently sexual parts of our anatomy by virtue of biology, which is, of course, precisely what Morris is arguing. 

[14] For example, if I recall correctly, the extremely annoying, left-wing, 1980s-era British comedian Ben Elton once commented in one of his stand-up routines that the male anthropologist (i.e. Morris, who is actually not an anthropologist, at least not by training) who came up with this idea (namely, that lips and lipstick mimicked the appearance of the labia) had obviously never seen a vagina in his life. He also, if I recall correctly, attributed this theory to the supposedly male-dominated, androcentric nature of the field of anthropology – an odd notion, not only because Morris is not an anthropologist by training, but also because cultural anthropology is, in fact, one of the most leftist-dominated, feminist-infested, politically correct fields in the whole of academia, this side of ‘gender studies’, which, in the present, politically correct world of academia, is saying a great deal.

[15] To test this theory, we might look at other relatively dark-skinned, but non-Negroid, populations. Here, the theory receives, at best, only partial support. Thus, Australian Aboriginals, another dark-skinned but unrelated group, do indeed tend to have quite large lips. However, these lips are not especially everted. 
On the other hand, the dark-skinned Dravidian populations of Southern India are not generally especially large-lipped, but are rather quite Caucasoid in facial morphology, and indeed, like the generally lighter-complexioned, Indo-European speaking, ‘Aryan’ populations of northern India, were generally classified as ‘Caucasoid’ by most early-twentieth century racial anthropologists.

[16] This theory is rather simpler, and has hence always struck me as more plausible, than the more elaborate, but also more widely championed so-called ‘upsuck hypothesis’, whereby female orgasm is envisaged as somehow functioning to suck semen deeper into the cervix. This idea is largely based on a single study involving two experiments on a single subject (Fox et al 1970). However, two other studies failed to produce any empirical support for the theory (Grafenberg 1950; Masters & Johnson 1966). Baker and Bellis’s methodologically problematic work on what they call ‘flowback’ provides, at best, ambivalent evidence (Baker & Bellis 1993). For detailed critique, see Dixson’s Sexual Selection and the Origins of Human Mating Systems: p74-6.

References 

Baker & Bellis (1993) Human sperm competition: ejaculate manipulation by females and a function for the female orgasm. Animal Behaviour 46:887–909. 
Bowman EA (2008) Why the human penis is larger than in the great apes. Archives of Sexual Behavior 37(3): 361. 
Eals & Silverman (1994) The Hunter-Gatherer theory of spatial sex differences: Proximate factors mediating the female advantage in recall of object arrays. Ethology and Sociobiology 15(2): 95-105.
Fox et al (1970) Measurement of intra-vaginal and intra-uterine pressures during human coitus by radio-telemetry. Journal of Reproduction and Fertility 22: 243–251. 
Gallup et al (2004) The human penis as a semen displacement device. Evolution and Human Behavior 24: 277–289. 
Gallup & Burch (2004). Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology 2:12-23. 
Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: A preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences 38:749-763 
Goetz et al (2007) Sperm Competition in Humans: Implications for Male Sexual Psychology, Physiology, Anatomy, and Behavior. Annual Review of Sex Research 18:1. 
Grafenberg (1950) The role of urethra in female orgasm. International Journal of Sexology 3:145–148. 
Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures, Evolution and Human Behavior 38(2): 217–226. 
Lynn (1999) Sex differences in intelligence and brain size: A developmental theory. Intelligence 27(1):1-12.
Manning et al (1997) Breast asymmetry and phenotypic quality in women, Ethology and Sociobiology 18(4): 223–236. 
Masters & Johnson (1966) Human Sexual Response (Boston: Little, Brown, 1966).
Mautz et al (2013) Penis size interacts with body shape and height to influence male attractiveness, Proceedings of the National Academy of Sciences 110(17): 6925–30.
Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success, Ethology and Sociobiology 16(3): 207-219. 
Pawłowski & Żelaźniewicz (2021) The evolution of perennially enlarged breasts in women: a critical review and a novel hypothesis. Biological reviews of the Cambridge Philosophical Society 96(6): 2794-2809. 
Pitnick et al (2006) Mating system and brain size in bats. Proceedings of the Royal Society B: Biological Sciences 273(1587): 719-24. 

Pierre van den Berghe’s ‘The Ethnic Phenomenon’: Ethnocentrism and Racism as Nepotism Among Extended Kin

Pierre van den Berghe, The Ethnic Phenomenon (Westport: Praeger 1987) 

Ethnocentrism is a pan-human universal. Thus, a tendency to prefer one’s own ethnic group over and above other ethnic groups is, ironically, one thing that all ethnic groups share in common. 

In ‘The Ethnic Phenomenon’, pioneering sociologist-turned-sociobiologist Pierre van den Berghe attempts to explain this universal phenomenon. 

In the process, he not only provides a persuasive ultimate evolutionary explanation for the universality of ethnocentrism, but also produces a remarkable synthesis of scholarship that succeeds in incorporating virtually every aspect of ethnic relations as they have manifested themselves throughout history and across the world, from colonialism, caste and slavery to integration and assimilation, within this theoretical and explanatory framework. 

Ethnocentrism as Nepotism? 

At the core of Pierre van den Berghe’s theory of ethnocentrism and ethnic conflict is the sociobiological theory of kin selection. According to van den Berghe, racism, xenophobia, nationalism and other forms of ethnocentrism can ultimately be understood as kin-selected nepotism, in accordance with biologist William D Hamilton’s theory of inclusive fitness (Hamilton 1964a; 1964b). 

According to inclusive fitness theory (also known as kin selection), organisms evolved to behave altruistically towards their close biological kin, even at a cost to themselves, because close biological kin share genes in common with one another by virtue of their kinship, and altruism towards close biological kin therefore promotes the survival and spread of these genes. 

Van den Berghe extends this idea, arguing that humans have evolved to sometimes behave altruistically towards, not only their close biological relatives, but also sometimes their distant biological relatives as well – namely, members of the same ethnic group as themselves. 

Thus, van den Berghe contends: 

Racial and ethnic sentiments are an extension of kinship sentiments [and] ethnocentrism and racism are… extended forms of nepotism” (p18). 

Thus, while social scientists, and social psychologists in particular, rightly emphasize the ubiquity, if not universality, of in-group preference, namely a preference for and favouring of individuals of the same social group as oneself, they also, in my view, rather underplay the extent to which the group identities that lead to the most conflict, animosity, division and discrimination, not only in the contemporary west but throughout history and across the world, and that are also seemingly most impervious to resolution, are ethnic identities.

Thus, divisions such as those between social classes, or the sexes, different generations, or between members of different political factions, or youth subcultures (e.g. between ‘mods’ and ‘rockers’), or supporters of different sports teams, may indeed lead to substantial conflict, at least in the short-term, and are often cited as quintessential exemplars of ‘tribal’ identity and conflict.

However, the most violent and intractable of group conflicts seem to me to be those between ethnic groups, namely a form of group identity that is passed down in families, from parent to offspring, in a quasi-biological fashion, based on a perception of shared kinship, and in respect of which people are usually expected to marry endogamously.

In contrast, aspects of group identity that vary even between individuals within a single family, including those that are freely chosen by individuals, tend to be somewhat muted in intensity, perhaps precisely because most people share bonds with close family members of a different group identity.

Thus, there has never, to my knowledge, been a civil war arising from conflict between the sexes, or between supporters of one or another football team.[1]

Ethnic Groups as Kin Groups?

Before reading van den Berghe’s book, I was skeptical regarding whether the degree of kinship shared among co-ethnics would ever be sufficient to satisfy Hamilton’s rule, whereby, for altruism to evolve, the cost of the altruistic act to the altruist, measured in terms of reproductive success, must be outweighed by the benefit to the recipient, also measured in terms of reproductive success, multiplied by the degree of relatedness of the two parties (Brigandt 2001; cf. Salter 2008; see also On Genetic Interests). 

Thus, Brigandt (2001) takes van den Berghe to task for his formulation of what the latter catchily christens “the biological golden rule”, namely: 

Give unto others as they are related unto you” (p20).[2]

However, contrary to both critics of his theory (e.g. Brigandt 2001) and others developing similar ideas (e.g. Rushton 2005; Salter 2000), van den Berghe is actually agnostic on the question of whether ethnocentrism is ever actually adaptive in modern societies, where the shared kinship of large nations or ethnic groups is, as van den Berghe himself readily acknowledges, “extremely tenuous at best” (p243). Thus, he concedes: 

Clearly, for 50 million Frenchmen or 100 million Japanese, any common kinship that they may share is highly diluted … [and] when 25 million African-Americans call each other ‘brothers’ and ‘sisters’, they know that they are greatly extending the meaning of these terms” (p27).[3]

Instead, van den Berghe suggests that nationalism and racism may reflect the misfiring of a mechanism that evolved when our ancestors still lived in small kin-based groups of hunter-gatherers that represented little more than extended families (p35; see also Tooby and Cosmides 1989; Johnson 1986). 

Thus, van den Berghe explains: 

Until the last few thousand years, hominids interacted in relatively small groups of a few score to a couple of hundred individuals who tended to mate with each other and, therefore, to form rather tightly knit groups of close and distant kin” (p35). 

Therefore, in what evolutionary psychologists now call the environment of evolutionary adaptedness or EEA:

The natural ethny [i.e. ethnic group] in which hominids evolved for several thousand millennia probably did not exceed a couple of hundred individuals at most” (p24). 

Thus, van den Berghe concludes: 

The primordial ethny is thus an extended family: indeed, the ethny represents the outer limits of that inbred group of near or distant kinsmen whom one knows as intimates and whom therefore one can trust” (p25). 

On this view, ethnocentrism was adaptive when we still resided in such groups, where members of our own clan or tribe were indeed closely biologically related to us, but is often maladaptive in contemporary environments, where our ethnic group may include literally millions of people. 

Another not dissimilar theory has it that racism in particular might reflect the misfiring of an adaptation that uses phenotype matching, in particular physical resemblance, as a form of kin recognition.

Thus, Richard Dawkins in his seminal The Selfish Gene (which I have reviewed here), cautiously and tentatively speculates: 

Conceivably, racial prejudice could be interpreted as an irrational generalization of a kin-selected tendency to identify with individuals physically resembling oneself, and to be nasty to individuals different in appearance” (The Selfish Gene: p100). 

Certainly, van den Berghe takes pains to emphasize that ethnic sentiments are vulnerable to manipulation – not least by exploitative elites who co-opt kinship terms such as ‘motherland’, ‘fatherland’ and ‘brothers-in-arms’ to encourage self-sacrifice, especially during wartime (p35; see also Johnson 1987; Johnson et al 1987; Salmon 1998). 

However, van den Berghe cautions, “Kinship can be manipulated but not manufactured [emphasis in original]” (p27). Thus, he observes how: 

Queen Victoria could cut a motherly figure in England; she even managed to proclaim her son the Prince of Wales; but she could never hope to become anything except a foreign ruler of India; [while] the fiction that the Emperor of Japan is the head of the most senior lineage descended from the common ancestor of all Japanese might convince the Japanese peasant that the Emperor is an exalted cousin of his, but the myth lacks credibility in Korea or Taiwan” (p62-3). 

This suggests that the European Union, while it may prove successful as a customs union, single market and even an economic union, and while integration in other non-economic spheres may also succeed, will likely never command the sort of loyalty and allegiance that a nation-state holds over its people, including, sometimes, the willingness of men to fight and lay down their lives for its sake. This is because its members come from many different cultures and ethnicities, and indeed speak many different languages. 

For van den Berghe, national identity cannot be rooted in anything other than a perception of shared ancestry or kinship. Thus, he observes: 

Many attempts to adopt universalistic criteria of ethnicity based on legal citizenship or acquisition of educational qualifications… failed. Such was the French assimilation policy in her colonies. No amount of proclamation of Algérie française could make it so” (p27). 

Thus, so-called civic nationalism, whereby national identity is based, not on ethnicity, but rather, supposedly, on a shared commitment to certain common values and ideals (democracy, the ‘rule of law’ etc.), as encapsulated by the notion of America as a ‘proposition nation’, is, for van den Berghe, a complete non-starter. 

Yet this is today regarded as the sole basis for national identity and patriotic feeling that is recognised as legitimate, not only in the USA, but also in all other contemporary western polities, where any assertion of racial nationalism or a racially-based or ethnically-based national identity is, at least for white people, anathema and beyond the pale. 

Moreover, due to the immigration policies of previous generations of western political leaders, policies that largely continue today, all contemporary western polities are now heavily multi-ethnic and multi-racial, such that any sense of national identity based on race or ethnicity is arguably untenable, as it would necessarily exclude a large proportion of their populations.

On the other hand, however, van den Berghe’s reasoning also suggests that the efforts of some white nationalists to construct a pan-white, or pan-European, ethnic identity are also, like the earlier efforts of Japanese imperialist propagandists to create a pan-Asian identity, and of Marcus Garvey’s UNIA to construct a pan-African identity, likely to end in failure.[4]

Racism vs Ethnocentrism 

Whereas ethnocentrism is therefore universal, adaptive and natural, van den Berghe denies that the same can be said for racism: 

There is no evidence that racism is inborn, but there is considerable evidence that ethnocentrism is” (p240). 

Thus, van den Berghe concludes: 

The genetic propensity is to favor kin, not those who look alike” (p240).[5]

As evidence, he cites:

The ease with which parental feelings take precedence over racial feeling in cases of racial admixture” (p240). 

In other words, parents who produce mixed-race offspring with partners of other races often seemingly love and care for the resulting offspring just as intensely as do parents whose offspring are of the same race as themselves.[6]

Thus, cultural, rather than racial, markers are typically adopted to distinguish ethnic groups (p35). These include: 

  • Clothing (e.g. hijabs, turbans, skullcaps);
  • Bodily modification (e.g. tattoos, circumcision); and 
  • Behavioural criteria, especially language and dialect (p33).

Bodily modification and language represent particularly useful markers because they are difficult to fake: bodily modification because it is permanent and hence represents a costly commitment to the group (in accordance with Zahavi’s handicap principle); and language/dialect because it is usually acquirable only during a critical period in childhood, after which it is generally not possible to achieve fluency in a second language without retaining a noticeable accent. 

In contrast, racial criteria, as a basis for group affiliation, are, van den Berghe reports, actually quite rare: 

Racism is the exception rather than the rule in intergroup relations” (p33). 

Racism is also a decidedly modern phenomenon. 

This is because, prior to recent technological advances in transportation (e.g. ocean-going ships, aeroplanes), members of different races (i.e. groups distinguishable on the basis of biologically inherited physiological traits such as skin colour, nose shape, hair texture etc.) were largely separated from one another by the very geographic barriers (e.g. deserts, oceans, mountain ranges) that reproductively isolated them from one another and hence permitted their evolution into distinguishable races in the first place. 

Moreover, when different races did make contact, then, in the absence of strict barriers to exogamy and miscegenation (e.g. the Indian caste system), racial groups typically interbred with one another and hence became phenotypically indistinguishable from one another within just a few generations. 

This, van den Berghe explains, is because: 

Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109). 

Therefore, in the ancestral environment for which our psychological adaptations are designed (i.e. before the development of ships, aeroplanes and other methods of long-distance intercontinental transportation), different races did not generally coexist in the same locale. As a result, van den Berghe concludes: 

We have not been genetically selected to use phenotype as an ethnic marker, because, until quite recently, such a test would have been an extremely inaccurate one” (p240). 

Humans, then, have simply not had sufficient time to have evolved a domain-specific ‘racism module’ as suggested by some researchers.[7]

Racism is therefore, unlike ethnocentrism, not an innate instinct, but rather “a cultural invention” (p240). 

However, van den Berghe rejects the fashionable, politically correct notion that racism is “a western, much less a capitalist monopoly” (p32). 

On the contrary, racism, while not innate, is not a unique western invention but rather a recurrent reinvention, which almost invariably arises where phenotypically distinguishable groups come into contact with one another, if only because: 

Genetically inherited phenotypes are the easiest, most visible and most reliable predictors of group membership” (p32).

For example, van den Berghe describes the relations between the Tutsi, Hutu and Pygmy Twa of Rwanda and neighbouring regions as “a genuine brand of indigenous racism” which, according to van den Berghe, developed quite independently of any western colonial influence (p73).[8]

Moreover, where racial differences are the basis for ethnic identity, the result is, van den Berghe claims, ethnic hierarchies that are particularly rigid, intransigent and impermeable.

For van den Berghe, this then explains the failure of African-Americans to wholly assimilate into the US melting pot in stark contrast to successive waves of more recently-arrived European immigrants. 

Thus, van den Berghe observes: 

Blacks who have been English-speaking for several generations have been much less readily assimilated in both England… and the United States than European immigrants who spoke no English on arrival” (p219). 

This is because language barriers typically break down within a generation. 

As Judith Harris emphasizes in support of peer group socialization theory, the children of immigrants whose parents are not at all conversant in the language of their host culture nevertheless typically grow up speaking that language rather better than the first language of their parents, even though the latter was the cradle tongue to which they were first exposed, and which they first learnt to speak, inside the family home (see The Nurture Assumption: which I have reviewed here). 

As van den Berghe observes: 

It has been the distressing experience of millions of immigrant parents that, as soon as their children enter school in the host country, the children begin to resist speaking their mother tongue” (p258). 

While displeasing to those parents who wish to pass on their language, culture and traditions to their offspring, this response is wholly adaptive from the perspective of the offspring themselves:  

Children quickly discover that their home language is a restricted medium that [is] not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (p258).[9]

However, whereas one can learn a new language, it is not usually possible to change one’s race – the efforts of Rachel Dolezal, Elizabeth Warren, Jessica Krug and Michael Jackson notwithstanding. That said, due to the one-drop rule and the history of miscegenation in America, passing is sometimes possible (see below). 

Instead, phenotypic (i.e. racial) differences can only be eradicated after many generations of miscegenation, and sometimes, as in countries like the USA and Brazil, not even then. 

Meanwhile, van den Berghe observes, culinary differences are often the last aspect of immigrant culture to resist assimilation. However, he notes, increasingly even these become only a ‘ceremonial’ difference reserved for family gatherings (p260). 

Thus, van den Berghe surmises, Italian-Americans probably eat hamburgers as often as Americans of any other ethnic background, but at family gatherings they still revert to pasta and other traditional Italian cuisine. 

Yet even culinary differences eventually disappear. Thus, in both Britain and America, sausage has almost completely ceased to be thought of as a distinctively German dish (as have hamburgers, originally thought to have been named in reference to the city of Hamburg) and now pizza is perhaps on the verge of losing any residual association with Italians. 

Is Racism Always Worse than Ethnocentrism? 

Yet if racially-based ethnic hierarchies are particularly intransigent and impermeable, they are also, van den Berghe claims, “peculiarly conflict-ridden and unstable” (p33). 

Thus, van den Berghe seems to believe that racial prejudice and animosity tends to be more extreme and malevolent in nature than mere ethnocentrism as exists between different ethnic groups of the same race (i.e. not distinguishable from one another on the basis of inherited phenotypic traits such as skin colour). 

For example, van den Berghe claims that, during World War Two: 

There was a blatant difference in the level of ferociousness of American soldiers in the Pacific and European theaters… The Germans were misguided relatives (however distant), while the ‘Japs’ or the ‘Nips’ were an entirely different breed of inscrutable, treacherous, ‘yellow little bastards.’ This was reflected in differential behavior in such things as the taking (versus killing) of prisoners, the rhetoric of war propaganda (President Roosevelt in his wartime speeches repeatedly referred to his enemies as ‘the Nazis, the Fascists, and the Japanese’), the internment in ‘relocation camps’ of American citizens of Japanese extraction, and in the use of atomic weapons” (p57).[10]

Similarly, in his chapter on ‘Colonial Empires’, by which he means “imperialism over distant peoples who usually live in noncontiguous territories and who therefore look quite different from their conquerors, speak unrelated languages, and are so culturally alien to their colonial masters as to provide little basis for mutual understanding”, van den Berghe writes: 

Colonialism is… imperialism without the restraints of common bonds of history, culture, religion, marriage and blood that often exist when conquest takes place between neighbors” (p85). 

Thus, he claims: 

What makes for the special character of the colonial situation is the perception by the conqueror that he is dealing with totally unrelated, alien and, therefore, inferior people. Colonials are treated as people totally beyond the pale of kin selection” (p85). 

However, I am unpersuaded by van den Berghe’s claim that conflict between more distantly related ethnic groups is always, or even typically, more brutal than that among biologically and culturally more closely related groups. 

After all, even conquests of neighbouring peoples, identical in race, if not always in culture, to the conquering group, are often highly brutal, for example the British in Ireland or the Japanese in Korea and China in the first half of the twentieth century. 

Indeed, many of the most intense and intractable ethnic conflicts are those between neighbours and ethnic kin, who are racially (and culturally) very similar to one another. 

Thus, for example, Catholics and Protestants in Northern Ireland, Greeks and Turks in Cyprus, Bosnians, Croats, Serbs and Albanians in the Balkans, and even Jews and Palestinians in the Middle East, are all racially and genetically quite similar to one another, and also share many aspects of their culture. (The same is true, to give a topical example at the time of writing, of Ukrainians and Russians.) However, this has not noticeably ameliorated the nasty, intransigent and bloody conflicts that have been, and continue to be, waged among them.  

Of course, the main reason that most ethnic conflict occurs between close neighbours is because neighbouring groups are much more likely to come into contact, and hence into conflict, with one another, especially over competing claims to land.[11]

Yet these same neighbouring groups are also likely to be related to one another, both culturally and genetically, because of both shared origins and the inevitable history of illicit intermarriage or miscegenation, and cultural borrowings, that inevitably occur even among the most hostile of neighbours.[12]

Nevertheless, the continuation of intense ethnic animosity between ethnic groups who are genetically close to one another seems to pose a theoretical problem, not only for van den Berghe’s theory, but also, to an even greater degree, for Philippe Rushton’s so-called genetic similarity theory (which I have written about here), which argues that conflict between different ethnic groups is related to their relative degree of genetic differentiation from one another (Rushton 1998a; 1998b; 2005). 

It also poses a problem for the argument of political scientist Frank K Salter, who argues that populations should resist immigration by alien immigrants proportionally to the degree to which the alien immigrants are genetically distant from themselves (On Genetic Interests; see also Salter 2002). 

Assimilation, Acculturation and the American Melting Pot 

Since racially-based hierarchies result in ethnic boundaries that are both “peculiarly conflict-ridden and unstable” and also peculiarly rigid and impermeable, van den Berghe controversially concludes: 

There has never been a successful multiracial democracy” (p189).[13]

Of course, in assessing this claim, we must recognize that ‘success’ is not only a matter of degree, but can also be measured on several different dimensions. 

Thus, many people would regard the USA as the quintessential “successful… democracy”, even though the US has been multiracial, to some degree, for the entirety of its existence as a nation. 

Certainly, the USA has been successful economically, and indeed militarily.

However, the US has also long been plagued by interethnic conflict, and, although successful economically and militarily, it has yet to be successful in finding a way to manage its continued interethnic conflict, especially that between blacks and whites.

The USA is also afflicted with a relatively high rate of homicide and gun crime as compared to other developed economies, as well as low levels of literacy and numeracy and educational attainment. Although it is politically incorrect to acknowledge as much, these problems also likely reflect the USA’s ethnic diversity, in particular its large black underclass.

Indeed, as van den Berghe acknowledges, even societies divided by mere ethnicity rather than race seem highly conflict-prone (p186). 

Thus, assimilation, when it does occur, occurs only gradually, and only under certain conditions, namely when the group which is to be assimilated is “similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (p219). 

Thus, van den Berghe observes: 

People tend to assimilate and acculturate when their ethny [i.e. ethnic group] is geographically dispersed (often through migration), when they constitute a numerical minority living among strangers, when they are in a subordinate position and when they are allowed to assimilate by the dominant group” (p185). 

Moreover, van den Berghe is careful to distinguish what he calls assimilation from mere acculturation.  

The latter, acculturation, involves a subordinate group gradually adopting the norms, values, language, cultural traditions and folkways of the dominant culture into which they aspire to assimilate. It is therefore largely a unilateral process.[14]

In contrast, however, assimilation goes beyond this and involves members of the dominant host culture also actually welcoming, or at least accepting, the acculturated newcomers as a part of their own community.  

Thus, van den Berghe argues that host populations sometimes resist the assimilation of even wholly acculturated and hence culturally indistinguishable out-groups. Examples of groups excluded in this way include, according to van den Berghe, pariah castes, such as the untouchable dalits of the Indian subcontinent, the Burakumin of Japan and blacks in the USA.[15]

In other words, assimilation, unlike acculturation, is very much a two-way street. Just as it ‘takes two to tango’, so assimilation is a bilateral process: 

It takes two to assimilate” (p217).  

On the one hand, minority groups may sometimes themselves resist assimilation, or even acculturation, if they perceive themselves as better off maintaining their distinct identity. This is especially true of groups who perceive themselves as being, in some respects, better off than the host outgroup into which they refuse to be absorbed. 

Thus, middleman minorities, or market-dominant minorities, such as Jews in the West, the overseas Chinese in contemporary South-East Asia, the Lebanese in West Africa and South Asians in East Africa, being, on average, much wealthier than the bulk of the host populations among whom they live, often perceive no social or economic advantage to either assimilation or acculturation and hence resist the process, instead stubbornly maintaining their own language and traditions and marrying only among themselves. 

The same is also true, more obviously, of alien ruling elites, such as the colonial administrators, and settlers, in European colonial empires in Africa, India and elsewhere, for whom assimilation into native populations would have been anathema.

Passing’, ‘Pretendians’ and ‘Blackfishing’ 

Interestingly, just as market-dominant minorities, middleman minorities, and European colonial rulers usually felt no need to assimilate into the host society in whose midst they lived, because to do so would have endangered their privileged position within this host society, so recent immigrants to America may no longer perceive any advantage to assimilation. 

On the contrary, there may now be an economic disincentive operating against assimilation, at least if assimilation means forgoing the right to benefit from affirmative action in employment and college admissions. 

Thus, in the nineteenth and early twentieth centuries, the phenomenon of passing, at least in America, typically involved non-whites, especially light-skinned mixed-race African-Americans, attempting to pass as white or, if this were not realistic, sometimes as Native American.  

Some non-whites, such as Bhagat Singh Thind and Takao Ozawa, even brought legal actions in order to be racially reclassified as ‘white’ in order to benefit from America’s then overtly racialist naturalization law.

Contemporary cases of passing, however, though rarely referred to by this term, typically involve whites themselves attempting to somehow pass themselves off as some variety of non-white (see Hannam 2021). 

Recent high-profile examples have included Rachel Dolezal, Elizabeth Warren and Jessica Krug. 

Interestingly, all three of these women were both employed in academia and involved in leftist politics – two spheres in which adopting a non-white identity is likely to be especially advantageous, given the widespread adoption of affirmative action in college admissions and appointments, and the rampant anti-white animus that infuses so much of academia and the cultural Marxist left.[16]

Indeed, the phenomenon is now so common that it even has its own associated set of neologisms, such as Pretendian, ‘blackfishing’ and, in Australia, box-ticker.[17]

Indeed, one remarkable recent survey purported to find that fully 34% of white college applicants in the United States admitted to lying about their ethnicity on their applications, in most cases either to improve their chances of admission or to qualify for financial aid. 

Although Rachel Dolezal, Elizabeth Warren and Jessica Krug were all women, this survey found that white male applicants were even more likely to lie about their ethnicity than were white female applicants, with only 16% of white female applicants admitting to lying, as compared to nearly half (48%) of white males.[18]

This is, of course, consistent with the fact that it is white males who are the primary victims of affirmative action and other forms of discrimination.  

This strongly suggests that, whereas there were formerly social (and legal) benefits associated with identifying as white, today the advantages accrue instead to those able to assume a non-white identity.  

For all the talk of so-called ‘white privilege’, when whites and mixed-race people, together with others of ambiguous racial identity, preferentially choose to pose as non-white in order to take advantage of the perceived benefits of assuming such an identity, they are voting with their feet and thereby demonstrating what economists call revealed preferences. 

This, of course, means that recent immigrants to America, such as Hispanics, will have rather less incentive to integrate into the American mainstream than did earlier waves of European immigrants, such as Irish, Poles, Jews and Italians, the latter having been, primarily, the victims of discrimination rather than its beneficiaries. 

After all, who would want to be just another boring, unhyphenated American when to do so would presumably mean relinquishing any right to benefit from affirmative action in job recruitment or college admissions, not to mention becoming part of the hated white ‘oppressor’ class? 

In short, ‘white privilege’ isn’t all it’s cracked up to be. 

This perverse incentive against assimilation obviously ought to be worrying to anyone concerned with the future of America as a stable, unified polity. 

Ethnostates – or Consociationalism

Given the ubiquity of ethnic conflict, and the fact that assimilation occurs, if at all, only gradually and, even then, only under certain conditions, a pessimist (or indeed a racial separatist) might conclude that the only way to prevent ethnic conflict is for different ethnic groups to be given separate territories with complete independence and territorial sovereignty. 

This would involve the partition of the world into separate ethnically homogenous ethnostates, as advocated by racial separatists and many in the alt-right. 

Yet, quite apart from the practical difficulties such an arrangement would entail, not least the need for large-scale forcible displacements of populations, this ‘universal nationalism’, as championed by political scientist Frank K Salter among others, would arguably only shift the locus of ethnic conflict from within the borders of a single multi-ethnic state to between those of separate ethnostates – and conflict between states can be just as destructive as conflict within states, as countless wars between states throughout history have amply proven.  

In the absence of assimilation, then, perhaps the fairest and least conflictual solution is what van den Berghe terms consociationalism. This term refers to a form of ethnic power-sharing, whereby elites from each group agree to share power, each usually retaining a veto over major decisions, and with proportionate representation for each group in all important positions of power. 

This seems to be roughly the basis of the power-sharing agreement imposed on Northern Ireland in the Good Friday Agreement, which was largely successful in bringing an end to the ethnic conflict known as ‘the Troubles’.[19]

On the other hand, however, power-sharing was explicitly rejected by both the ANC and the international anti-apartheid movement as a solution in another ethnically-divided polity, namely South Africa, in favour of majority rule – even though the result has been a situation very similar to that which led to the Troubles in Northern Ireland, namely an effective one-party state, with a single party in power for successive decades and institutionalized discrimination against minorities.[20]

Consociationalism, or ethnic power-sharing, is also arguably the model towards which the USA and other western polities are increasingly moving, with quotas and so-called ‘affirmative action’ increasingly replacing the earlier ideals of appointment by merit, color blindness and freedom of association, and with multiculturalism and cultural pluralism replacing the earlier ideal of assimilation. 

Perhaps the model consociationalist democracy is van den Berghe’s own native Belgium, where, he reports: 

All the linguistic, class, religious and party-political quarrels and street demonstrations have yet to produce a single fatality” (p199).[21]

Belgium is, however, very much the exception rather than the rule, and, at any rate, though peaceful, remains very much a divided society

Indeed, power-sharing institutions, in giving official, institutional recognition to the existing ethnic divide, serve only to reinforce and ossify that divide, making successful integration and assimilation almost impossible – and certainly even less likely than in the absence of such institutional arrangements. 

Moreover, consociationalism can be maintained, van den Berghe emphasizes, only in a limited range of circumstances, the key criterion being that the groups in question are equal, or almost equal, to one another in status, and not organized into an ethnic hierarchy. 

However, even when the necessary conditions are met, it invariably involves a precarious balancing act. 

Just how precarious is illustrated by the fate of other formerly stable consociationalist states. Thus, van den Berghe notes the irony that earlier writers on the topic had cited Lebanon as “a model [consociationalist democracy] in the Third World” just a few years before the Lebanese Civil War broke out in the 1970s (p191). 

His point is, ironically, only strengthened by the fact that, in the three decades since his book was first published, two of his own examples of consociationalism, namely the USSR and Yugoslavia, have themselves since descended into civil war and fragmented along ethnic lines. 

Slavery and Other Recurrent Situations  

In the central section of the book, van den Berghe discusses such historically recurrent racial relationships as “slavery”, middleman minorities, “caste” and “colonialism”. 

In large part, his analyses of these institutions and phenomena do not depend on his sociobiological theory of ethnocentrism, and are worth reading even for readers unconvinced by this theory – or even by readers skeptical of sociobiology and evolutionary psychology altogether. 

Nevertheless, the sociobiological model continues to guide his analysis. 

Take, for example, his chapter on slavery. 

Although the overtly racial slavery of the New World was quite unique, slavery often has an ethnic dimension, since slaves are often captured during warfare from among enemy groups. 

Indeed, the very word slave is derived from the ethnonym, Slav, due to the frequency with which the latter were captured as slaves, both by Christians and Muslims.[22]

In particular, van den Berghe argues that: 

An essential feature of slave status is being torn out of one’s network of kin selection. This condition generally results from forcible removal of the slave from his home group by capture and purchase” (p120).

This then partly explains, for example, why European settlers were far less successful in enslaving the native inhabitants of the Americas than they were in exploiting the slave labour of African slaves who had been shipped across the Atlantic, far from their original kin groups, precisely for this purpose.[23]

Thus, for van den Berghe, the quintessential slave is: 

Not only involuntarily among ethnic strangers in a strange land: he is there alone, without his support group of kinsmen and fellow ethnics” (p115). 

Here van den Berghe seemingly anticipates the key insight of Jamaican sociologist Orlando Patterson, whose comparative study of slavery, Slavery and Social Death, terms this key characteristic of slavery natal alienation.[24]

This, however, is likely to be only a temporary condition since, if allowed to reproduce, slaves would gradually put down roots, producing new families and indeed whole communities of slaves.[25]

When this occurs, however, slaves gradually, over generations, cease to be true slaves. The result is that: 

Slavery can long endure as an institution in a given society, but the slave status of individuals is typically only semipermanent and nonhereditary… Unless a constantly renewed supply of slaves enters a society, slavery, as an institution, tends to disappear and transform itself into something else” (p120). 

This then explains the gradual transformation of slavery during the medieval period into serfdom in much of Europe, and perhaps also the emergence of some pariah castes such as the untouchables of India. 

Paradoxically, van den Berghe argues that racism became particularly virulent in the West precisely because of Western societies’ ostensible commitment to notions of liberty and the rights of man, notions obviously incompatible with slavery. 

Thus, whereas most civilizations simply took the institution of slavery for granted, feeling no especial need to justify its existence, western civilization, given its ostensible commitment to such lofty notions as individual liberty and the equality of man, was always on the defensive, feeling a constant need to justify and defend slavery. 

The main justification hit upon was racialism and theories of racial superiority: 

If it was immoral to enslave people, but if at the same time it was vastly profitable to do so, then a simple solution to the dilemma presented itself: slavery became acceptable if slaves could somehow be defined as somewhat less than fully human” (p115).  

This then explains much of the virulence of western racialism in much of the eighteenth, nineteenth and even early-twentieth centuries.[26]

Another important, and related, ideological justification for slavery was what van den Berghe refers to as ‘paternalism’. Thus, he observes that: 

All chattel slave regimes developed a legitimating ideology of paternalism” (p131). 

Thus, in the American South, the “benevolent master” was portrayed as a protective “father figure”, while slaves were portrayed as childlike and incapable of living an independent existence and hence as benefiting from their own enslavement (p131). 

This, of course, was nonsense. As van den Berghe cynically observes: 

Where the parentage was fictive, so, we may assume, is the benevolence” (p131). 

Thus, exploitation was, in sociobiological terms, disguised as kin-selected parental benevolence. 

However, despite the dehumanization of slaves, the imbalance of power between slave and master, together with men’s innate and evolved desire for promiscuity, made the sexual exploitation of female slaves by male masters all but inevitable.[27]

As van den Berghe observes: 

Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109). 

Thus, he notes the hypocrisy whereby: 

Dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women” (p33). 

The result was that the fictive ideology of ‘paternalism’ that served to justify slavery often gave way to literal paternity of the next generation of the slave population. 

This created two problems. First, it made the racial justification for slavery, namely the ostensible inferiority of black people, ring increasingly hollow, as ostensibly ‘black’ slaves acquired greater European ancestry, lighter skins and more Caucasoid features with each successive generation of miscegenation. 

Second, and more important, it also meant that the exploitation of this next generation of slaves by their owners potentially violated the logic of kin selection, because: 

If slaves become kinsmen, you cannot exploit them without indirectly exploiting yourself” (p134).[28]

This, van den Berghe surmises, led many slave owners to free those among the offspring of slave women whom they themselves, or their male relatives, had fathered. As evidence, he observes:  

In all [European colonial] slave regimes, there was a close association between manumission and European ancestry. In 1850 in the United States, for example, an estimated 37% of free ‘negroes’ had white ancestry, compared to about 10% of the slave population” (p132). 

This leads van den Berghe to conclude that many such free people of color – who were referred to as people of color precisely because their substantial degree of white ancestry precluded any simple identification as black or negro – had been freed by their owners precisely because their owners were now also their kinsmen. Indeed, many may have been freed by the very slave-master who had fathered them. 

Thus, to give a famous example, Thomas Jefferson is thought to have fathered six offspring, four of whom survived to adulthood, with his slave, Sally Hemings – who was herself already three-quarters white, and indeed Jefferson’s wife’s own half-sister, on account of miscegenation in previous generations. 

Of these four surviving offspring, two were allowed to escape, probably with Jefferson’s tacit permission or at least acquiescence, while the remaining two were freed upon his death in his will.[29]

This seems to have been a common pattern. Thus, van den Berghe reports: 

Only about one tenth of the ‘negro’ population of the United States was free in 1860. A greatly disproportionate number of them were mulattoes, and, thus, presumably often blood relatives of the master who emancipated them or their ancestors. The only other slaves who were regularly [freed] were old people past productive and reproductive age, so as to avoid the cost of feeding the aged and infirm” (p129). 

Yet this made the continuance of slavery almost impossible, because, with each new generation, more and more slaves would be freed.  

Other slave systems got around this problem by continually capturing or importing new slaves in order to replenish the slave population. However, this option was denied to American slaveholders by the abolition of the slave trade in 1807. 

Instead, the Americans were unique in attempting to ‘breed’ slaves. This leads van den Berghe to conclude that: 

By making the slave woman widely available to her master…Western slavery thus literally contained the genetic seeds of its own destruction” (p134).[30]

Synthesising Marxism and Sociobiology 

Given the potential appeal of his theory to nationalists, and even to racialists, it is perhaps surprising that van den Berghe draws heavily on Marxist theory. Although Marxists were almost unanimously hostile to sociobiology, sociobiologists frequently emphasized the potential compatibility of Marxist theory and sociobiology (e.g. The Evolution of Human Sociality). 

However, van den Berghe remains, to my knowledge, the only figure (except myself) to actually successfully synthesize sociobiology and Marxism in order to produce novel theory.  

Thus, for example, he argues that, in almost every society in existence, class exploitation is masked by an ideology (in the Marxist sense) that disguises exploitation as either: 

1) Kin-selected nepotistic altruism – e.g. the king or dictator is portrayed as benevolent ‘father’ of the nation; or
2) Mutually beneficial reciprocity – i.e. social contract theory or democracy (p60). 

However, contrary to orthodox Marxist theory, van den Berghe regards ethnic sentiments as more fundamental than class loyalty since, whereas the latter is “dependent on a commonality of interests”, the former is often “irrational” (p243). 

Nationalist conflicts are among the most intractable and unamenable to reason and compromise… It seems a great many people care passionately whether they are ruled and exploited by members of their own ethny or foreigners” (p62). 

In short, van den Berghe concludes: 

Blood runs thicker than money” (p243). 

Another difference is that, whereas Marxists view control over the so-called means of production (i.e. the means necessary to produce goods for sale) as the ultimate factor determining exploitation and conflict in human societies, Darwinians instead focus on conflict over access to what I have termed the means of reproduction – in other words, the means necessary to produce offspring (i.e. fertile females, their wombs and vaginas etc.). 

This is because, from a Darwinian perspective: 

The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (p165). 

Thus, unlike his contemporary Darwin, Karl Marx, for all his ostensible radicalism, was, in his emphasis on economics rather than sex, just another Victorian sexual prude.[31]

Mating, Miscegenation and Intermarriage 

Given that reproduction, not production, is the ultimate focus of individual and societal conflict and competition, van den Berghe argues that ultimately questions of equality, inequality and assimilation must also be determined by reproductive, not economic, criteria. 

Thus, he concludes, intermarriage, especially if it occurs, not only frequently, but also in both directions (i.e. involves both males and females of both ethnicities, rather than always involving males of one ethnic group, usually the dominant ethnic group, taking females of the other ethnic group, usually the subordinate group, as wives), is the ultimate measure of racial equality and assimilation: 

Marriage, especially if it happens in both directions, that is with both men and women of both groups marrying out, is probably the best measure of assimilation” (p218). 

In contrast, however, he also emphasizes that mere “concubinage is frequent [even] in the absence of assimilation” (p218). 

Moreover, such concubinage invariably involves males of the dominant-group taking females from the subordinate-group as concubines, whereas dominant-group females are invariably off-limits as sexual partners for subordinate group males. 

Thus, van den Berghe observes, although “dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women”, they nevertheless are jealously protective of their own women and enforce strict double-standards (p33). 

For example, historian Wynn Craig Wade, in his history of the Ku Klux Klan (which I have reviewed here), writes: 

In [antebellum] Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation.” (The Fiery Cross: p20). 

The result, van den Berghe reports, is that: 

The subordinate group in an ethnic hierarchy invariably ‘loses’ more women to males of the dominant group than vice versa” (p75). 

Indeed, this same pattern is even apparent in the DNA of contemporary populations. Thus, geneticist James Watson reports that, whereas the mitochondrial DNA of contemporary Colombians, which is passed down the female line, shows a “range of Amerindian MtDNA types”, the Y-chromosomes of these same Colombians are 94% European. This leads him to conclude: 

The virtual absence of Amerindian Y chromosome types, reveals the tragic story of colonial genocide: indigenous men were eliminated while local women were sexually ‘assimilated’ by the conquistadors” (DNA: The Secret of Life: p257). 

As van den Berghe himself observes: 

It is no accident that military conquest is so often accompanied by the killing, enslavement and castration of males, and the raping and capturing of females” (p75). 

This, of course, reflects the fact that, in Darwinian terms, the ultimate purpose of power is to maximize reproductive success. 

However, while the subjugated ethnic group as a whole inevitably suffers a diminution in its fitness, there is a decided gender imbalance in who bears the brunt of this loss. 

The men of the subordinate group are always the losers and therefore always have a reproductive interest in overthrowing the system. The women of the subordinate group, however frequently have the option of being reproductively successful with dominant-group males” (p27). 

Indeed, subordinate-group females are not only able, and sometimes forced, to mate with dominant-group males, but, in purely fitness terms, they may even benefit from such an arrangement.  

Hypergamy (mating upward for women) is a fitness enhancing strategy for women, and, therefore, subordinate-group women do not always resist being ‘taken over’ by dominant-group men” (p75). 

This is because, by so doing, they thereby obtain access both to the greater resources that dominant-group males are able to provide in return for sexual access or as provisioning for their offspring, and to the ‘superior’ genes which facilitated the conquest in the first place. 

Thus, throughout history, women and girls have been altogether too willing to consort and intermarry with their conquerors. 

The result of this gender imbalance in the consequences of conquest and subjugation is a lack of solidarity between men and women of the subjugated group. 

This sex asymmetry in fitness strategies in ethnically stratified societies often creates tension between the sexes within subordinate groups. The female option of fitness maximization through hypergamy is deeply resented by subordinate group males” (p76). 

Indeed, even captured females who were enslaved by their conquerors sometimes did surprisingly well out of this arrangement, at least if they were young and beautiful, and hence lucky enough to be recruited into the harem of a king, emperor or other powerful male.

One slave captured in Eastern Europe even went on to become effective queen of the Ottoman Empire at the height of its power. Hurrem Sultan, as she came to be known, was, of course, exceptional, but only in degree. Members of royal harems may have been secluded, but they also lived in some luxury.

Indeed, even in puritanical North America, where concubinage was very much frowned upon, van den Berghe reports that “slavery was much tougher on men than on women”, since: 

Slavery drastically reduced the fitness of male slaves; it had little or no such adverse effect on the fitness of female slaves whose masters had a double interest – financial and genetic – in having them reproduce at maximum capacity” (p133). 

Van den Berghe even tentatively ventures: 

It is perhaps not far-fetched to suggest that, even today, much of the ambivalence in relations between black men and women in America… has its roots in the highly asymmetrical mating system of the slave plantation” (p133).[32]

Miscegenation and Intermarriage in Modern America 

Yet, curiously, patterns of interracial dating in contemporary America are anomalous – at least if we believe the pervasive myth that America is a ‘systemically racist’ society where black people are still oppressed and discriminated against. 

On the one hand, genetic data confirms that, historically, matings between white men and black women were more frequent than the reverse, since African-American mitochondrial DNA, passed down the female line, is overwhelmingly African in origin, whereas their Y chromosomes, passed down the male line, are often European in origin (Lind et al 2007). 

However, recent census data suggests that this pattern is now reversed. Thus, black men are now about two and a half times as likely to marry white women as black women are to marry white men (Fryer 2007; see also Sailer 1997). 

This seemingly suggests white American males are actually losing out in reproductive competition to black males. 

This observation led controversial behavioural geneticist Glayde Whitney to claim: 

By many traditional anthropological criteria African-Americans are now one of the dominant social groups in America – at least they are dominant over whites. There is a tremendous and continuing transfer of property, land and women from the subordinate race to the dominant race” (Whitney 1999: p95). 

However, this conclusion is difficult to square with the continued disproportionate economic deprivation of much of black America. In short, African-Americans may be reproductively successful, and perhaps even, in some respects, socially privileged, but, despite benefiting from systematic discrimination in their favour in employment and admission to institutions of higher education, they are clearly also, on average, economically much worse-off as compared to whites and Asians in modern America.  

Instead, perhaps the beginnings of an explanation for this paradox can be sought in van den Berghe’s own later collaboration with anthropologist, and HBD blogger, Peter Frost

Here, in a co-authored paper, van den Berghe and Frost argue that, across cultures, there is a general sexual preference for females with somewhat lighter complexion than the group average (van den Berghe and Frost 1986). 

However, as Frost explains in a more recent work, Fair Women, Dark Men: The Forgotten Roots of Racial Prejudice, preferences with regard to male complexion are more ambivalent (see also Feinman & Gill 1977). 

Thus, whereas, according to the title of a novel, two films and a hit Broadway musical, ‘Gentlemen Prefer Blondes’ (and blondes also reputedly, perhaps as a consequence, have more fun), the idealized male romantic partner is instead tall, dark and handsome. 

In subsequent work, Frost argues that ecological conditions in sub-Saharan Africa permitted high levels of polygyny, because women were economically self-supporting, and this increased the intensity of selection for traits (e.g. increased muscularity, masculinity, athleticism and perhaps outgoing, sexually-aggressive personalities) which enhance the ability of African-descended males to compete for mates and attract females (Frost 2008). 

In contrast, Frost argues that there was greater selection for female attractiveness (and perhaps female chastity) in areas such as Northern Europe and Northeast Asia, where, to successfully reproduce, women were required to attract a male willing to provision them during cold winters throughout their gestation, lactation and beyond (Frost 2008). 

This then suggests that African males have simply evolved to be, on average, more attractive to women, whereas European and Asian females have evolved to be more attractive to men. 

This speculation is supported by a couple of recent studies of facial attractiveness, which found that black male faces were rated as most attractive to members of the opposite sex, but that, for female faces, the pattern was reversed (Lewis 2011; Lewis 2012). 

These findings could also go some way towards explaining patterns of interracial dating in the contemporary west (Lewis 2012). 

“The Most Explosive Aspect of Interethnic Relations” 

However, such an explanation is likely to be popular neither with racialists, for whom miscegenation is anathema, nor with racial egalitarians, for whom, as a matter of sacrosanct dogma, all races must be equal in all things, even aesthetics and sex appeal.[33]

Thus, when evolutionary psychologist Satoshi Kanazawa made a similar claim in a 2011 blog post, outrage predictably ensued: the post was swiftly deleted, his then-blog dropped by its host, Psychology Today, and the author reprimanded by his employer, the London School of Economics, and forbidden from writing any blog or non-scholarly publications for a whole year. 

Yet all of this occurred within a year of the publication of the two papers cited above that largely corroborated Kanazawa’s finding (Lewis 2011; Lewis 2012). 

Such a reaction is, in fact, little surprise. As van den Berghe points out: 

It is no accident that the most explosive aspect of interethnic relations is sexual contact across ethnic (or racial) lines” (p75). 

After all, from a sociobiological perspective, competition over reproductive access to fertile females is Darwinian conflict in its most direct and primordial form. 

Van den Berghe’s claim that interethnic sexual contact is “the most explosive aspect” of interethnic relations also has support from the history of racial conflict in the USA and elsewhere. 

The spectre of interracial sexual contact, real or imagined, has motivated several of the most notorious racially-motivated ‘hate-crimes’ of American history, from the torture-murder of Emmett Till for allegedly propositioning a white woman, to the various atrocities of the reconstruction-era Ku Klux Klan in defence of the ostensible virtue of ‘white womanhood’, to the recent Charleston church shooting, ostensibly committed in revenge for the allegedly disproportionate rate of rape of white women by black men.[34]

Meanwhile, interracial sexual relations are also implicated in some of American history’s most infamous alleged miscarriages of justice, from the Scottsboro Boys and Groveland Four cases, and the more recent Central Park jogger case, all of which involved allegations of interracial rape, to the comparatively trivial conduct alleged, but by no means trivial punishment imposed, in the so-called Monroe ‘kissing case’. 

Allegations of interracial rape also seem to be the most common precursor of full-blown race riots. 

Thus, in early-twentieth century America, the race riots in Springfield, Illinois in 1908, in Omaha, Nebraska in 1919, in Tulsa, Oklahoma in 1921 and in Rosewood, Florida in 1923 were all ignited, at least in part, by allegations of interracial rape or sexual assault. 

Meanwhile, on the other side of the Atlantic, multi-racial Britain’s first modern post-war race riot, the 1958 Notting Hill riot in London, began with a public argument between an interracial couple, when white passers-by joined in on the side of the white woman against her black Jamaican husband (and pimp) before turning on them both. 

Meanwhile, Britain’s most recent unambiguous race riot, the 2005 Birmingham riot, an entirely non-white affair, was ignited by the allegation that a black girl had been gang-raped by South Asians.

Meanwhile, at least in the west, whites no longer seem to participate in race riots, save as victims. However, an exception was the 2005 Cronulla riots in Sydney, Australia, which were ignited by the allegation that Middle Eastern males were sexually harassing white Australian girls on Sydney beaches. 

Similarly, in Britain, though riots have yet to result, the spectre of so-called Muslim grooming gangs, preying on, and pimping out, underage white British girls in northern towns across England, has arguably done more to ignite anti-Muslim sentiment among whites in the UK than a whole series of Jihadist terrorist attacks on British civilian targets. 

Thus, in Race: The Reality of Human Differences (which I have reviewed here, here and here) Sarich and Miele caution that miscegenation, often touted as the universal panacea to racism simply because, if practiced sufficiently widely, it would eventually eliminate all racial differences, or at least blur the lines between racial groups, may, at least in the short-term, actually incite racist attacks. 

This, they argue, is because: 

Viewed from the racial solidarist perspective, intermarriage is an act of race war. Every ovum that is impregnated by the sperm of a member of a different race is one less of that precious commodity to be impregnated by a member of its own race and thereby ensure its survival” (Race: The Reality of Human Differences: p256) 

This “racial solidarist perspective” is, of course, a crudely group selectionist view of Darwinian competition, and it leads Sarich and Miele to hypothesize: 

Paradoxically, intermarriage, particularly of females of the majority group with males of a minority group, is the factor most likely to cause some extremist terrorist group to feel the need to launch such an attack” (Race: The Reality of Human Differences: p255). 

In other words, in sociobiological terms, ‘Robert’, a character from one of Michel Houellebecq’s novels, has it right when he claims: 

What is really at stake in racial struggles… is neither economic nor cultural, it is brutal and biological: It is competition for the cunts of young women” (Platform: p82). 

Endnotes

[1] Admittedly, the Croatian War of Independence is indeed sometimes said to have been triggered, or at least precipitated, by a football match between Dinamo Zagreb and Red Star Belgrade, and the riot that occurred at the ground on that day. However, this war was, of course, ethnic in origin, fought between Croats and Serbs, and the football match served as a triggering event only because the two teams were overwhelmingly supported by Croats and Serbs respectively.
This leads to an interesting observation – namely that rivalries such as those between supporters of different football teams tend to become especially malignant and acrimonious when support for one team or the other comes to be inextricably linked to ethnic identity.
Thus it is surely no accident that, in the UK, the most intense rivalry between groups of football supporters is that between supporters of Rangers and Celtic in Glasgow, at least in part because the rivalry has become linked to religion, which was, at least until recently, a marker for ancestry and ethnicity. An apparently even more intense rivalry was that between Linfield and Belfast Celtic in Northern Ireland, which was also based on a parallel religious and ethnic divide, and which ultimately became so acrimonious that one of the two teams had to withdraw from domestic football and ultimately ceased to exist.

[2] Actually, however, contrary to Brigandt’s critique, it is clear that van den Berghe intended his “biological golden rule” only as a catchy and memorable aphorism, crudely summarizing Hamilton’s rule, rather than a quantitative scientific law akin to, or rivalling, Hamilton’s Rule itself. Therefore, this aspect of Brigandt’s critique is, in my view, misplaced. Indeed, it is difficult to see how this supposed rule could be applied as a quantitative scientific law, since relatedness, on the one hand, and altruism, on the other, are measured in different currencies. 

[3] Thus, van den Berghe concedes that: 

In many cases, the common descent ascribed to an ethny is fictive. In fact, in most cases, it is partly fictive” (p27). 

[4] The question of racial nationalism (i.e. encompassing all members of a given race, not just those of a single ethnicity or language group) is actually more complex. Certainly, members of the same race do indeed share some degree of kinship, in so far as they are indeed (almost by definition) on average more closely biologically related to one another than to members of other races – and indeed that relatedness is obviously apparent in their phenotypic resemblance to one another. This suggests that racial nationalist movements such as that of, say, UNIA or of the Japanese imperialists, might have more potential as a viable form of nationalism than do attempts to unite racially disparate ethnicities, such as civic nationalism in the contemporary USA. The same may also be true of Oswald Mosley’s Europe a Nation campaign, at least while Europe remained primarily monoracial (i.e. white). However, any such racial nationalism would incorporate a far larger and more culturally, linguistically and genetically disparate group than any form of nationalism that has previously proven capable of mobilizing support.
Thus, Marcus Garvey’s attempt to create a kind of pan-African ethnic identity enjoyed little success and was largely restricted to North America, where African-Americans do indeed share a common language and culture in addition to their race. Similarly, the efforts of Japanese nationalists to mobilize a kind of pan-Asian nationalism in support of their imperial aspirations during the first half of the twentieth century were an unmitigated failure, though this was partly because of the brutality with which they conquered and suppressed the other Asian nationalities whose support for pan-Asianism they intermittently and half-heartedly sought to enlist.
On the other hand, it is sometimes suggested that, in the early twentieth century, a white supremacist ideology was largely taken for granted among whites. However, while to some extent true, this shared ideology of white supremacism did not prevent the untold devastation wrought by the European wars of the early twentieth century, namely World Wars I and II, which Patrick Buchanan has collectively termed The Great Civil War of the West.
Thus, European nationalisms usually defined themselves by opposition to other European peoples and powers. Just as Irish nationalism is defined largely by opposition to Britain, and Scottish nationalism by opposition to England, so English (and British) nationalism has itself traditionally been directed against rival European powers such as France and Germany (and formerly Spain), while French nationalism seems to have defined itself primarily in opposition to the Germans and the British, and German nationalism in opposition to the French and Slavs, etc.
It is true that, in the USA, a kind of pan-white American nationalism did seem to prevail in the early twentieth century, albeit initially limited to white protestants, and excluding at least some recent European immigrants (e.g. Italians, Jews). This is, however, a consequence of the so-called melting pot, and really only amounts to yet another parochial nationalism, namely that of a newly-formed ethnic group – white Americans.
At any rate, today white American nationalism is, at most, decidedly muted in form – a kind of implicit white racial consciousness, or, to coin a phrase, the nationalism that dare not speak its name. Thus, Van den Berghe observes: 

In the United States, the whites are an overwhelming majority, so much so that they cannot be meaningfully conceived of as a ruling group at all. The label ‘white’ in the United States does not correspond to a well-defined ethnic or racial group with a high degree of social organization or even self-consciousness, except regionally in the south” (p183). 

Van den Berghe wrote this in 1981. Today, of course, whites are no longer such an “overwhelming majority” of the US population. On the contrary, they are already well on the way to becoming a minority in America, a milestone that is likely to be reached over the coming decades.
Yet, curiously, white ‘racial consciousness’ is seemingly even more muted and implicit today than it was back when van den Berghe authored his book – and this is seen even in the South, which van den Berghe cited as an exception and lone bastion of white identity politics.
True, White Southerners may vote as solidly for Republican candidates as they once did for the Democrats. However, overt appeals to white racial interests are now as anathema in the South as elsewhere.
Thus, as recently as 1990, a more or less open white racialist like David Duke was able to win a majority of the white vote in Louisiana in his run for the Senate. Today, this is unimaginable.
If the reason that whites lack any ‘racial consciousness’ is indeed, as van den Berghe claims, because they represent such an “overwhelming majority” of the American population, then it is interesting to speculate if and when, during the ongoing process of white demographic displacement, this will cease to be the case.
One thing seems certain: If and when it does ever occur, it will be too late to make any difference to the ongoing process of demographic displacement that some have termed ‘The Great Replacement’ or a third demographic transition.

[5] Of course, a preference for those who look similar to oneself (or one’s other relatives) may itself function as a form of kin recognition (i.e. of recognizing who is kin and who is not). This is referred to in biology as phenotype matching. Moreover, as Richard Dawkins has speculated in The Selfish Gene (reviewed here), racial feeling could conceivably have evolved through a misfiring of such a crude heuristic (The Selfish Gene: p100).

[6] Actually, I suspect that, at least historically, both mothers and fathers may indeed, on average, have provided rather less care for their mixed-race offspring than for offspring of the same race as themselves, simply because mixed-race offspring were more likely to be born out of wedlock, not least because interracial marriage was, until recently, strongly frowned upon and, in some jurisdictions, either not legally permitted or even outright criminalized. Both mothers and fathers tended to provide less care for illegitimate offspring: fathers because they often refused to acknowledge their illegitimate offspring, had little or no contact with them, and may not even have been aware of their existence; and mothers because, lacking paternal support, they usually had no means of raising their illegitimate offspring alone and hence often gave them up for adoption or fostering.

[7] On the other hand, in his paper, ‘An Integrated Evolutionary Perspective on Ethnicity’, the controversial anti-Semitic evolutionary psychologist Kevin MacDonald disagrees with this conclusion, citing personal communication from geneticist and anthropologist Henry Harpending for the argument that: 

“Long distance migrations have easily occurred on foot and over several generations, bringing people who look different for genetic reasons into contact with each other. Examples include the Bantu in South Africa living close to the Khoisans, or the pygmies living close to non-pygmies. The various groups in Rwanda and Burundi look quite different and came into contact with each other on foot. Harpending notes that it is ‘very likely’ that such encounters between peoples who look different for genetic reasons have been common for the last 40,000 years of human history; the view that humans were mostly sessile and living at a static carrying capacity is contradicted by history and by archaeology. Harpending points instead to ‘starbursts of population expansion’. For example, the Inuits settled in the arctic and exterminated the Dorsets within a few hundred years; the Bantu expansion into central and southern Africa happened in a millennium or less, prior to which Africa was mostly the yellow (i.e., Khoisan) continent, not the black continent. Other examples include the Han expansion in China, the Numic expansion in northern America, the Zulu expansion in southern Africa during the last few centuries, and the present day expansion of the Yanomamo in South America. There has also been a long history of invasions of Europe from the east. ‘In the starburst world people would have had plenty of contact with very different looking people’” (MacDonald 2001: p70).

[8] Others have argued that the differences between Tutsi and Hutu are indeed largely a western creation: partly a product of the divide-and-rule strategy supposedly deliberately employed by European colonialists, and partly of a theory of Tutsi racial superiority promulgated by European racial anthropologists, known as the Hamitic theory of Tutsi origins, which suggested that the Tutsi had migrated from the Horn of Africa and had benefited from Caucasoid ancestry, as reflected in their supposed physiological differences from the indigenous Hutu (e.g. lighter complexions, greater height, narrower noses).
On this view, the distinction between Hutu and Tutsi was originally primarily socioeconomic rather than racial, and, at least formerly, the boundaries between the two groups were quite fluid.
I suspect this view is nonsense, reflecting political correctness and the leftist tendency to excuse any evidence of dysfunction or oppression in non-Western cultures as necessarily a product of the malign influence of western colonizers. (Most preposterously, even the Indian caste system has been blamed on British colonizers, although it actually predated them, in one form or another, by several thousand years.)
With respect to the division between Tutsi and Hutu, there are not only morphological differences between the two groups in average stature, nose width and complexion, but also substantial differences in the prevalence of genes for lactose tolerance and for the sickle-cell trait. These differences do indeed seem to suggest that, as predicted by the reviled ‘Hamitic theory’, the Tutsi have affinities with populations from the Horn of Africa and East Africa, a conclusion that modern genome analysis tends to confirm.

[9] Exceptions, where immigrant groups retain their distinctive language for multiple generations, occur where immigrants speaking a particular language arrive in sufficient numbers, and are sufficiently isolated in ethnic enclaves and ghettos, that they mix primarily or exclusively with people speaking the same language as themselves. A related exception is in respect of economically, politically or socially dominant minorities, such as alien colonizers, as well as market-dominant or middleman minorities, who often resist assimilation into the mainstream culture precisely so as to maintain their cultural separateness and hence their privileged position within society, and who also, partly for this reason, take steps to socialize, and ensure their offspring socialize, primarily among their own group. 

[10] Some German-Americans were also interned during World War II. However, far fewer were interned than among Japanese-Americans, especially on a per capita basis.
Nevertheless, some German-Americans were treated very badly indeed, yet, unlike Japanese-Americans, they have yet to receive a government apology or compensation. Moreover, there was perhaps some justification for the differing treatment accorded Japanese- and German-Americans, since the latter were generally longer established and, being white, more successfully integrated into mainstream American society, whereas, in the case of the former, there was perceived to be a real threat of enemy sabotage.
Also, van den Berghe’s observation that atomic weapons were used only against Japan is rather misleading. Nuclear weapons could not have been used against Germany, since, by the time of the first test detonation of a nuclear device, Germany had already surrendered. In fact, the Manhattan Project seems to have begun with the Germans very much in mind as a prospective target. (Many of the scientists involved were Jewish, many having fled Nazi-occupied Europe for America, and hence their hostility towards the Nazis, and perhaps Germans in general, is easy to understand.)
Whether it is true that, as van den Berghe claims, atomic bombs were never actually likely to be “dropped over, say, Stuttgart or Dortmund” is a matter of supposition. Certainly, there was great animosity towards the Germans in America, as illustrated by the Morgenthau Plan, which, although ultimately never put into practice, was initially highly influential in directing US policy in Europe and was even supported by President Roosevelt.
On the other hand, Roosevelt’s references to ‘the Nazis, the Fascists, and the Japanese’ might simply reflect the fact that there was no obvious name for the faction or regime in control of Japan during the Second World War, since, unlike in Germany and Italy, no named political party had seized power. I am therefore unconvinced that a great deal can necessarily be read into this.

[11] This was especially so in historical times, before the development of improved technologies of long-distance transportation (ships, aeroplanes) enabled more distantly related populations to come into contact, and hence conflict, with one another (e.g. blacks and whites in the USA and South Africa, South Asians and English in the UK or under the British Raj). Thus, the ancient Indian treatise on statecraft and strategy, the Arthashastra, observed that a ruler’s natural enemies are his immediate neighbours, whereas his next-but-one neighbours, being immediate neighbours of his own immediate neighbours, are his natural allies. This is sometimes credited as the origin of the famous aphorism, ‘The enemy of my enemy is my friend’.

[12] The idea that neighbouring groups tend to be in conflict with one another precisely because, being neighbours, they are also in close contact, and hence competition, with one another, ironically posits almost the exact opposite relationship between ‘contact’ and intergroup relations to that posited by the famous contact theory of mid-twentieth-century social psychology, which held that increased contact between members of different racial and ethnic groups would lead to reduced prejudice and animosity.
This, of course, depends, at least partly, on the nature of the ‘contact’ in question. Contact that involves territorial rivalry, economic competition and war, obviously exacerbates conflict and animosity. In contrast, proponents of contact theory typically had in mind personal contact, rather than, say, the sort of impersonal, but often deadly, contact that occurs between rival belligerent combatants in wartime.
In fact, however, even at the personal level, contact can take many different forms, and often functions to increase inter-ethnic animosity. Hence the famous proverb, ‘familiarity breeds contempt’.
Indeed, social psychologists now concede that only ‘positive’ interactions with members of other groups (e.g. friendship, cooperation, acts of altruism, mutually beneficial trade) reduce animosity and conflict.
In contrast, negative interactions (e.g. being robbed, mugged or attacked by members of another group) serve only to reinforce, exacerbate, or indeed create intergroup animosity. This, of course, reduces the contact hypothesis to little more than common sense: positive experiences with a given group lead to positive perceptions of that group; negative interactions to negative perceptions.
This in turn suggests that stereotypes are often based on real experiences and therefore tend to be true – if not of all individuals, then at least at the statistical, aggregate group level.
I would add that, anecdotally, even positive interactions with members of disdained outgroups do not always shift perceptions regarding the disdained outgroup as a whole. Instead, the individuals with whom one enjoys positive interactions, and even friendships, are often seen as exceptions to the rule (‘one of the good ones’), rather than representative of the demographic to which they belong. Hence the familiar phenomenon of even virulent racists having friendships, and sometimes even heroes, among members of races whom they otherwise generally disdain.

[13] However, van den Berghe acknowledges that racially diverse societies have lived in “relative harmony” in places such as Latin America, where government gives no formal political recognition to racial groups (e.g. racial preferences and quotas for members of certain races) and where the latter do not organize on a racial basis, such that government is, in van den Berghe’s terminology, “non-racial” rather than “multiracial” (p190). Yet this is perhaps a naïvely benign view of race relations in Latin American countries such as Brazil, which, despite the fluidity of racial identity and the lack of clear dividing lines between races, is now viewed by most social scientists not as a model racial democracy but as a racially-stratified ‘pigmentocracy’, where skin tone correlates with social status. It is also arguably an outdated view of race relations in Latin America, because, perhaps due to indirect cultural and political influence emanating from the USA, ethnic groups in much of Latin America (e.g. blacks in Brazil, indigenous populations in Bolivia) increasingly do organize and agitate on a racial basis.

[14] I am careful here not to refer to the dominant culture as that of either a ‘host population’ or a ‘majority population’, or to the subordinate group as a ‘minority group’ or an incoming group of migrants. This is because newly-arrived settlers sometimes successfully assimilate the indigenous populations among whom they settle, and sometimes it is the majority group who ultimately assimilate to the norms and culture of the minority. Thus, for example, the Anglo-Saxons imposed their Germanic language on the indigenous inhabitants of what is today England, and indeed ultimately on most of the inhabitants of Scotland, Wales and Ireland as well, even though they likely never represented a majority of the population even in England, and may have made only a comparatively modest contribution to the ancestry of the people whom we today call ‘English’.

[15] Interestingly, and no doubt controversially, Van den Berghe argues that blacks in the USA do not have any distinctive cultural traits that distinguish them from the white American mainstream, and that their successful assimilation has been prevented only by the fact that, until very recently, whites have refused to ‘assimilate’ them. He is particularly skeptical regarding the notion of any cultural inheritances from Africa, dismissing “the romantic search for survivals of African Culture” as “elusive” (p177).
Indeed, for van den Berghe, the whole notion of a distinct African-American culture is “largely ideological and romantic” (p177). “Afro-Americans are,” he argues, “culturally ‘Anglo-Saxon’” and hence paradoxically ”as Anglo as anyone… in America” (p177). He concludes:

“The case for ‘black culture’ rests… largely on the northern ghetto lumpenproletariat, a class which has no direct counterpart. Even in that group, however, much of the distinctiveness is traceable to their southern, rural origins” (p177).

This reference to “southern rural origins” anticipates Thomas Sowell’s later black redneck hypothesis. Certainly, many aspects of black culture, such as dialect (e.g. the use of terms such as y’all and ain’t and the pronunciation of ‘whores’ as ‘hoes’) and stereotypical fondness for fried chicken, are obvious inheritances from Southern culture rather than distinctively black, let alone an inheritance from Africa. Thus, van den Berghe observes:

“Ghetto lumpenproletariat blacks in Chicago, Detroit and New York may seem to have a distinct subculture of their own compared collectively to their white neighbors, but the black Mississippi sharecropper is not very different, except for his skin pigment, from his white counterparts” (p177).

Any remaining differences not attributable to their Southern origins are, van den Berghe claims, not “African survivals, but adaptation to stigma” (p177). Here, van den Berghe perhaps has in mind the inverse morality, celebration of criminality, and ‘bad nigger’ archetype prevalent in, for example, gangsta rap music. Thus, van den Berghe concludes that: 

“Afro-Americans owe their distinctiveness overwhelmingly to the fact that they have been first enslaved and then stigmatized as a pariah group. They lack a territorial base, the necessary economic, and political resources and the cultural and linguistic pluralism ever to constitute a successful nation. Their pluralism is strictly a structural pluralism inflicted on them by racism. A stigma is hardly an adequate basis for successful nationalism” (p184).

[16] Thus, Elizabeth Warren, a law professor who became a Democratic Party Senator and Presidential candidate, had described herself as ‘American Indian’, and been cited by her university employers as an ethnic minority, in order to benefit from informal affirmative action, despite having only a very small amount of Native American ancestry. Krug and Dolezal, meanwhile, taking advantage of the one drop rule, both identified as African-American: Krug, a history professor and leftist activist, taking advantage of her Middle-Eastern appearance, itself likely a reflection of her Jewish ancestry; Dolezal, formerly a white, blonde girl, who, through the simple expedient of getting a perm and a tan, managed to become an adjunct professor of black studies at a local university and local chapter president of the NAACP in an overwhelmingly white town and state. Whoever said blondes have more fun? 

[17] It has even given rise to a popular new hairstyle among young white males attempting to escape the stigma of whiteness by adopting a racially ambiguous appearance – the mulatto perm.

[18] Interestingly, the examples cited by Paddy Hannam in his piece on the phenomenon, ‘The rise of the race fakers’, also seem to have been female (Hannam 2021). Steve Sailer wisely counsels caution with regard to the findings of this study, noting that anyone willing to lie about their ethnicity on their college application is likely even more willing to lie in an anonymous survey (Sailer 2021; see also Hood 2007). 

[19] Actually, the Northern Ireland settlement is often classed as centripetalist rather than consociationalist. However, the distinction is minimal, the former arrangement representing a modification of the latter designed to encourage cross-community cooperation, and to prevent, or at least mitigate, the institutionalization and ossification of the ethnic divide that is perceived to occur under consociationalism, where constitutional recognition is accorded to the divide between the two (or more) communities. There is, however, little evidence that centripetalism has ever actually been successful in encouraging cross-community cooperation, beyond what is necessitated by the constitutional system, let alone in encouraging assimilation of the rival communities and the depoliticization of ethnic identity. 

[20] The difference in the attitudes of leftists and liberals towards majority-rule in Northern Ireland and South Africa respectively seems to reflect the fact that, whereas in Northern Ireland the majority Protestant population were perceived as the dominant ‘oppressor’ group, the black majority in South Africa were perceived as oppressed.
However, it is hard to see why this would mean that black majority-rule in South Africa would be any less oppressive of South Africa’s white, coloured, and Asian minorities than Protestant majority-rule had been of Catholics in Ulster. On the contrary, precisely because the black majority in South Africa perceive themselves as having been ‘oppressed’ in the past, they are likely to be especially vengeful and to feel justified in seeking recompense for their earlier perceived oppression. This indeed seems to be what is occurring in South Africa, and in Zimbabwe, today. 
Interestingly, van den Berghe, writing in 1981, was remarkably prescient regarding the long-term prospects both for apartheid and for white South Africans. Thus, on the one hand, he predicted: 

“Past experience with decolonization elsewhere in Africa, especially in Zimbabwe (which is in almost every respect a miniature version of South Africa) seems to indicate that the end of white domination is in sight. The only question is whether it will take the form of a prolonged civil war, a negotiated partition or a frantic white exodus. The odds favor, I think, a long escalating war of attrition accompanied by a gradual economic winddown and a growing white emigration” (p174). 

Thus, van den Berghe was right in so far as he predicted the looming end of the apartheid system – though he was hardly unique in making this prediction. However, he was wrong in his predictions as to how this end would come about. On the other hand, with ongoing farm murders and the overtly genocidal rhetoric of populist politicians like Julius Malema, van den Berghe was probably right regarding the long-term prognosis of the white community in South Africa when he observed: 

“Five million whites perched precariously at the tip of a continent inhabited by 400 millions blacks, with no friends in sight. No matter what happens whites will lose heavily, perhaps their very lives, or at least their place in the African sun that they love so much” (p172). 

However, perhaps surprisingly, van den Berghe denies that apartheid was entirely a failure: 

“Although apartheid failed in the end, it was a rational course for the Afrikaners to take, given their collective aims, and probably did postpone the day of reckoning by about 30 years” (p174).

[21] The only other polity that perhaps has a competing claim to representing the world’s model consociationalist democracy is Switzerland. However, van den Berghe emphasizes that Switzerland is very much a special case, the secret of its success being that:

“Switzerland is one of those rare multiethnic states that did not originate either in conquest or in the breakdown of multinational empires” (p194).

It managed to avoid conquest by its richer and more powerful neighbours simply because:

“The Swiss had the dual advantage in resisting outside conquest: favorable terrain and lack of natural resources” (p194).

Also, it provided valuable services to these neighbours, first providing mercenaries to fight in their armed forces and later specialising in the manufacture of watches and what van den Berghe terms “the management of shady foreigners’ ill-gotten capital” (p194).
In reality, however, although divided linguistically and religiously, Switzerland does not, in van den Berghe’s view, constitute true consociationalism, since the country, which originated as a confederation of formerly independent hill tribes, remains highly decentralized, and power is shared, not by ethnic groups, but rather between regional cantons. Therefore, van den Berghe concludes:

“The ethnic diversity of Switzerland is only incidental to the federalism, it does not constitute the basis for it” (p196-7).

In addition, most cantons, where much of the real power lies, are themselves relatively monoethnic and monolingual, at least as compared to the country as a whole.

[22] Indeed, since the Slavs of Eastern Europe were the last group in Europe to be converted to Christianity, and it was forbidden by Papal decree to enslave fellow-Christians or sell Christian slaves to non-Christians (i.e. Muslims, among whom there was a great demand for European slaves), Slavs were preferentially targeted by Christians for enslavement, and even those non-Slavic people who were enslaved or sold into bondage were often falsely described as Slavs in order to justify their enslavement and sale to Muslim slaveholders. The Slavs, for geographic reasons, were also vulnerable to capture and enslavement directly by the Muslims themselves.

[23] Another reason that it proved difficult to enslave the indigenous inhabitants of the Americas, according to van den Berghe, is their lifestyle prior to colonization. Prior to the arrival of European colonists, the indigenous peoples in many parts of the Americas were still relatively primitive, many subsisting, in whole or in part, as nomadic or semi-nomadic hunter-gatherers. This meant, not only that they had low population densities and were hence few in number and vulnerable to infectious diseases introduced by European colonizers, but also that:

“Such aborigines as existed were mobile, elusive and difficult to control. They typically had a vast hinterland into which they could escape labor exploitation” (p93).

Thus, van den Berghe reports, when, in what is today Brazil, Portuguese colonists led raiding expeditions in an attempt to capture and enslave natives, so many of the latter “escaped, committed suicide or died of disease” that the attempt was soon abandoned (p93).
Perhaps more interestingly, van den Berghe also argues that another reason that it proved difficult to enslave nomadic peoples was that:

“Nomads typically are unused to being exploited since their own societies are often relatively egalitarian, ill-adapted to steady hard labor and lacking in the skills useful to colonial exploiters (as cultivators, for example). They are, in short, lovers of freedom and make very poor colonial underlings… They are regarded by their conquerors as lazy, shiftless and unreliable, as an obstacle to development and as a nuisance to be displaced” (p93).

In contrast, sub-Saharan Africa is usually stereotyped, not entirely inaccurately, as technologically backward compared to other cultures, and this very backwardness is assumed to have facilitated Africans’ enslavement. In fact, van den Berghe explains, it was the relatively advanced social organization of West African societies that permitted the transatlantic slave trade to be so successful.

“Contrary to general opinion, Africans were so successfully enslaved, not because they belonged to primitive cultures, but because they had a complex enough technology and social organization to sustain heavy losses of manpower without appreciable depopulation. Even the heavy slaving of the 18th century made only a slight impact on the demography of West Africa. The most heavily raided areas are still today among the most densely populated” (p126).

[24] Although this review is based on the 1987 edition, The Ethnic Phenomenon was first published in 1981, whereas Orlando Patterson’s Slavery and Social Death came out just a year later in 1982.

[25] In the antebellum American South, much is made of the practice of slave-owners selling the spouses and offspring of their slaves to other masters, thereby breaking up families. On the basis of van den Berghe’s arguments, this might actually have represented an effective means of preventing slaves from putting down roots and developing families and slave communities, and might therefore have helped perpetuate the institution of slavery.
However, even assuming that such practices would indeed have had this effect, it is doubtful that there was any such deliberate long-term policy among slaveholders to break up families in this way. On the contrary, van den Berghe reports:

“It is not true that slave owners systematically broke up slave couples… On the contrary, it was in their interest to foster stable slave families for the sake of morale, and to discourage escape” (p133). 

Thus, though it certainly occurred, and may indeed have been tragic where it did, slaveholders generally preferred to keep slave families intact, precisely because, in forming families, slaves would indeed ‘put down roots’ and hence be less likely to try to escape, lest they leave family members behind to face the vengeance of their former owners alone, without whatever protection and support they might otherwise have been in a position to offer. The threat of breaking up families, however, surely remained a useful tool in the arsenal of slaveholders for maintaining control over slaves. 

[26] While acknowledging, and indeed emphasizing, the virulence of western racialism, van den Berghe, bemoaning the intrusion of “moralism” (and, by extension, ethnomasochism) into scholarship, has little time for the notion that western slavery was intrinsically more malign than forms of slavery practised in other parts of the world or at other times in history (p116). This, he dismisses as “the guilt ascription game: whose slavery was worse?” (p128). Male slaves in the Islamic world, for example, were routinely castrated before being sold (p117). 
Thus, while it is true that slaves in the American South had unusually low rates of manumission (i.e. the granting of freedom to slaves), they also enjoyed surprisingly high standards of living, were well-fed and enjoyed long lives. Indeed, not only did slaves in the American South enjoy standards of living superior to those of most other slave populations, they even enjoyed, by some measures, higher standards of living than many non-slave populations, including industrial workers in Europe and the Northern United States, and poor white Southerners, during the same time period (The End of Racism: p88-91; see also Time on the Cross: the Economics of American Slavery). 
Ironically, living standards were so high for the very same reason that rates of manumission were so low – namely, slaves, especially after the abolition and suppression of the transatlantic slave-trade (but also even before then due to the costs of transportation during the middle passage) were an expensive commodity. Masters therefore fully intended to get their money’s worth out of their slaves, not only by rarely granting them their freedom, but also ensuring that they lived a long and healthy life.
In this endeavour, they were surprisingly successful. Thus, van den Berghe reports, in the fifty years that followed the prohibition on the import of new slaves into the USA in 1808, the black population of the USA nevertheless more than tripled (p128). In short, slaves may have been property, but they were valuable property – and slaveholders made every effort to protect their investment.
Ironically, therefore, indentured servants (themselves, in America, often white, and later, in Africa, usually South or East Asian) were, during the period of their indenture, often worked harder, and forced to live in worse conditions, than were actual slaves. This was because, since they were indentured for only a set number of years before they would be free, there was less incentive on the part of their owners to ensure that they lived a long and healthy life.   
Van den Berghe concludes: 

“The blanket ascription of collective racial guilt for slavery to ‘whites’ that is so dear to many liberal social scientists is itself a product of the racist mentality produced by slavery. It takes a racist to ascribe causality and guilt to racial categories” (p130). 

Indeed, as Dinesh D’Souza in The End of Racism, and Thomas Sowell in his essay ‘The Real History of Slavery’ included in the collection Black Rednecks and White Liberals, both emphasize, whereas all civilizations have practised slavery, what was unique about western civilization was that it was the first civilization ever known to have abolished slavery (at, as it ultimately turned out, no little economic cost to itself).
Therefore, even if liberals and leftists do insist that we play what van den Berghe disparagingly calls “the guilt ascription game”, then white westerners actually come out rather well in the comparison. 

[27] Indeed, in most cultures and throughout most of history, the use of female slaves as concubines was not only widespread but also perfectly socially acceptable. For example, in the Islamic world, the practice was entirely open and accepted, not only attracting literally no censure or criticism in the wider society or culture, but also receiving explicit prophetic sanction in the Quran. For this reason, in the Islamic world, female slaves tended to be in greater demand than males, and usually commanded a higher price.
In contrast, most slaves transported to the Americas were male, since males were more useful for hard, intensive agricultural labour and, in puritanical North America, sexual contact between slaveholder and slave was very much frowned upon, even though it certainly occurred. Thus, van den Berghe cynically observes:

“Concubinage with slaves was somewhat more clandestine and hypocritical in the English and Dutch colonies than in the Spanish, Portuguese and French colonies where it was brazen, but there is no evidence that the actual incidence of interbreeding was any higher in the Catholic countries” (p132). 

Partial corroboration for this claim is provided by historian Eugene Genovese, who, in his book Roll, Jordan, Roll: The World the Slaves Made, reports that, in New Orleans slave markets:

“First-class blacksmiths were being sold for $2,500 and prime field hands for about $1,800, but a particularly beautiful girl or young woman might bring $5,000” (Roll, Jordan, Roll: p416).

[28] Actually, exploitation can still be an adaptive strategy, even in respect of close biological relatives. This depends on the precise relative gains and losses in fitness to both the exploiter (the slave owner) and his victim (the slave), and on their coefficient of relatedness, in accordance with Hamilton’s rule. Thus, it is possible that a slaveholder’s genes may benefit more from continuing to exploit his slaves as slaves than from freeing them, even if the latter are also his kin. Possibly the best strategy will often be a compromise: say, keeping your slave-kin in bondage, but treating them rather better than other, non-related slaves, or freeing them on your death in your will. 
Of course, this is not to suggest that individual slaveholders consciously (or subconsciously) perform such a calculation, nor even that their actual behaviour is usually adaptive. Slaveholding is likely an ‘environmental novelty’ to which we have yet to evolve adaptive responses.
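The underlying logic can be made explicit with Hamilton’s rule. The following formalization is my own illustrative gloss, not van den Berghe’s: let C denote the fitness cost to the owner of freeing a slave (the value of the labour forgone), B the fitness benefit to the slave of being freed, and r the coefficient of relatedness between them.

```latex
% Hamilton's rule: a costly act towards a relative is favoured by kin
% selection when the relatedness-weighted benefit exceeds the cost.
% Applied, purely illustratively, to manumission:
%   C = fitness cost to the owner of freeing the slave,
%   B = fitness benefit to the slave of being freed,
%   r = coefficient of relatedness between owner and slave.
\[
  rB > C \quad \Longrightarrow \quad \text{manumission favoured},
\]
\[
  rB < C \quad \Longrightarrow \quad \text{continued exploitation favoured}.
\]
```

On this sketch, an unrelated slave (r = 0) would never be freed on kin-selection grounds alone, whereas, for an owner’s own child (r = 1/2), manumission is favoured whenever the slave’s fitness gain exceeds twice the owner’s loss, which is consistent with the observation, discussed below, that slaveholders sometimes freed their own offspring.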

[29] Others suggest that Thomas Jefferson himself did not father any offspring with Sally Hemings and that the more likely father is Jefferson’s wayward younger brother Randolph, who would, of course, have shared the same Y chromosome as his elder brother. For present purposes, this is not especially important, since, either way, Hemings’s offspring would have been blood relatives of Jefferson to some degree, hence likely influencing his decision to free them or permit them to escape.

[30] Quite how this destruction can be expected to have manifested itself is not spelt out by van den Berghe. Perhaps, with each passing generation, as slaves became more and more closely biologically related to their masters, more and more slaves would have been freed until there were simply no more left. Alternatively, perhaps, as slaves and slaveowners increasingly became biological kin to one another, the institution of slavery would gradually have become less oppressive and exploitative until ultimately it ceased to constitute true slavery at all. At any rate, in the Southern United States this (supposed) process was forestalled by the American Civil War and Emancipation Proclamation, and neither does it appear to have occurred in Latin America.  

[31] Another area of conflict between Marxism and Darwinism is the former’s assumption that all conflict and exploitation will somehow end in a posited future communist utopia. Curiously, although healthily cynical about exploitation under Soviet-style communism (p60), van den Berghe describes himself as an anarchist (van den Berghe 2005). However, anarchism seems even more hopelessly utopian than communism, given humanity’s innate sociality and desire to exploit reproductive competitors. In short, a Hobbesian state of nature is surely no one’s utopia (except perhaps Ragnar Redbeard’s).

[32] The idea that there is “ambivalence in relations between black men and women in America” seems anecdotally plausible, given, for example, the delightfully misogynistic lyrics found in much African-American rap music. However, it is difficult to see how this could be a legacy of the plantation era, when everyone alive today is several generations removed from that era and living in a very different sexual and racial milieu. Today, black men do rather better in the mating marketplace than do black women, with black men being much more likely to marry non-black women than black women are to marry non-black men, suggesting that black men have a larger dating pool from which to choose (Sailer 1997; Fryer 2007).
Moreover, black men and women in America today are, of course, the descendants of both men and women. Therefore, even if black women did have a better time of it than black men in the plantation era, how would black male resentment be passed down the generations to black men today, especially given that most black men are today raised primarily by their mothers in single-parent homes and often have little or no contact with their fathers?

[33] Indeed, being perceived as attractive, or at least not as ugly, seems to be rather more important to most women than does being perceived as intelligent. Therefore, the question of race differences in attractiveness is seemingly almost as controversial as that of race differences in intelligence. This, then, leads to the delightfully sexist Sailer’s first law of female journalism, which posits that:

“The most heartfelt articles by female journalists tend to be demands that social values be overturned in order that, Come the Revolution, the journalist herself will be considered hotter-looking.”

[34] A popular alt-right meme has it that there are literally no white-on-black rapes. This is, of course, untrue, and reflects a misreading of a table in a US Department of Justice report that actually involved only a small sample. In fact, the government does not currently release data on the prevalence of interracial rape. Nevertheless, the US Department of Justice report (mis)cited by some white nationalists does indeed suggest that black-on-white rape is much more common than white-on-black rape in the contemporary USA, a conclusion corroborated by copious other data (e.g. Lebeau 1985).
Thus, in his book Paved with Good Intentions, Jared Taylor reports:

“In a 1974 study in Denver, 40 percent of all rapes were of whites by blacks, and not one case of white-on-black rape was found. In general, through the 1970s, black-on-white rape was at least ten times more common than white-on-black rape… In 1988 there were 9,406 cases of black-on-white rape and fewer than ten cases of white-on-black rape. Another researcher concludes that in 1989, blacks were three or four times more likely to commit rape than whites and that black men raped white women thirty times as often as white men raped black women” (Paved with Good Intentions: p93).

Indeed, the authors of one recent textbook on criminology even claim that: 

“Some researchers have suggested, because of the frequency with which African Americans select white victims (about 55 percent of the time), it [rape] could be considered an interracial crime” (Criminology: A Global Perspective: p544). 

Similarly, in the US prison system, where male-male rape is endemic, such assaults disproportionately involve non-white assaults on white inmates, as discussed in the Human Rights Watch report, No Escape: Male Rape in US Prisons.

References

Brigandt (2001) The homeopathy of kin selection: an evaluation of van den Berghe’s sociobiological approach to ethnicity. Politics and the Life Sciences 20: 203-215. 
Feinman & Gill (1977) Sex differences in physical attractiveness preferences, Journal of Social Psychology 105(1): 43-52. 
Frost (2008) Sexual selection and human geographic variation. Special Issue: Proceedings of the ND Annual Meeting of the Northeastern Evolutionary Psychology Society. Journal of Social, Evolutionary, and Cultural Psychology, 2(4): 169-191 
Fryer (2007) Guess Who’s Been Coming to Dinner? Trends in Interracial Marriage over the 20th Century, Journal of Economic Perspectives 21(2), pp. 71-90 
Hannam (2021) The rise of the race fakers. Spiked-Online.com, 5 November. 
Hamilton (1964) The genetical evolution of social behaviour I and II, Journal of Theoretical Biology 7:1-16,17-52. 
Hood (2017) The privilege no one wants, American Renaissance, December 11.
Johnson (1986) Kin selection, socialization and patriotism. Politics and the Life Sciences 4(2): 127-154. 
Johnson (1987) In the Name of the Fatherland: An Analysis of Kin Term Usage in Patriotic Speech and Literature. International Political Science Review 8(2): 165-174.
Johnson, Ratwik and Sawyer (1987) The evocative significance of kin terms in patriotic speech pp157-174 in Reynolds, Falger and Vine (eds) The Sociobiology of Ethnocentrism: Evolutionary Dimensions of Xenophobia, Discrimination, Racism, and Nationalism (London: Croom Helm). 
Lebeau (1985) Rape and Racial Patterns. Journal of Offender Counseling Services Rehabilitation, 9(1- 2): 125-148 
Lewis (2011) Who is the fairest of them all? Race, attractiveness and skin color sexual dimorphism. Personality & Individual Differences 50(2): 159-162. 
Lewis (2012) A Facial Attractiveness Account of Gender Asymmetries in Interracial Marriage PLoS One. 2012; 7(2): e31703. 
Lind et al (2007) Elevated male European and female African contributions to the genomes of African American individuals. Human Genetics 120(5) 713-722 
Macdonald (2001) An integrative evolutionary perspective on ethnicity. Politics & the Life Sciences 20(1): 67-8.
Rushton (1998a). Genetic similarity theory, ethnocentrism, and group selection. In I. Eibl-Eibesfeldt & F. K. Salter (Eds.), Indoctrinability, Warfare, and Ideology: Evolutionary perspectives (pp. 369-388). Oxford: Berghahn Books. 
Rushton (1998b). Genetic similarity theory and the roots of ethnic conflict. Journal of Social, Political, and Economic Studies, 23, 477-486. 
Rushton (2005) Ethnic Nationalism, Evolutionary Psychology and Genetic Similarity Theory, Nations and Nationalism 11(4): 489-507.
Sailer (1997) Is love colorblind? National Review, July 14. 
Sailer (2021) Do 48% of White Male College Applicants Lie About Their Race? Interesting, if It Replicates. Unz Review, October 21. 
Salmon (1998) The Evocative Nature of Kin Terminology in Political Rhetoric. Politics & the Life Sciences, 17(1): 51-57.   
Salter (2000) A Defense and Extension of Pierre van den Berghe’s Theory of Ethnic Nepotism. In James, P. and Goetze, D. (Eds.)  Evolutionary Theory and Ethnic Conflict (Praeger Studies on Ethnic and National Identities in Politics) (Westport, Connecticut: Greenwood Press). 
Salter (2002) Estimating Ethnic Genetic Interests: Is It Adaptive to Resist Replacement Migration? Population & Environment 24(2): 111–140. 
Salter (2008) Misunderstandings of Kin Selection and the Delay in Quantifying Ethnic Kinship, Mankind Quarterly 48(3): 311–344. 
Tooby & Cosmides (1989) Kin selection, genic selection and information dependent strategies Behavioral and Brain Sciences 12(3): 542-544 
Van den Berghe (2005) Review of On Genetic Interests: Family, Ethny and Humanity in the Age of Mass Migration by Frank Salter Nations and Nationalism 11(1) 161-177 
Van den Berghe & Frost (1986) Skin color preference, sexual dimorphism, and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9: 87-113.
Whitney G (1999) The Biological Reality of Race. American Renaissance, October 1999.

Kevin Macdonald’s ‘Culture of Critique’: A Fundamentally Flawed Theory of Twentieth Century Jewish Intellectual and Political Activism

Kevin Macdonald, The Culture of Critique: An Evolutionary Analysis of Jewish Involvement in Twentieth Century Intellectual and Political Movements (1st Books Library 2002).

In A People That Shall Dwell Alone (which I have reviewed here), psychologist Kevin Macdonald conceptualized Judaism as a group evolutionary strategy that functioned to promote the survival and prospering of the Jewish people and religion in diaspora. 

In ‘Culture of Critique’, its more famous (and controversial) sequel, Macdonald purports to extend this theory to the behaviour of secular twentieth-century intellectuals of Jewish ancestry.

Here, however, he encounters an immediate and, in my view, ultimately fatal problem. 

For, in A People That Shall Dwell Alone (PTSA) (reviewed here), Macdonald was emphatic that his theory of Judaism was a theory of cultural, not biological, group selection.

In other words, it is a strategy that is encoded, not in Jewish genes, but rather in the teachings of Judaism, the religion.

It is therefore a theory, not of genetics, but rather of memetics, in accordance with the idea of ‘memes’ as units of cultural selection analogous to genes, as first proposed by Richard Dawkins in The Selfish Gene (which I have reviewed here).[1]

Yet Macdonald envisages even secular Jews as continuing to pursue this so-called group evolutionary strategy, even though they have long since abandoned the religion in whose precepts this cultural group strategy is ostensibly contained, or, in some cases, having been raised in secular homes, were never even exposed to it in the first place.[2]

Presumably Macdonald is not arguing that these intellectuals, many of them militant atheists (e.g. Marx and Freud), are actually secret practitioners of Judaism, engaging in what Macdonald somewhat conspiratorially terms crypsis.

How then is this possible? 

Group Commitment 

Macdonald never really directly addresses, or even directly acknowledges, this fundamental problem with his theory. 

The closest he comes to addressing it is by arguing that, since Jewish collectivism and ethnocentrism are, at least according to Macdonald, partly innate, secular Jews continued to pursue ethnocentric ends even after abandoning the religion of their forebears. 

Moreover, just as Jewish ethnocentrism is innate, so, Macdonald argues, is Jewish intelligence and other aspects of the typical Jewish personality profile. Thus, Macdonald claims that the ethnic Jews drawn to movements such as psychoanalysis and Marxism:

“Retained their high IQ, their ambitiousness, their persistence, their work ethic, and their ability to organize and participate in cohesive highly committed groups” (p4).

These traits, he argues, gave them a key advantage in competition with other intellectual currents. 

The success of these intellectual movements (i.e. Freudianism, Boasian anthropology, Marxism, the Frankfurt School) reflected, then, not their (decidedly modest) explanatory power, but rather the intense commitment and dedication of their adherents to the movement and ideology. 

Thus, just as Macdonald attributes the economic success of Jews to their collectivism and hence their tendency to operate price-fixing trade cartels and favour their co-ethnics in commercial operations, so, he argues, the success of Jewish intellectual movements reflects the commitment and solidarity of their members:

“Cohesive groups outcompete individualist strategies. The fundamental truth of this axiom has been central to the success of Judaism throughout its history whether in business alliances and trading monopolies or in the intellectual and political movements discussed here” (p5-6; see also p209-10).

Thus, Macdonald emphasizes the cult-like qualities of psychoanalysis, Marxism and Boasian anthropology, whose members evince a fanatical quasi-religious devotion to the movement, its ideology and leaders. 

He argues that these movements recreated the structure of traditional Jewish religious groups in Eastern European shtetlach, being grouped around a charismatic leader (a rebbe) who was the object of reverence and veneration, and against whom no dissent was tolerated on pain of excommunication from the group (p225-6).

Thus, according to Macdonald, ideologies such as Marxism, psychoanalysis and the ‘standard social science model’ (SSSM) in psychology, sociology and anthropology take on many features of traditional religion, including the tendency to persecute heresy.

This does indeed seem to represent an accurate model of how the psychoanalytic movement operated under the dictatorial leadership of Freud. It is also an accurate model of how the Soviet Union operated under communism, with deviationism relentlessly persecuted and suppressed in successive purges.

Similarly, among social scientists, biological approaches to understanding human behaviour, such as sociobiology, evolutionary psychology and behavioural genetics, and especially theories of sex and race differences (and social class differences), for example in intelligence, have aroused opposition among sociologists and anthropologists that often borders on persecution and witch-hunts.

However, such quasi-religious political cults are hardly exclusive to Jews.

On the contrary, National Socialism in Germany evinced a very similar structure, being organized around a charismatic leader (Hitler), who elicited reverence and whose word was law (the so-called Führerprinzip).

But Nazism was, of course, a movement very much composed of and led by white European Gentiles. 

To this, Macdonald would, I suspect, respond by quoting from the previous installment in the Culture of Critique series, where he argued: 

“Powerful group strategies tend to beget opposing group strategies that in many ways provide a mirror image of the group which they combat” (Separation and Its Discontents: pxxxvii).

Thus, in Separation and its Discontents, Macdonald provocatively contends: 

“National Socialist ideology was a mirror image of traditional Jewish ideology… [Both shared] a strong emphasis on racial purity and on the primacy of group ethnic interests rather than individual interests… [and] were greatly concerned with eugenics” (Separation and Its Discontents: p194).

On this view, Judaism provided, if not necessarily the conscious model for Nazism, then at least its ultimate catalyst. Nazism was, on this view, ultimately a defensive, or at least reactive, strategy.[3]

In other words, Macdonald suggests cult-like movements in Europe are mostly either manifestations of Judaism as a group evolutionary strategy, or reactions against Judaism as a group evolutionary strategy. 

This strikes me as doubtful, and as according the Jews an importance in determining the course of European history which, for all their gargantuan and vastly disproportionate contributions to European culture, science and civilization, they do not wholly warrant.

Instead, I believe there is a pan-human tendency to form such fanatical cult-like groups led by charismatic leaders. 

Indeed, in Separation and Its Discontents, Macdonald himself acknowledges that there is a pan-human proclivity to form such groups but insists that “Jews are higher on average in this system” than are other Europeans (Separation and Its Discontents: p31). 

At any rate, Macdonald’s claim at least has the advantage that it leads to testable predictions, namely that: 

(1) Few such cult-like movements existed in Europe before the settling of Jews, or in regions where Jews were largely absent; and

(2) All (or most) such movements were either:

(a) Jewish movements, led and dominated by Jews; or
(b) Anti-Semitic movements opposed to Jews.

As noted above, I doubt these predictions can be borne out. However, interestingly, in Separation and Its Discontents, Macdonald does cite two studies that supposedly found that Jews were indeed “overrepresented among [members of] non-Jewish religious cults” (Separation and Its Discontents: p24).[4]

At any rate, a final problem with Macdonald’s theory is that, even if the Jewish tendency towards ethnocentrism and collectivism is indeed partly innate, this surely involves a disposition towards, not a specifically Jewish ethnocentrism, but rather an ethnocentrism in respect of whatever group the person in question comes to identify as. 

Thus, since many Jews are raised in secular households, often not even especially aware of their Jewish ancestry, we would hence expect Jewish ethnocentrism to manifest itself in disproportionate numbers of Jews joining the white nationalist movement![5]

Debunking Marx, Boas and Freud 

Undoubtedly the strongest part of Macdonald’s book is his debunking of the scientific merits of such intellectual paradigms as Boasian anthropology, the standard social science model and Freudian psychoanalysis.

Macdonald fails to convince me that these ideologies and belief-systems function as part of a Jewish ‘group evolutionary strategy’ (read: Jewish conspiracy) to subvert Western culture. He does, however, amply demonstrate that they are indeed pseudo-scientific nonsense. 

Yet, for Macdonald, the very scientific weakness of such paradigms as Marxism, Freudian psychoanalysis and the Standard Social Science Model is positive evidence that they serve a group evolutionary function, as otherwise their success in attracting adherents is difficult to explain. 

Thus, he writes: 

“The scientific weakness of these movements is evidence of their group-strategic function” (pvi).

Here, however, Macdonald goes too far. 

The scientific weakness of the theories and movements in question does indeed suggest that the reason for their popularity and success in attracting adherents must reflect something other than their explanatory power. However, he is wrong in presupposing this something is necessarily their supposed “group strategic function” in ethnic competition.[6]

Therefore, Macdonald’s critique of the theoretical and scientific merits of the intellectual movements discussed is not only the best part of his book, but also, in principle, entirely separable from his theory of the role of these movements in promoting an ostensible Jewish group evolutionary strategy. 

Take, for example, his critiques of Boasian anthropology and Freudian psychoanalysis, which are, of those discussed by Macdonald, the two intellectual movements with which I am most familiar and hence with respect to which I am most qualified to assess the merits of his critique.[7]

In assessing the scientific merits of Boasian cultural anthropology, Macdonald concludes that it was not so much a science, nor even a pseudo-science, as an outright rejection of science:

“An important technique of the Boasian school was to cast doubt on general theories of human evolution, such as those implying developmental sequences, by emphasizing the vast diversity and chaotic minutiae of human behavior, as well as by emphasizing the relativism of standards of cultural evaluation. The Boasians argued that general theories of cultural evolution must await a detailed cataloguing of cultural diversity, but in fact no general theories emerged from this body of research in the ensuing half-century of its dominance of the profession… Because of its rejection of fundamental scientific activities such as generalization and classification, Boasian anthropology may thus be characterized more as an anti-theory than as a theory” (p24).

In other words, the Boasian paradigm involves, and seeks to make a perverse virtue out of, throwing one’s arms up in despair and declaring that human behaviour is simply too complex, and too culturally variable, to permit the formulation of any sort of general theory. 

This reminds me of David Buss’s critique of the notion that ‘culture’ is itself an adequate explanation for cultural differences, another idea very much derived from post-Boasian American anthropology. Buss writes: 

Patterns of local within-group similarity and between-group differences are best regarded as phenomena that require explanation. Transforming these differences into an autonomous causal entity called ‘culture’ confuses the phenomena that require explanation with a proper explanation of the phenomena. Attributing such phenomena to culture provides no more explanatory power than attributing them to God, consciousness, learning, socialization, or even evolution, unless the causal processes subsumed by these labels are properly described. Labels for phenomena are not proper causal explanations for them” (Evolutionary Psychology: The New Science of the Mind: p404). 

Accepting that no society is more advanced than another, that there is no general direction to cultural change and that all differences between societies and cultures are purely random is essentially to accept the null hypothesis as true and to abandon, or rule out a priori, any attempt to generate a causal framework for explaining cultural differences.

It is not science, but a form of obscurantism in direct opposition to science. 

Jews and the Left 

Another interesting element of Macdonald’s work is his summary of just how Jewish-dominated these ostensibly Jewish intellectual movements really were.

This is something of a revelation precisely because this is a topic politely passed over in most mainstream histories of, say, revolutionary communism in Eastern Europe and America, or the psychoanalytic movement, both those sympathetic, and those hostile, to the movements under discussion. 

Among radical leftists, the Jewish overrepresentation is especially striking in the USA, probably because of both the relatively high numbers of Jews resident in the USA and the very low levels of support for socialism among non-Jewish Americans throughout most of the twentieth century.

Thus, Macdonald reports that: 

“From 1921 to 1961, Jews constituted 33.5 percent of the Central Committee members [of the Communist Party USA] and the representation of Jews was often above 40 percent (Klehr 1978, 46). Jews were the only native-born ethnic group from which the party was able to recruit. Glazer (1969, 129) states that at least half of the CPUSA membership of around 50,000 were Jews into the 1950s” (p72).

Similarly, Macdonald reports: 

“In the 1930s Jews ‘constituted a substantial majority of known members of the Soviet underground in the United States’ and almost half the individuals prosecuted under the Smith Act of 1947 (Rothman & Lichter 1982)” (p74).

Likewise, with respect to the so-called new left and 1960s student radicalism, Macdonald reports: 

“Flacks (1967: 64) found that 45% of students involved in a protest at the University of Chicago were Jewish… Jews constituted 80% of the students signing a petition to end the ROTC at Harvard and 30-50% of the Students for a Democratic Society – the central organization for radical students. Adelson (1972) found that 90 percent of his sample of radical students at the University of Michigan were Jewish… Braungart (1979) found that 43% of the SDS had at least one Jewish parent and an additional 20 percent had no religious affiliation. The latter are most likely to be predominantly Jewish: Rothman and Lichter (1982: 82) found that the ‘overwhelming majority of radical students who claimed that their parents were atheists had Jewish backgrounds’” (p76-7).

In short, it appears not unreasonable to claim that the radical left in twentieth century America, which never gained significant electoral support but nevertheless had a substantial social, cultural, academic and indirect political influence on American society, would scarcely have existed were it not for the presence of Jewish radicals.

However, in this respect, the USA was quite exceptional, due both to the relatively large numbers of Jews resident in the country, and to the almost complete lack of support for radical leftism among non-Jewish Americans until very recently.[8]

Jewish Dominated Sciences – and Pseudo-Sciences

Just as Jews numerically dominated the American radical left, so, Macdonald reveals, they dominated the psychoanalytic movement. Thus, we learn from Macdonald’s account that not only were the leaders of the psychoanalytic movement, and individual psychoanalysts, disproportionately Jewish, but so were their clients:

“Jews have been vastly overrepresented as patients seeking psychoanalytic treatments, accounting for 60 percent of the applicants to psychoanalytic clinics in the 1960s” (p133).

Indeed, Macdonald reports that there was: 

“A Jewish subculture in New York in mid-twentieth-century America in which psychoanalysis was a central cultural institution that filled some of the same functions as traditional religious affiliation” (p133).

This was that odd, and now fast-disappearing, New York subculture, familiar to most of us only through watching Woody Allen movies, where visiting a psychoanalyst was a regular weekly ritual analogous to attending a church or synagogue.

Yet, as noted above, the overrepresentation of Jews in the psychoanalytic movement is an aspect of Freudianism that is usually downplayed in most discussions or histories of the psychoanalytic movement, including those hostile to psychoanalysis. 

For example, Hans Eysenck, in his Decline and Fall of the Freudian Empire, mentions the allegation that psychoanalysis was a ‘Jewish science’, only to dismiss it as irrelevant to the question of the substantive merits of psychoanalysis as a theoretical paradigm or method of treatment (Decline and Fall of the Freudian Empire: p12).

Yet, here, Eysenck is right. Whether an intellectual movement is Jewish-dominated, or even part of a ‘Jewish group evolutionary strategy’, is ultimately irrelevant to whether its claims are true and represent a useful and empirically-productive way of viewing the world.[9]

For example, many German National Socialists dismissed theoretical physics as a ‘Jewish science’, and, given the overrepresentation of Jews among leading theoretical physicists in Germany and elsewhere, it was indeed a disproportionately Jewish-dominated field.

However, whereas psychoanalysis was indeed a pseudoscience, theoretical physics certainly was not. 

Indeed, the fact that so many leading theoretical physicists were forced to flee Germany and German-occupied territories in the mid-twentieth century on account of their Jewishness, together with the National Socialist regime’s a priori dismissal of theoretical physics as a discredited Jewish science, has even been implicated as a key factor in the Nazis’ ultimate defeat, as it arguably led to their failure to develop an atom bomb.

Cofnas’s Default Hypothesis 

In a recent critique of Macdonald’s work, Nathan Cofnas (2018) argues that Jews are in fact overrepresented, not only in the political and intellectual movements discussed by Macdonald, but indeed in all intellectual and political movements that are not overtly antisemitic

Here, Cofnas is surely right. Whatever your politics (short of Nazism), you are likely to count Jews among your intellectual heroes. 

For example, Karl Popper was ethnically Jewish, yet was also a leading critic of both psychoanalysis and Marxism, dismissing both as quintessentially unfalsifiable pseudo-sciences. Likewise, Robert Trivers and David Barash were pioneering early sociobiologists, but also of Jewish ethnicity.

Indeed, Macdonald, to his credit, himself helpfully lists several prominent Jewish sociobiologists and behavior geneticists, acknowledging: 

“Several Jews have been prominent contributors to evolutionary thinking as it applies to humans as well as human behavioral genetics, including Daniel G. Freedman, Richard Herrnstein, Seymour Itzkoff, Irwin Silverman, Nancy Segal, Lionel Tiger and Glenn Weisfeld” (p39).

Indeed, ethnic Jews are even seemingly overrepresented among race theorists

These include Richard Herrnstein, co-author of The Bell Curve (which I have reviewed here); Stanley Garn, the author of Human Races and co-author, with Carleton Coon, of Races: A Study of the Problems of Race Formation in Man; Nathaniel Weyl, the author of, among other racialist works, The Geography of Intellect; Daniel Freedman, the author of some controversial and, among racialists, seminal, studies on race differences in behaviour among newborn babies; and Michael Levin, author of Why Race Matters.[10]

Likewise, the most prominent champions of hereditarianism with regard to race differences in intelligence in the mid- to late twentieth century, namely Hans Eysenck and Arthur Jensen, were half-Jewish and a quarter-Jewish respectively.[11]

Meanwhile the most prominent contemporary populariser and champion of hereditarianism, including with respect to race differences, is Steven Pinker, who is also ethnically Jewish.[12]

Indeed, Nathan Cofnas is himself Jewish and likewise a staunch hereditarian

Also, although not a racial theorist as such, it is perhaps worth noting that the infamous nineteenth-century ‘positivist criminologist’ Cesare Lombroso, who notoriously argued that criminals were an atavistic throwback to an earlier stage in human evolution and remains a bête noire of radical environmental determinists, was also of Jewish background, albeit Sephardic rather than Ashkenazi.

On the other hand, however, the first five opponents of sociobiology I could name offhand when writing this review (namely, Stephen Jay Gould, Richard Lewontin, Leon Kamin, Steven Rose and Marshall Sahlins) were all ethnic Jews to a man.[13]

In short, if ethnic Jews are vastly overrepresented among malignly influential purveyors of obscurantist pseudoscience, they are also vastly overrepresented among important contributors to real science, including in controversial areas such as the study of sex differences and race differences in intelligence and behaviour

Indeed, if there is a national or ethnic group disproportionately responsible for obscurantist, faddish, anti-scientific and just plain bad (but nevertheless highly influential) ideas in philosophy, social science, and the humanities, then I would say that it is not Jewish intellectuals, but rather French intellectuals.[14]

Are we then to posit that these intellectuals were somehow secretly advancing a ‘Group Evolutionary Strategy’ to advance the interests of France?

Why Are Jews Overrepresented Among Leading Intellectuals? 

Cofnas (2018), for his part, attributes the overrepresentation of Jews among leading intellectuals to: 

1) The higher average IQ of Jews; and
2) The disproportionate concentration of Jews in urban areas.

In explaining the overrepresentation of Jews by reference to just two factors, Cofnas’s theory is certainly simpler and more parsimonious than Macdonald’s theory of partly unconscious group strategizing, which comes close to being a conspiracy theory. 

Indeed, if one were to go through passages of Macdonald’s work replacing the words “Jewish Group Evolutionary Strategy” with “Jewish conspiracy”, it would read much like a traditional antisemitic conspiracy theory. 

However, I suspect Macdonald is right that a further factor is the tendency of Jews to promote the work of their co-ethnics. Thus, he cites one interesting study which used surname analysis to suggest that academic researchers with stereotypically Jewish surnames were more likely to both collaborate with, and cite the work of, other academic researchers with stereotypically Jewish surnames, as compared to those with non-Jewish surnames (p210; Greenwald & Schuh 1994). 

This, of course, reflects an ethnocentric preference. However, to admit as much is not necessarily to agree with Macdonald that Jews are any more ethnocentric than Gentile Europeans, but rather to recognize that ethnocentrism is a pan-human psychological trait and Jews are no more exempt from this tendency than are other groups (see The Ethnic Phenomenon, which I have reviewed here). 

Leftism and Iconoclasm 

But there is one thing that Cofnas’s default hypothesis cannot explain: namely why, if Jews are overrepresented in leadership positions among all political and intellectual movements, they are nevertheless especially overrepresented on the Left (see here for data confirming this pattern). 

This overrepresentation on the left is paradoxical, since Jews are disproportionately wealthy, and leftism is hence against their economic interests. 

Moreover, Macdonald himself argues in A People That Shall Dwell Alone that Jews traditionally acted as agents and accessories of governmental oppression (e.g. as tax farmers), resented by the poor, but typically protected by their elite patrons.[15]

Why, then, were Jews, throughout most of the twentieth century, especially overrepresented on the left?

Cofnas (2018) suggests that Jews will be overrepresented among any political or intellectual movements that are not overtly antisemitic.

However, this cannot explain the especial overrepresentation of Jews on the Left, since, from at least the middle of the twentieth century, overt antisemitism has been as much anathema among mainstream conservatives as among leftists.[16]

Yet all the movements discussed by Macdonald are broadly leftist. 

Perhaps the only exception is Freudian psychoanalysis.  

Indeed, although Macdonald emphasizes its co-option by the Left, especially by the Frankfurt School, some leftists dismiss Freudianism as inherently reactionary, as when student radicalism is dismissed as a form of adolescent rebellion against a father-figure, and feminism as a form of penis envy.[17]

Indeed, amusingly, in this context, Rod Liddle even claims that:

Many psychoanalysts believe that the Left’s aversion to capitalism is simply a displaced loathing of Jews” (Liddle 2005).

Nevertheless, though not intrinsically leftist, Freudianism is certainly iconoclastic. 

Thus, one almost universal feature of Jewish intellectuals has been iconoclasm.

Thus, Jews seem as overrepresented among leading libertarians as among leftists. For example, Ludwig von Mises, Ayn Rand, Milton Friedman, Robert Nozick and Murray Rothbard were all of Jewish ancestry. 

Yet libertarianism is usually classed as an extreme right-wing ideology, at least in accordance with the simplistic one-dimensional left-right axis by which most people attempt to conceptualize the political spectrum and plot people’s politics. 

In reality, however, libertarian ideas, if and when put into practice, are just as destructive of traditional societal mores as is Marxism, possibly more so. Libertarianism is therefore anything but ‘conservative’ in the true sense. 

In contrast, while prominent among neoliberals and, of course, so-called neoconservatives, relatively few Jews seem to be socially conservative (e.g. in relation to issues like abortion, gay rights and feminism, not to mention immigration).  

Orthodox and Conservative Jews are perhaps an exception here. However, the latter are highly insular, living very much in a closed world, like religious Jews in the pre-emancipation era.  

Therefore, although they may indeed vote predominantly for conservative candidates, beyond voting, they rarely involve themselves in politics outside their own communities, either as candidates or activists. 

Macdonald himself seeks to explain Jewish iconoclasm in terms of social identity theory.

On this view, Jews, by virtue of their alien origins, enforced separation and minority status, not to mention the discrimination and resentment often directed towards them by host populations, felt estranged and alienated from mainstream culture and hence developed a hostility towards it. 

Here, Macdonald echoes Thorstein Veblen’s theory of Jewish intellectual preeminence (Veblen 1919). 

Veblen argued that Jewish intellectual achievements reflected Jews’ only partial assimilation into western societies. Being less committed to the prevailing dogmas of those societies, they developed both a degree of scholarly detachment and objectivity, and a highly skeptical, enquiring state of mind, which ideally suited them to careers in scholarship and science. 

At first, Macdonald reports: 

Negative views of gentile institutions were… confined to internal consumption within the Jewish community” (p7). 

However, with emancipation and secularization, Jewish critiques of the West increasingly went mainstream and began to gain a following even among Gentiles. 

Jewish Radical Critique… of Judaism Itself? 

However, the problem with seeing Jewish iconoclasm as an attack on Gentile culture is that the ideologies espoused necessarily entail a rejection of traditional Jewish culture too. 

Thus, if Christianity was indeed delusional, repressive and patriarchal, then this critique applied equally to the religion whence Christianity derived – namely Judaism.

Indeed, far from Judaism being a religion that, unlike Christianity and Islam, is not sexually repressive (a view Macdonald attributes to Freud), the most sexually repressive, illiberal and, from a contemporary left-liberal perspective, problematic elements of Christian doctrine almost all derive directly from Judaism and the Old Testament.

Thus, explicit condemnation of homosexuality occurs, not in the teaching of Jesus, but rather in the Old Testament (Leviticus 18:22; Leviticus 20:13). Similarly, it is principally from a passage in the Old Testament that the Christian opposition to masturbation and coitus interruptus derives (Genesis 38:8-10). 

The Old Testament also, of course, contains the most racist and genocidal biblical passages (e.g. Deuteronomy 20:16-17; Joshua 10:40) as well as the only biblical commandments seemingly advocating mass rape and sexual enslavement (e.g. Deuteronomy 20: 13-14; Numbers 31: 17-18) – see discussion here.

Only in respect of the question of divorce and remarriage is the teaching of Jesus in the New Testament arguably less liberal than that in the Old Testament.[18]

Likewise, if the nuclear family was pathological, patriarchal and the root cause of all neurosis, then this applied also to the traditional Jewish family. 

In short, radical critique is necessarily destructive of all traditional values and institutions, Jewish values and traditions very much included. 

Neither is this radical critique of Jewish culture always merely implicit. 

True, many Jewish iconoclasts concentrated their fire on Christian and Gentile cultural traditions. However, this might be excused by reference to the fact that it was Christian and gentile cultural traditions that represented the dominant cultural traditions within the societies in which they found themselves. 

However, secular Jewish intellectuals had, not least by virtue of their secularism, rejected Jewish culture and traditions too. 

Indeed, far from arbitrarily exempting Jews from their radical critique of traditional society and religion, many Jewish intellectuals were positively anti-Semitic in the degree of their criticism of Jews and of Judaism.  

A case in point is the granddaddy of Jewish Leftism, Karl Marx, who receives comparatively scant attention from Macdonald, probably for precisely this reason.[19]

Yet Marx’s writings, especially but not exclusively, in his infamous essay On the Jewish Question, are so anti-Jewish that, were it not for Marx’s own Jewish background and impeccable leftist credentials, modern readers would surely dismiss him as a raving anti-Semite, if not insist upon his cancellation for crimes against political correctness (see Whisker 1984).[20]

Although I dislike the term self-hating Jew on account of its pejorative and Freudian connotations of psychopathology, the tradition of Jewish self-criticism continues – from the anti-Zionism of radical leftists like Noam Chomsky and Norman Finkelstein, to broadly ‘alt right’ Jews like Ron Unz and David Cole.[21]

Macdonald claims that Jewish leftists envisaged an ethnically inclusive society in which Jews would continue to exist as a distinct group. 

Actually, however, in my understanding, most radical leftists envisaged all forms of religious or ethnic identity as withering away in the coming communist utopia, such that both Judaism as a religion and the Jews as a people would ultimately cease to exist in a post-revolutionary society.

Thus, Yuri Slezkine, in The Jewish Century, like Macdonald, emphasizes the hugely disproportionate role of Jews in the Bolshevik revolution, yet interprets their motivation quite differently.

Most Jewish rebels did not fight the state in order to become free Jews; they fought the state in order to become free from Jewishness—and thus Free. Their radicalism was not strengthened by their nationality; it was strengthened by their struggle against their nationality. Latvian or Polish socialists might embrace universalism, proletarian internationalism, and the vision of a future cosmopolitan harmony without ceasing to be Latvian or Polish. For many Jewish socialists, being an internationalist meant not being Jewish at all… The Jews, as a group, were the only true Marxists because they were the only ones who truly believed that their nationality was ‘chimerical’; the only ones who—like Marx’s proletarians but unlike the real ones—had no motherland” (The Jewish Century: p152-3).

Admittedly, Macdonald does amply demonstrate that even secular Jewish leftists, in both the West and Soviet Russia, continued to socialize, and intermarry, overwhelmingly among themselves. Yet this is hardly surprising, since ethnocentrism and in-group preference are universal phenomena, and people in general tend to marry, and socialize with, those with similar backgrounds and personal characteristics to themselves.

However, what Macdonald does not acknowledge is that, in the aftermath of the Bolshevik revolution, there was actually a massive increase in the rate of Jewish-Gentile intermarriage, Slezkine reporting:

Between 1924 and 1936, the rate of mixed marriages for Jewish males increased from 1.9 to 12.6 percent (6.6 times) in Belorussia, from 3.7 to 15.3 percent (4.1 times) in Ukraine, and from 17.4 to 42.3 percent (2.4 times) in the Russian Republic. The proportions grew higher for both men and women as one moved up the Bolshevik hierarchy. Trotsky, Zinoviev, and Sverdlov were married to Russian women… The non-Jews Andreev, Bukharin, Dzerzhinsky, Kirov, Kosarev, Lunacharsky, Molotov, Rykov, and Voroshilov, among others, were married to Jewish women” (The Jewish Century: p179).

Indeed, it is difficult to see how Jews could remain a separate and endogamous ethnic group in the long term in the absence of a shared religion, not just in the Soviet Union, but also in the west as a whole, as, over time, the basis for their shared kinship would inevitably become increasingly remote. 

It is true that some Marranos, in Iberia and elsewhere, managed to retain a Jewish identity over multiple generations by secretly continuing to practise Judaism, practising what Macdonald calls crypsis.  

However, this could hardly apply to Jewish leftists, since even Macdonald does not go as far as to claim that such militant secularists and anti-religionists as Marx and Freud were actually secret practitioners of Judaism.[22]

Macdonald also argues that, since the Jewish tendency towards higher IQs, high conscientiousness and high-investment parenting is (supposedly) partly innate, Jews were relatively immunized against the destructive effects of the sexual revolution on rates of divorce, illegitimacy and single-parenthood (p147-9).[23]

Likewise, if the Jewish tendency towards ethnocentrism is also innate, Jews would presumably be less vulnerable to the impact of universalist and antiracist ideologies on group cohesion.

However, even assuming that this is true, does Macdonald actually envisage that the Jewish psychoanalysts and other Jewish thinkers who (supposedly) promoted hedonism and universalism actually consciously foresaw and intended that their social, intellectual and political activism would have a greater effect on gentile family and culture than on that of Jews for this reason?

This is surely implausible and would amount to a conspiracy theory. 

Moreover, it might instead be argued that, since Jews were at the forefront of, and overrepresented within, these intellectual movements, Jewish culture was actually especially vulnerable to the effect of such ideologies. 

Thus, perhaps Orthodox Jews were indeed relatively insulated from, and inoculated against, the effects of the 1960s counterculture. But, then, so were the Amish and Christian fundamentalists. 

On the other hand, however, many Jewish student radicals very much practised what they preached (e.g. hedonism, promiscuity, drug abuse, and terrorism). 

Immigration 

Macdonald’s penultimate chapter discusses the role of Jews in reforming immigration law in the USA.[24]

Macdonald shows that Jewish individuals, networks and organizations played a central role in advocating for the opening up of America’s borders, and the passage of the 1965 Immigration Act, which exposed white America to replacement levels of non-white immigration, resulting in an ongoing, and now surely irreversible, demographic displacement.[25]

The basis of Macdonald’s thesis is that Jews perceive themselves as safer in multi-ethnic societies where they, as Jews, don’t stand out so much. The essence of this cynical logic was perhaps best distilled by the Jewish comedienne Sarah Silverman, who, during one of her stand-up routines, claimed: 

The Holocaust would never have happened if black people lived in Germany in the 1930s and 40s… well, it wouldn’t have happened to Jews.”[26]

There is indeed some truth to this idea. If I walk around London and see Sikhs in turbans, Muslims in burqas and hijabs and people of all different racial phenotypes, then even the elaborate apparel of Hasidic Jews might not jump out at me as overly strange. 

As for those Jews the only evidence of whose ethnicity is, say, a skullcap or an especially large nose, I am likely to see them as just another white person, no more exotic than, say, an Italian-American. 

Thus, today, most people see Jews as white and hence fail to notice their overrepresentation in media, politics, government and big business. When leftist campaigners protest that the Oscars are ‘so white’, the average man in the street is perhaps to be forgiven for not enquiring too far into the precise ethnic background of all these white Hollywood executives and movie producers.

However, I’m not entirely convinced that mass immigration is indeed ‘good for the Jews’. 

For one thing, many such immigrants, especially in Europe, tend to be Muslim, and Muslims have their own ‘beef’ with the Jews regarding the conquest, expulsion and subsequent persecution of their coreligionists in Palestine.[27]

Thus, while stories periodically trend in the media regarding an increase in anti-Semitic hate-crimes in Europe, what is almost invariably omitted from these news stories is that those responsible are overwhelmingly Muslim youths (see The Retreat of Reason, reviewed here: p107-11).[28]

In addition, some blacks, like Nation of Islam leader Louis Farrakhan, also stand accused of anti-Semitism.

In fact, however, Farrakhan’s anti-Semitism is, in one sense, overblown. His religion holds that all white people, Jew and Gentile alike, are a race of white devils invented by an evil black scientist called Yakub (the most preposterous part of which theory is arguably the idea of a black scientist inventing something that useful).  

His comments about Jews are thus no more disparaging than his beliefs about whites in general. The particular outrage that his anti-Jewish comments have garnered reflects only the greater ‘victim-status’ accorded Jews in the contemporary West as compared to other whites, despite their hugely disproportionate wealth and political power.

In contrast, anti-white rhetoric is all but ubiquitous on the political left, and indeed throughout American society as a whole, and hardly unique to Farrakhan. It therefore passes almost entirely without comment. 

Yet this points to another problem for American Jews as a direct result of both increasing ethnic diversity and increasing anti-white animosity – namely that, if increasing ethnic diversity does indeed mean that Jews come to be seen as no different from other whites, then the animosity of many non-whites towards whites, an animosity often nurtured by leftist Jewish intellectuals, is, unlike the destroying angel of Exodus, unlikely to distinguish Jew from Gentile. 

Yet, given their history, Jews, more than other whites, should be all too aware of the dangers of becoming a wealthy but resented minority, as whites in America are poised to become by the middle of the current century, thanks to the immigration policy that Jews were, in Macdonald’s own telling, instrumental in moulding. 

In short, if I began this section of my review with a quote from a Jewish comedienne regarding blacks, it behoves me to conclude with a quote from a black comedian concerning Jews. Chris Rock, discussing the alleged anti-Semitism of Farrakhan in one of his stand-up routines, explains: 

Black people don’t hate Jews. Black people hate white people. We don’t got time to dice white people into little groups.” 

Endnotes

[1] Macdonald, however, never mentions the meme concept in PTSDA, perhaps on account of an antipathy to Richard Dawkins, whom he blames for prejudicing evolutionists against the idea that groups have any important role to play in evolution (A People That Shall Dwell Alone: pviii). He does, however, mention the meme concept on one occasion in ‘Culture of Critique’, where he acknowledges:

The Jewish intellectual and cultural movements reviewed here may be viewed as memes designed to facilitate the continued existence of Judaism as a group evolutionary strategy” (p237).

However, Macdonald cautions:

Their adaptedness for gentiles who adopt them is highly questionable, however, and indeed, it is unlikely that any gentile who believes that, for example, anti-Semitism is necessarily a sign of a pathological personality is behaving adaptively” (p237).

[2] Curiously, Macdonald even refers to these secular thinkers and political activists as still continuing to practise what he calls “Judaism as a group evolutionary strategy”, a phrase he uses repeatedly throughout this book, even though the vast majority of the thinkers he discusses are secular in orientation. This suggests that, for Macdonald, the word “Judaism” has a rather different, and broader, meaning than it does for most other people, referring not merely to a religion, but rather to a group evolutionary strategy that is, as he purports to show in PTSDA, encapsulated in this religion, but also somehow broader than the religion itself, and capable of being practised by, say, secular psychoanalysts, Marxists and anthropologists just as much as by, say, devout orthodox Jews. This is a rather odd idea, and certainly a very odd definition of ‘Judaism’, that Macdonald never gets around to explaining.

[3] Indeed, Macdonald goes even further, provocatively arguing that the ultimate progenitor of Nazi race theory is not to be found among such infamously anti-Semitic proto-Nazi notables as Wagner, Chamberlain or Gobineau, let alone Eckart, Rosenberg or Hitler himself, but rather the celebrated, and ethnically Jewish, British Prime Minister Benjamin Disraeli. Despite being, at least nominally, a Christian convert and marrying a Gentile, Disraeli, according to Macdonald, not only considered the Jews a superior race vis a vis white Gentiles, but also attributed this superiority to their alleged “racial purity” (Separation and Its Discontents: p181).
Thus, he quotes Disraeli as observing:

The other degraded races wear out and disappear; the Jew remains, as determined, as expert, as persevering, as full of resource and resolution as ever… All of which proves that it is in vain for man to attempt to battle the inexorable law of nature, which has decreed that a superior race shall never be destroyed or absorbed by an inferior” (Lord George Bentinck: A Political Biography: quoted in Separation and Its Discontents: p181).

Indeed, Macdonald reports, Disraeli considered Jews as being responsible for “virtually all the advances of civilization”, and, evincing black Israelite levels of delusion, apparently even considered Mozart to be Jewish. Thus, Macdonald quotes LJ Rather as concluding:

Disraeli rather than Gobineau—still less Chamberlain—is entitled to be called the father of nineteenth-century racist ideology” (Reading Wagner: quoted in Separation and Its Discontents: p180).

[4] The studies cited by Macdonald for this claim are: Marciano 1981; Schwartz 1978.

[5] Of course, in making this claim, I am being at least semi-facetious. Jews are not overrepresented among most white nationalist groups because most such groups are also highly anti-Semitic and hence Jews would not be welcome there. On the other hand, Jews would be welcome among more mainstream civic nationalist and anti-immigration groups, not least because they would lend such groups a defence against the charge of being anti-Semitic or ‘Nazis’. However, they do not appear to be especially well represented among these groups, or, at the very least, not as overrepresented among these groups as they are on the political left.

[6] On the contrary, other plausible explanations as for why Jew and Gentile alike were drawn to the intellectual movements discussed readily present themselves. For example, wishful thinking may have motivated the Marxist belief in the coming of a communist utopia. Simply a sense of belonging, and of intellectual superiority, may also be a motivating factor in joining such movements as psychoanalysis and Marxism. Indeed, many disparate cults and religions have posited all kinds of odd religious beliefs (arguably odder even than those of Freud), such as reincarnation, miracles etc., without there being any discernible strategic advantage for the overwhelming majority of adherents, indeed sometimes at considerable cost to themselves (e.g. religiously imposed celibacy). 

[7] These are also the movements with which I suspect Macdonald himself is most familiar. As an evolutionary psychologist, he is naturally familiar with Boasian anthropology and the standard social science model, to which evolutionary psychology stands largely in opposition. Also, he has a longstanding interest in Freudian psychoanalysis, having earlier written a critique of psychoanalysis as a cult in Skeptic magazine (Macdonald 1996), and also, ten years earlier, a not entirely unsympathetic assessment of Freud’s theories in the light of sociobiological theory (Macdonald 1986), both of which articles critique Freudianism without recourse to anti-Semitism or any talk of ‘Jewish group evolutionary strategies’. Also, the title of his previous book on ‘the Jewish question’, namely ‘Separation and Its Discontents’, is obviously drawn from the title of one of Freud’s own books, namely ‘Civilization and its Discontents’.

[8] In contrast, in Britain, for example, there was an independent, indigenous socialist tradition, which developed quite independently of any external Jewish influence. In Britain, while Jews would certainly have been overrepresented among leftist radicals during the twentieth century, I suspect that it would not have been to anything like the same degree, not necessarily because of any lesser per capita involvement of Jews, but rather because of:

  1. The relatively lower numbers of Jews resident in the UK as a proportion of the overall population during this time frame; and
  2. The greater per capita involvement of Gentiles in leftist and radical socialist movements.

Meanwhile, in Scandinavian countries, so-called Nordic social democracy surely developed without any significant Jewish influence, or at least any direct influence, if only because so few Jews were resident in these countries. In short, socialism and radical leftism cannot be credited to (or blamed on) Jews alone.
The question of the overrepresentation of Jews among Marxist revolutionaries in Russia is a controversial one, not least on account of Nazi propaganda regarding so-called Judeo-Bolshevism. Contrary to some anti-Semitic propaganda, it seems that Jews did not constitute a particularly large proportion of the party membership as a whole. In fact, Slezkine reports that the most overrepresented ethnicity was not Jews, but rather Latvians (The Jewish Century: p169).
Yet, if Jews were not overrepresented among the rank-and-file party membership in Russia, they do seem to have been vastly overrepresented among the party leadership, at least prior to Stalin’s purges. Thus, Slezkine reports:

Their overall share of Bolshevik party membership during the civil war was relatively modest (5.2 percent in 1922), but… [it is estimated that] Jews had made up about 40 percent of all top elected officials in the army… In April 1917, 10 out of 24 members (41.7 percent) of the governing bureau of the Petrograd Soviet were Jews. At the First All-Russian Congress of Soviets in June 1917, at least 31 percent of Bolshevik delegates (and 37 percent of Unified Social Democrats) were Jews. At the Bolshevik Central Committee meeting of October 23, 1917, which voted to launch an armed insurrection, 5 out of the 12 members present were Jews. Three out of seven Politbureau members charged with leading the October uprising were Jews (Trotsky, Zinoviev, and Grigory Sokolnikov [Girsh Brilliant]). The All-Russian Central Executive Committee (VtsIK) elected at the Second Congress of Soviets included 62 Bolsheviks… Among them were 23 Jews, 20 Russians, 5 Ukrainians, 5 Poles, 4 “Balts,” 3 Georgians, and 2 Armenians… [A]ll 15 speakers who debated the takeover as their parties’ official representatives were Jews” (The Jewish Century: p175)

Similarly, one Jewish Israeli publication reports that, despite only ever representing a tiny proportion of the overall Soviet Russian population:

In 1934, according to published statistics, 38.5 percent of those holding the most senior posts in the Soviet security apparatuses were of Jewish origin” (Plocker 2006).

Historian Robert Gellately gives what seems to be a balanced picture when he reports of the Jewish role in the October revolution and Soviet regime:

Their participation in the Bolshevik Revolution in absolute terms was not great, but five of the twelve members at the Bolshevik Central Committee meeting on October 23 1917 were Jews. The Politburo that led the revolution had seven members, three of whom were Jews. During the stormy years of 1918-21, Jews generally made up one-quarter of the Central Committee and were active in other institutions as well including the Cheka” (Lenin, Stalin & Hitler: p67-8).

In short, the myth of Judeo-Bolshevism was just that – a myth. However, the role of the Jews in both the Communist revolution and the later regime, especially in leadership positions and prior to Stalin’s purges, was nevertheless vastly disproportionate to their numbers in the population as a whole. Regarding Macdonald’s own take on the involvement of Jews in the Soviet regime, and especially in Soviet repression in Eastern Europe, see Macdonald 2005.

[9] Analogously, leftist critics of neoliberal economics, sociobiological theory and evolutionary psychology sometimes claim that these theories were devised within a liberal-capitalist milieu, ultimately in order to justify the capitalist system. However, even assuming this were true, it is not directly relevant to the question of whether the theories in question are true, or at least provide a productive model of how the real world operates. Thus, biologist John Maynard Smith wrote of how:

There is a recent fashion in the history of science to throw away the baby and keep the bathwater: to ignore the science, but to describe in sordid detail the political tactics of the scientists” (The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today: px).

[10] I am aware that all these writers and researchers are Jewish either because they have mentioned their ethnicity in their own writings, or it has been mentioned by other authors whom I regard as reliable. I have not, for example, merely relied on their having Jewish-sounding names. This is actually a very inaccurate way of determining ancestry, because, not only have many Jewish people anglicized their names, but also most surnames that Americans and British people think of as characteristically Jewish are actually German in origin, and only relatively more or less common among Jews than among German gentiles. Only a few surnames (e.g. Levin, Cohen) are exclusively Jewish in origin, and even these indicate, of course, only male-line ancestry.

[11] For whatever reason, Eysenck spent most of his life denying and concealing his own Jewish ancestry, practising what Macdonald calls crypsis. Interestingly, he also favourably reviewed the first installment of Macdonald’s so-called ‘Culture of Critique trilogy’, A People That Shall Dwell Alone (which I myself have reviewed here) in the psychology journal Personality & Individual Differences, describing it as “a potentially very important contribution to the literature on eugenics, and on reproductive strategy”. Another prominent Jewish champion of hereditarian theories of racial difference was the leading libertarian economist Murray Rothbard.

[12] On his blog, Macdonald has repeatedly disparaged Pinker as occupying “the Stephen Jay Gould Chair for Politically Correct Popularization of Evolutionary Biology at Harvard”. This may be a witty (and perhaps anti-Semitic) putdown. It is also, however, grossly unfair. Pinker has not only championed IQ testing, behavioural genetics and sociobiology, but even the idea of innate differences between races in psychological traits such as intelligence (see What is Your Dangerous Idea: p13-5; Pinker 2006). 

[13] Admittedly, the first four of these very much form a clique, being closely associated with one another, having jointly authored books and articles and frequently citing one another’s work. This may be why they were the first names to occur to me. It might also explain their common ethnicity, as it seems that, according to a study cited by Macdonald, Jewish scholars are more likely to collaborate with and cite fellow Jews (Greenwald & Schuh 1994). On the other hand, anthropologist Marshall Sahlins is not associated with this group, and prior to looking up his biographical details for the purpose of writing this paragraph, I was not aware he was of Jewish ancestry. Perhaps the next best-known critic of sociobiology (or at least the next one I could name offhand) is philosopher Phillip Kitcher, who, despite his German-sounding surname, is not, to my knowledge, of Jewish ancestry.

[14] Admittedly, a fair few of the worst offenders among them have been both French and Jewish (e.g. Claude Lévi-Strauss and Jacques Derrida). 

[15] This explains why, despite its supposed association with the so-called ‘far-right’, anti-Semitism and leftism typically go together. Thus, on the one hand, Marxists believe that society is controlled by a conspiracy of wealthy capitalists who control the mass media and exploit and oppress everyone else. On the other hand, anti-Semites believe that society is controlled by a conspiracy of wealthy Jewish capitalists who control the mass media and exploit and oppress everyone else.
Thus, as a famous aphorism has it: ‘Anti-Semitism is the socialism of fools’.
Thus, since the contemporary left in America is endlessly obsessed with the supposed ‘overrepresentation’ of white males in positions of power and influence, it ought presumably also to be concerned, as were the Nazis, about the even greater per capita overrepresentation of Jews in those exact same positions of power and influence.
In short, National Socialism is indeed a form of socialism – the clue is in the name.

[16] Indeed, today, anti-Semitism is arguably more common on the left, as the left has increasingly made common cause with Palestinians and indeed with Muslims more generally. Yet, in America, Jews still vote overwhelmingly for the leftist Democratic Party, even though Republicans now tend to be even more vociferously pro-Israel than the Democrats. In the UK, on the other hand, Jews are now more likely to vote for Conservative candidates than for Labour. However, I recall reading that, even in the UK, after controlling for socioeconomic status and income, Jews are still more likely to vote for leftist parties than are non-Jews.

[17] In contrast, as emphasized by Macdonald, other theorists sought to reclaim Freudianism on behalf of the left, notably the infamous (and influential) Frankfurt School, to whom Macdonald devotes a chapter in ‘Culture of Critique’. Thus, the Frankfurt School are today remembered primarily for having combined, on the one hand, Freudian psychoanalysis with, on the other, Marxist social and economic theory. Regarding this brilliant theoretical synthesis, Rod Liddle once memorably remarked:

“[This] is a bit like being remembered for having combined the theory that the sun revolves around the earth with the theory that the earth is flat” (Liddle 2008). 

[18] Thus, whereas various passages in the Old Testament envisage and provide for divorce and remarriage, Jesus’s teaching on this matter, as reported in the New Testament Gospels, is very strict in forbidding both divorce and remarriage (Matthew 19:3-9; Matthew 5:32). Moreover, precisely because these teachings go against what was common practice amongst Jews at the time of Jesus’s ministry, they are regarded as satisfying the criterion of dissimilarity and hence as historically reliable teachings of the historical Jesus.

[19] Thus, despite including in-depth discussion of the supposed ethnic motivations of many ethnically Jewish Marxist thinkers in his chapter on ‘Jews and the Left’, Macdonald passes over Marx himself in less than a page at the very beginning of this chapter, where he concedes: 

“Marxism, at least as envisaged by Marx himself, is the very antithesis of Judaism… [and] Marx himself, though born of two ethnically Jewish parents, has been viewed by many as an anti-Semite” (p50).

While conceding that “Marx viewed Judaism as an abstract principle of human greed that would end in the communist society of the future”, he also claims, citing a secondary source, that:

“He envisaged that Judaism, freed from the principle of greed, would continue to exist in the transformed society of the future (Katz 1986, 113)” (p50).

On his Occidental Observer website, Macdonald has also published a piece by the surely pseudonymous ‘Ferdinand Bardamu’ arguing that, despite appearances to the contrary, Marx was indeed pursuing a ‘Jewish group evolutionary strategy’ in his political activism (Bardamu 2020). The attempt is, in my view, singularly unpersuasive.

[20] Marx was also highly racist by modern standards. Indeed, Marx even delightfully combined his racism with anti-Semitism in a letter to his patron and collaborator Friedrich Engels, where he describes fellow Jewish socialist (and friend), Ferdinand Lassalle, as “the Jewish nigger” and theorizes: 

“It is now quite plain to me—as the shape of his head and the way his hair grows also testify—that he is descended from the negroes who accompanied Moses’ flight from Egypt (unless his mother or paternal grandmother interbred with a nigger)… The fellow’s importunity is also niggerlike.”

[21] A complete list of prominent Jews who have iconoclastically challenged cherished and venerated Jewish institutions, beliefs and traditions is beyond the scope of this review. However, such a list would surely include, among others, such figures as Gilad Atzmon, Shlomo Sand and Otto Weininger. Israel Shahak is another Jewish intellectual frequently accused by his detractors of anti-Semitism, and certainly his book Jewish History, Jewish Religion is critical of aspects of Judaism and Talmudic teachings. Likewise, in Israel, the so-called New Historians, themselves overwhelmingly Jewish in ethnicity, were responsible for challenging many of the founding myths of Israel. Also perhaps meriting honourable (or, for some, dishonourable) mention in this context are Murray Rothbard, also Jewish, who extolled the work of Harry Elmer Barnes, himself widely considered an anti-Semite and early pioneer of ‘holocaust denial’; and Paul Gottfried, the paleoconservative Jewish intellectual credited with coining the term ‘alt-right’.

[22] In fact, even many Marranos seem to have ultimately lost their Jewish identity, especially those who migrated to the New World, who retained, at most, faint remnants of their former faith in certain cultural traditions the significance of which was gradually lost even to themselves. 

[23] Thus, Macdonald writes:

“Given the very large differences between Jews and gentiles in intelligence and tendencies towards high-investment parenting… Jews suffer to a lesser extent than gentiles from the erosion of cultural supports for high-investment parenting. Given that differences between Jews and gentiles are genetically mediated, Jews would not be as dependent on the preservation of cultural supports for high-investment parenting as would be the case among gentiles… Facilitation of the pursuit of sexual gratification, low investment parenting, and elimination of social controls on sexual behavior may therefore be expected to affect Jews and gentiles differently with the result that the competitive difference between Jews and gentiles… would be exacerbated” (p148-9).

[24] Whereas the preceding chapters focussed on intellectual movements, which, though they almost invariably had a large political dimension, were nevertheless at least one remove away from the determination of actual government policy, this chapter focuses on political activism directly concerned with reforming government policy.

[25] Macdonald also charges Jewish activists with hypocrisy for opposing ethnically-based restrictions on immigration to the USA, while also supporting the overtly racialist immigration policy of Israel, which provides a so-called right of return for ethnic Jews who have never previously set foot in Israel, while denying a literal right of return to Palestinian refugees driven from their homeland in the mid-twentieth century.
In response, Cofnas (2018) notes that Macdonald has not cited any Jews who actually take both these positions. He has only shown that American Jews favour mass non-white immigration to America, whereas Israeli Jews, a separate population, oppose non-Jewish immigration to Israel.
However, this only raises the question as to why it is that those Jews resident in America support mass immigration, whereas those resident in Israel support border control and maintaining a Jewish majority. Self-selection may explain part of the difference, as more ethnocentric Jews may prefer to be resident in Israel. However, given the scale of the disparity, and the extent of intermigration and even dual citizenship, it is highly doubtful that this can explain all of it.
As an example, Cofnas (2018) argues that American liberals such as Alan Dershowitz actually support the campaign for Israel to admit the (non-white) Beta Israel of Ethiopia into Israel.
However, the Beta Israel number only around 150,000 in total. Therefore, even if all were permitted to emigrate to Israel (which has yet to occur), they would represent less than 2% of Israel’s total population. Clearly, allowing a relatively small number of token ‘black Jews’ to immigrate to Israel is hardly comparable to advocating that people of all ethnicities (and all religions) be permitted to immigrate to Western jurisdictions.
Moreover, the Beta Israel, and even the Falash Mura, are still Jewish in a religious, if not a racial, sense. Yet attempts by white western countries other than Israel to restrict immigration on either racial or religious lines are universally condemned, including by Dershowitz, who condemned Trump’s call for a moratorium on Muslim immigration as incompatible with “the best values of what America should be like”. Dershowitz is therefore indeed guilty of hypocrisy and double standards when it comes to the immigration issue.
Similarly, American TV presenter and political commentator Tucker Carlson recently revealed the hypocrisy of perhaps the most powerful Jewish advocacy group in the USA, the ADL, who had condemned Carlson for crimes against political correctness for opposing replacement-level immigration in the USA, while at the same time, and on the same website, themselves arguing, in a post since blocked from public access, that:

“It is unrealistic and unacceptable to expect the State of Israel to voluntarily subvert its own sovereign existence and nationalist identity and become a vulnerable minority within what was once its own territory.”

Yet this is precisely what the ADL demands of white Americans when it insists that any opposition to replacement-level immigration to America is evidence of ‘white supremacism’.
Macdonald may, then, as Cofnas complains, not have named any individual Jews who are hypocritical with respect to immigration policy in America and Israel; Carlson, however, has identified a major Jewish organization that is indeed hypocritical with respect to this issue.
I might add here that, unlike Macdonald, I do not think this type of hypocrisy is either unique to, or indeed especially prevalent or magnified among, Jewish people. On the contrary, hypocrisy is, I suspect, like ethnocentrism, a universal human phenomenon.
In short, people are much better at being tolerant, moderate and conciliatory in respect of what they perceive as other people’s quarrels. Yet, when they perceive themselves, or their people, as having a direct ethnic or genetic stake in the issue at hand, they tend to be altogether less tolerant and conciliatory.

[26] Macdonald himself puts it this way: 

“Ethnic and religious pluralism also serves external Jewish interests because Jews become just one of many ethnic groups. This results in the diffusion of political and cultural influence among the various ethnic and religious groups, and it becomes difficult or impossible to develop unified, cohesive groups of gentiles united in their opposition to Judaism. Historically, major anti-Semitic movements have tended to erupt in societies that have been, apart from the Jews, religiously or ethnically homogeneous (see SAID). Conversely, one reason for the relative lack of anti-Semitism in the United States compared to Europe was that ‘Jews did not stand out as a solitary group of [religious] non-conformists’” (p242).

In addition, Macdonald contends that a further advantage of increased levels of ethnic diversity within the host society is that: 

“Pluralism serves both internal (within-group) and external (between-group) Jewish interests. Pluralism serves internal Jewish interests because it legitimates the internal Jewish interest in rationalizing and openly advocating an interest in overt rather than semi-cryptic Jewish group commitment and nonassimilation” (p241).

In other words, multiculturalism allows Jews both to abandon the (supposed) pretence of assimilation and to overtly advocate for their own ethnic interests, because, in a multi-ethnic society, other groups will inevitably be doing likewise.
However, Jews may also have had other reasons for supporting open borders. After all, Jews are a sojourning diaspora people, who have often migrated from one host society to another, not least to escape periodic pogroms and persecutions. Thus, they had an obvious motive for supporting open borders, namely so that their own coreligionists would be able to migrate to America should the need arise.
One might also argue that, as a people who often had to migrate to escape persecution, they were naturally sympathetic to refugees of other ethnicities, or indeed to other immigrants travelling to new pastures in search of a better life, as their own ancestors had so often done in the past, though Macdonald would no doubt dismiss this interpretation as naïve.

[27] In my view, a better explanation for why so many western countries have opened up their borders to replacement levels of racially, culturally and religiously alien and unassimilable minorities is the economic one. Indeed, here a Marxist perspective may be of value, since the economically dominant capitalist class benefits from the cheap labour that Third World migrants provide, as do wealthy consumers, who can afford to purchase a disproportionate share of the cheap products and services that such labour produces. In contrast, it is the indigenous poor and working class, of all ethnicities, who bear a disproportionate share of the costs associated with such migration, including both depressed wages and ethnically divided, crime-ridden and distrustful communities (see Liddle 2006).

[28] Ironically then, given the substantial numbers of Arab Muslims resident in France, for example, many of the people responsible for so-called ‘anti-Semitic hate crimes’ are themselves ‘Semitic’, and indeed have a rather stronger case for being ‘Semitic’ in a racial sense than do most of their Jewish victims. 

References 

Bardamu (2020) Karl Marx: Founding Father of the Jewish Left? Occidental Observer, 4 January.
Cofnas (2018) Judaism as a Group Evolutionary Strategy: A Critical Analysis of Kevin MacDonald’s Theory. Human Nature, 29: 134–156.
Greenwald & Schuh (1994) An Ethnic Bias in Scientific Citations. European Journal of Social Psychology, 24(6), 623–639.
Liddle (2005) Why Labour does not need the Jews, Spectator, 19 February.
Liddle (2006) The Politics of Pleasantville, Spectator, 21 January.
Liddle (2008) Stand by for a year of nostalgia for 1968, Spectator, 5 January.
Macdonald (1986) Civilization and Its Discontents Revisited: Freud as an Evolutionary Biologist. Journal of Social and Biological Structures, 9, 213-220. 
Macdonald (1996) Freud’s Follies: Psychoanalysis as religion, cult, and political movement. Skeptic, 4(3), 94-99.
Macdonald (2005) Stalin’s Willing Executioners: Jews as a Hostile Elite in the USSR. Occidental Quarterly, 5(3): 65-100.
Marciano (1981) Families and Cults. Marriage and Family Review, 4(3-4): 101-117.
Pinker (2006) Groups and Genes. New Republic, 26 June. 
Plocker (2006) Stalin’s Jews, Yedioth Ahronoth (ynetnews.com), 21 December.
Schwartz (1978) Cults and the Vulnerability of Jewish Youth. Jewish Education, 46(2): 23-42.
Veblen (1919) The Intellectual Pre-Eminence of Jews in Modern Europe. Political Science Quarterly, 34(1).
Whisker (1984) Karl Marx: Anti-Semite. Journal of Historical Review, 5(1): 69-76.