The ‘Means of Reproduction’ and the Ultimate Purpose of Political Power

Laura Betzig, Despotism and Differential Reproduction: A Darwinian View of History (New Brunswick: Aldine Transaction, 1983). 

Moulay Ismail Ibn Sharif, alias ‘Ismail the Bloodthirsty’, a late seventeenth- and early eighteenth-century Emperor of Morocco, is today little remembered, at least outside of his native Morocco. He is, however, in a strict Darwinian sense, possibly the most successful human ever to have lived. 

Ismail, you see, is said to have sired some 888 offspring. His Darwinian fitness therefore exceeded that of any other known person.[1]

Some have questioned whether this figure is realistic (Einon 1998). However, the best analyses suggest that, while the precise number of offspring fathered by Ismail is probably apocryphal, such a large progeny is eminently plausible for a powerful ruler with access to a large harem of wives and/or concubines (Gould 2000; Oberzaucher & Grammer 2014). 

Indeed, as Laura Betzig demonstrates in ‘Despotism and Differential Reproduction’, Ismail is exceptional only in degree.

Across diverse societies and cultures, and throughout human history, wherever individual males acquire great wealth and power, they convert this wealth and power into the ultimate currency of natural selection – namely reproductive success – by asserting and maintaining exclusive reproductive access to large harems of young female sex partners. 

A Sociobiological Theory of Human History 

Betzig begins her monograph by quoting a small part of a famous passage from the closing paragraphs of Charles Darwin’s seminal On the Origin of Species which she adopts as the epigraph to her preface. 

In this passage, the great Victorian naturalist tentatively extended his theory of natural selection to the question of human origins, a topic he conspicuously avoided in the preceding pages of his famous text. 

Yet, in this much-quoted passage, Darwin goes well beyond suggesting merely that his theory of evolution by natural selection might explain human origins in just the same way it explained the origin of other species. On the contrary, he also anticipated the rise of evolutionary psychology, writing of how: 

Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. 

Yet this is not the part of this passage quoted by Betzig. Instead, she quotes the next sentence, where Darwin makes another prediction, no less prophetic, namely that: 

Much light will be thrown on the origin of man and his history. 

In this reference to “man and his history”, Darwin surely had in mind primarily, if not exclusively, the natural history and evolutionary history of our species. 

Betzig, however, interprets Darwin more broadly, and more literally, and, in so doing, has both founded, and for several years remained the leading practitioner of, a new field – namely, Darwinian history.

This is the attempt to explain, in terms of sociobiological theory, not only the psychology and behaviour of contemporary humans, but also the behaviour of people in past historical epochs.  

Her book length monograph, ‘Despotism and Differential Reproduction: A Darwinian View of History’ remains the best known and most important work in this field. 

The Historical and Ethnographic Record 

In making the case that, throughout history and across the world, males in positions of power have used that power to maximize their Darwinian fitness by securing exclusive reproductive access to large harems of fertile females, Betzig, presumably to forestall the charge of cherry-picking, never actually mentions Ismail the Bloodthirsty at any point in her monograph. 

Instead, Betzig uses ethnographic data taken from a random sample of cultures from across the world. Nevertheless, the patterns she uncovers are familiar and recurrent. 

Powerful males command large harems of multiple fertile young females, to whom they assert, and defend, exclusive reproductive access. In this way, they convert their power into the ultimate currency of natural selection – namely, reproductive success or fitness. 

Thus, summarizing Betzig’s work, not only in ‘Despotism and Differential Reproduction’, but also in other published works, science writer Matt Ridley reports: 

“[Of] the six independent ‘civilizations’ of early history – Babylon, Egypt, India, China, the Aztecs and the Incas… the Babylonian king Hammurabi had thousands of slave ‘wives’ at his command. The Egyptian pharaoh Akhenaten procured three hundred and seventeen concubines and ‘droves’ of consorts. The Aztec ruler Montezuma enjoyed four thousand concubines. The Indian emperor Udayama preserved sixteen thousand consorts in apartments guarded by eunuchs. The Chinese emperor Fei-ti had ten thousand women in his harem. The Inca… kept virgins on tap throughout the kingdom” (The Red Queen: p191-2; see Betzig 1993a). 

Such vast harems seem, at first, wholly wasteful. This is surely more fertile females than even the horniest, healthiest and most virile of emperors could ever hope to have sex with, let alone successfully impregnate. As Betzig acknowledges: 

“The number of women in such a harem may easily have prohibited the successful impregnation of each… but, their being kept from bearing children to others increased the monarch’s relative reproductive accomplishment” (p70). 

In other words, even if these rulers were unable to successfully impregnate every concubine in their harem, keeping them cloistered and secluded nevertheless prevented other males from impregnating them, which increased the relative representation of the ruler’s genes in subsequent generations.  

To this end, extensive efforts were also made to ensure the chastity of these women. Thus, even in ancient times, Betzig reports: 

“Evidence of claustration, in the form of a walled interior courtyard, exists for Babylonian Mari; and claustration in second story rooms with latticed, narrow windows is mentioned in the Old Testament” (p79). 

Indeed, Betzig even proposes an alternative explanation for early evidence of fortifications: 

“Elaborate fortifications erected for the purposes of defense may [also] have served the dual (identical?) function of protecting the chastity of women of the harem” (p79). 

Indeed, as Betzig alludes to in her parenthesis, this second function is arguably not entirely separate from the first. 

After all, if all male-male competition is ultimately based on competition over access to fertile females, then this surely very much includes warfare. As Napoleon Chagnon emphasizes in his studies of warfare and intergroup raiding among the Yąnomamö Indians of the Amazonian rainforest, warfare among primitive peoples tends to be predicated on the capture of fertile females from among enemy groups.[2]

Therefore, even fortifications erected for the purposes of military defence, ultimately serve the evolutionary function of maintaining exclusive reproductive access to the fertile females contained therein. 

Other methods of ensuring the chastity of concubines, and thus the paternity certainty of emperors, included the use of eunuchs as harem guards. Indeed, this seems to have been the original reason eunuchs were castrated and later became a key element in palace retinues (see The Evolution of Human Sociality: p45). 

Chastity belts, however, ostensibly invented for the wives of crusading knights while the latter were away on crusade, are likely a modern myth. 

The movements of harem concubines were also highly restricted. Thus, if permitted to venture beyond their cloisters, they were invariably escorted. 

For example in the African Kingdom of Dahomey, Betzig reports: 

“The king’s wives’… approach was always signalled by the ringing of a bell by the woman servant or slave who invariably preceded them [and] the moment the bell is heard all persons, whether male or female, turn their backs, but all the males must retire to a certain distance” (p79). 

Similarly, inmates of the ‘Houses of Virgins’ maintained by Inca rulers: 

“Lived in perpetual seclusion to the end of their lives… and were not permitted to converse, or have intercourse with, or to see any man, nor any woman who was not one of themselves” (p81-2). 

Feminists tend to view such practices as evidence of the supposed oppression of women.

However, from a sociobiological or evolutionary psychological perspective, the primary victims of such practices were, not the harem inmates themselves, but rather the lower-status men condemned to celibacy and ‘inceldom’ as a consequence of royal dynasties monopolizing sexual access to almost all the fertile females in the society in question. 

The encloistered women might have been deprived of their freedom of movement – but many lower-status men in the same societies were deprived of almost all access to fertile female sex partners, and hence any possibility of passing on their genes, the ultimate evolutionary function of any biological organism. 

In contrast, the concubines secluded in royal harems were not only able to reproduce, but also lived lives of relative comfort, if not, in some cases, outright luxury, often being: 

“Equipped with their own household and servants, and probably lived reasonably comfortable lives in most respects, except… for a lack of liberal masculine company” (p80). 

Indeed, seclusion, far from evidencing oppression, was primarily predicated on safety and protection. In short, to be imprisoned is not so bad when one is imprisoned in a palace. 

Finally, methods were also sometimes employed specifically to enhance the fertility of the women so confined. Thus, Ridley reports: 

“Wet nurses, who allow women to resume ovulation by cutting short their breast-feeding periods, date from at least the code of Hammurabi in the eighteenth century BC… Tang dynasty emperors of China kept careful records of dates of menstruation and conception in the harem so as to be sure to copulate only with the most fertile concubines… [and] Chinese emperors were also taught to conserve their semen so as to keep up their quota of two women a day” (The Red Queen: p192). 

Confirming Betzig’s conclusions, albeit subsequent to the publication of her work, researchers have uncovered genetic evidence of the fecundity of one particular powerful ruler (or ruling male lineage) – namely, a Y-chromosome haplogroup, found in some 8% of males across a large region of Asia, and in one in two hundred males across the whole world, whose features are consistent with its having spread across the region thanks to the exceptional prolificity of Genghis Khan, his male siblings and descendants (Zerjal et al. 2003). 

Female Rulers? 

In contrast, limited to only one pregnancy every nine months, a woman, howsoever rich and powerful, can necessarily bear far fewer offspring than can be sired by a man enjoying equivalent wealth, power and access to multiple fertile sex partners, even with the aid of evolutionary novelties like wet nurses, bottle milk and IVF treatment. 

As a female analogue of Ismail the Bloodthirsty, it is sometimes claimed that an eighteenth-century Russian woman gave birth to 69 offspring. She was also supposedly, and very much unlike Ismail the Bloodthirsty, not a powerful and polygamous elite ruler, but rather a humble, monogamously married peasant woman. 

However, this much smaller figure is both physiologically implausible and poorly sourced. Indeed, even her name is unknown, and she is referred to only as the wife of Feodor Vassilyev. It is, in short, almost certainly an urban myth.[3]

Feminists have argued that the overrepresentation of males in positions of power is a consequence of such mysterious and non-existent phenomena as patriarchy or male dominance or the oppression of women.

In reality, however, it seems that, for women, seeking positions of power and wealth simply doesn’t have the same reproductive payoff as for men – because, no matter how many men a woman copulates with, she can usually only gestate, and nurse, one (or, in the case of twins or triplets, occasionally two or three) offspring at a time. 

This is the essence of Bateman’s Principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972). 

This, then, in Darwinian terms, explains why women are less likely to assume positions of great political power.

It is not necessarily that they don’t want political power, but rather that they are less willing to make the necessary effort, or take the necessary risks, to attain power.[4]

This calculus then, rather than the supposed oppression of women, explains, not only the cross-culturally universal over-representation of men in positions of power, but also much of the so-called gender pay gap in our own societies (see Kingsley Browne’s Biology at Work: reviewed here). 

Perhaps the closest women can get to producing such a vast progeny is to maneuver their sons into having the opportunity to do so. This might explain why such historical figures as Agrippina the Younger, mother of Nero, and Olympias, mother of Alexander the Great, are reported as having been so active, and instrumental, in securing the succession on behalf of their sons. 

The Purpose of Political Power? 

The notion that powerful rulers often use their power to gain access to multiple nubile sex partners is, of course, hardly original to sociobiology. On the contrary, it accords with popular cynicism regarding males in positions of power. 

What a Darwinian perspective adds is the ultimate explanation of why political leaders do so – and why female political rulers, even when they do assume power, usually adopt a very different reproductive strategy. 

Moreover, a Darwinian perspective goes beyond popular cynicism in suggesting that access to multiple sex partners is not merely yet another perk of power. On the contrary, it is the ultimate purpose of power and reason why men evolved to seek power in the first place. 

As Betzig herself concludes: 

“Political power in itself may be explained, at least in part, as providing a position from which to gain reproductively” (p85).[5]

After all, from a Darwinian perspective, political power in and of itself has no intrinsic value. It is only if power can be used in such a way as to maximize a person’s reproductive success or fitness that it has evolutionary value. 

Thus, as Steven Pinker has observed, the recurrent theme in science fiction film and literature of robots rebelling against humans to take over the world is fundamentally mistaken. Robots would have no reason to rebel, simply because they would not be programmed to want to take over the world and overthrow humanity in the first place. 

On the other hand, humans have been programmed to seek wealth and power – and to resist oppression and exploitation. This is why revolutions are a recurrent feature of human societies and history.  

But we have been programmed, not by a programmer or god-like creator, but rather by natural selection.

We have been programmed by natural selection to seek wealth and power only because, throughout human evolutionary history, those of us who achieved political power tended, like Ismail the Bloodthirsty, also to achieve high levels of reproductive success as a consequence. 

Darwin versus Marx 

In order to test the predictive power of her theory, Betzig contrasts the predictions made by sociobiological theory with a rival theory – namely, Marxism

The comparison is apposite since, despite repeated falsification at the hands of both economists and of history, Marxism remains, among both social scientists and laypeople, the dominant paradigm when it comes to explaining social structure, hierarchy and exploitation in human societies.  

Certainly, it has proven far more popular than any approach to understanding human dominance hierarchies rooted in either ethology or sociobiology

There are, it bears emphasizing, several similarities between the two approaches. For one thing, each theory traces its origins ultimately to a nineteenth-century Victorian founder resident in Britain at the time he authored his key works, namely Charles Darwin and Karl Marx respectively.  

More importantly, there are also substantive similarities in the content and predictions of both theoretical paradigms. 

In particular, each is highly cynical in its conclusions. Indeed, at first glance, Marxist theory appears almost as cynical as Darwinian theory. 

Thus, like Betzig, Marx regarded most societies in existence throughout history as exploitative – and as designed to serve the interests, not of society in general or of the population of that society as a whole, but rather of the dominant class within that society alone – namely, in the case of capitalism, the bourgeoisie or capitalist employers. 

However, sociobiological and Marxist theory depart in at least three crucial respects. 

First, Marxists propose that exploitation will be absent in future anticipated communist utopias

Second, Marxists also claim that such exploitation was also absent among hunter-gatherer groups, where so-called primitive communism supposedly prevailed. 

Thus, the Marxist, so cynical with regard to exploitation and oppression in capitalist (and feudal) society, suddenly turns hopelessly naïve and innocent when it comes to envisaging future unrealistic communist utopias, and when contemplating ‘noble savages’ in their putative ‘Eden before the fall’.

Unfortunately, however, in her critique of Marxism, Betzig herself nevertheless remains somewhat confused in respect of this key issue. 

On the one hand, she rightly dismisses primitive communism as a Marxist myth. Thus, she demonstrates and repeatedly emphasizes that: 

“Men accrue reproductive rights to wives of varying numbers and fertility in every human society” (p20). 

Therefore, Betzig, contrary to the tenets of Marxism, concludes: 

“Unequal access to the basic resource which perpetuates life, members of the opposite sex, is a condition in [even] the simplest societies” (p32; see also Chagnon 1979). 

Neither is universal human inequality limited only to access to fertile females. On the contrary, Betzig observes:

“Some form of exploitation has been in evidence in even the smallest societies… Conflicts of interest in all societies are resolved with a consistent bias in favor of men with greater power” (p67). 

On the other hand, however, Betzig takes a wrong turn in refusing to rule out the possibility of true communism somehow arising in the future. Thus, perhaps in a misguided effort to placate the many leftist opponents of sociobiology in academia, she writes: 

“Darwinism… [does not] preclude the possibility of future conditions under which individual interests might become common interests: under which individual welfare might best be served by serving the welfare of society… [nor] preclude… the possibility of the evolution of socialism” (p68). 

This, however, seems obviously impossible. 

After all, we have evolved to seek to maximize the representation of our own genes in subsequent generations at the expense of those of other individuals. Only a eugenic reengineering of human nature itself could ever change this. 

Thus, as Donald Symons emphasized in his seminal The Evolution of Human Sexuality (which I have reviewed here), reproductive competition is inevitable – because, whereas there is sometimes sufficient food that everyone is satiated and competition for food is therefore unnecessary and counterproductive, reproductive success is always relative, and therefore competition over women is universal. 

Thus, Betzig quotes Confucius as observing: 

“Disorder does not come from heaven, but is brought about by women” (p26). 

Indeed, Betzig herself elsewhere recognizes this key point, namely the relativity of reproductive success, when she observes, in a passage quoted above, that a powerful monarch benefits from sequestering huge numbers of fertile females in his harem because, even if it is unfeasible that he would ever successfully impregnate all of them himself, he nevertheless thereby prevents other males from impregnating them, and thereby increases the relative representation of his own genes in subsequent generations (p70). 

It therefore seems inconceivable that social engineers, let alone pure happenstance, could ever engineer a society in which individual interests were identical to societal interests, other than a society of identical twins or through the eugenic reengineering of human nature itself (see Peter Singer’s A Darwinian Left, which I have reviewed here).[6]

Marx and the Means of Reproduction 

The third and perhaps most important conflict between the Darwinist and Marxist perspectives concerns what Betzig terms: 

“The relative emphasis on production and reproduction” (p67). 

Whereas Marxists view control of what they term the means of production as the ultimate cause of societal conflict, socioeconomic status and exploitation, for Darwinians conflict and exploitation instead focus on control over what we might term the means of reproduction – in other words, fertile females, their wombs, ova and vaginas. 

Thus, Betzig observes: 

“Marxism makes no explicit prediction that exploitation should coincide with reproduction” (p68). 

In other words, Marxist theory is silent on the crucial issue of whether high-status individuals will necessarily convert their political and economic power into the ultimate currency of Darwinian selection – namely, reproductive success

On this view, powerful male rulers might just as well be celibate as control and assert exclusive reproductive access to large harems of young fertile wives and concubines. 

In contrast, for Darwinians, the effort to maximize one’s reproductive success is the very purpose, and ultimate end, of all political power. 

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here, here and here): 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Thus, production is, from a sociobiological perspective, just another means of gaining the resources necessary for reproduction. 

On the other hand, reproduction is, from a biological perspective, the ultimate purpose of life. 

Therefore, it seems that, for all his ostensible radicalism, Karl Marx was, in his emphasis on economics rather than sex, just another nineteenth-century Victorian prude! 

The Polygyny Threshold Model Applied to Humans? 

One way of conceptualizing the tendency of powerful males to attract (or perhaps commandeer) multiple wives and concubines is the polygyny threshold model. 

This way of conceptualizing male and female reproductive and ecological competition was first formulated by ornithologist-ecologist Gordon Orians in order to model the mating systems of passerine birds (Orians 1969). 

Here, males practice so-called resource defence polygyny – in other words, they defend territories containing valuable resources (e.g. food, nesting sites) necessary for successful reproduction and provisioning of offspring. 

Females then distribute themselves between males in accordance with size and quality of male territories. 

On this view, if the territory of one male is twice as resource-abundant as that of another, he would, all else being equal, attract twice as many mates; if it is three times as resource-abundant, he would attract three times as many mates; etc. 

The result is rough parity in resource-holdings and reproductive success among females, but often large disparities among males. 
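The simplistic proportional reading of the model described above can be sketched in a few lines of code. This is purely illustrative: Orians’ actual model turns on female choice at the margin, not strict proportionality, and the numbers below are hypothetical.

```python
# Toy sketch of the proportional reading of the polygyny threshold
# model: females distribute themselves among males in proportion to
# the resource abundance of each male's territory, so a male with
# twice the resources attracts twice the mates.

def expected_mates(territory_qualities, n_females):
    """Allocate n_females among males in proportion to territory quality."""
    total = sum(territory_qualities)
    return [n_females * q / total for q in territory_qualities]

# Four hypothetical males whose territories differ in resource abundance:
qualities = [1, 2, 3, 6]
print(expected_mates(qualities, 24))  # -> [2.0, 4.0, 6.0, 12.0]
```

Note the result the text describes: each female ends up with an equal share of resources (here, one unit of quality per female), while the males’ mate numbers vary six-fold.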

Applying the Polygyny Threshold Model to Modern America 

Thus, applying the polygyny threshold model to humans, and rather simplistically substituting wealth for territory size and quality, we might predict that, if Jeff Bezos is a hundred thousand times richer than Joe Schmo, and Joe has only one wife, then Jeff should have around 100,000 wives. 

But, of course, Jeff Bezos does not have 100,000 wives, nor even a mere 100,000 concubines. 

Instead, he has only one solitary meagre ex-wife, and she, even when married to him, was not, to the best of my knowledge, ever guarded by any eunuchs – though perhaps he would have been better off if she had been, since they might have prevented her from divorcing him and taking an enormous share of his wealth with her in the ensuing divorce settlement.[7]

The same is also true of contemporary political leaders. 

Indeed, if any contemporary western political leader does attempt to practice polygyny, even on a comparatively modest scale, then, if discovered, a so-called sex scandal almost invariably results. 

Yet, viewed in historical perspective, the much-publicized marital infidelities of, say, Bill Clinton, though they may have outraged the sensibilities of the mass of monogamously-married Middle American morons, positively pale into insignificance beside the reproductive achievements of someone like, say, Ismail the Bloodthirsty. 

Indeed, Clinton’s infidelities don’t even pack much of a punch beside those of a politician from the same nation and just a generation removed, namely John F Kennedy – whose achievements in the political sphere are vastly overrated on account of his early death, but whose achievements in the bedroom, while scarcely matching those of Ismail the Bloodthirsty or the Aztec emperors, certainly put the current generation of American politicians to shame. 

Why, then, does the contemporary west represent such a glaring exception to the general pattern of elite polygyny that Betzig has so successfully documented throughout so much of the rest of the world, and throughout so much of history? And what has become of the henpecked geldings who pass for politicians in the contemporary era? 

Monogamy as Male Compromise? 

According to Betzig, the moronic mass media moral panic that invariably accompanies sexual indiscretions on the part of contemporary Western political leaders and other public figures is no accident. Rather, it is exactly what her theory predicts. 

According to Betzig, the institution of monogamy as it operates in Western democracies represents a compromise between low-status and high-status males. 

According to the terms of this compromise, high-status males agree to forgo polygyny in exchange for the cooperation of low status males in participating in the complexly interdependent economic systems of modern western polities (p105) – or, in biologist Richard Alexander’s alternative formulation, in exchange for serving as necessary cannon-fodder in wars (p104).[8]

Thus, whereas, under polygyny, there are never enough females to go around, under monogamy, at least assuming a roughly equal sex ratio (roughly equal numbers of men and women), virtually all males are capable of attracting a wife, howsoever ugly and unpleasant. 

This is important, since it means that all men, even the relatively poor and powerless, nevertheless have a reproductive stake in society. This, then, in evolutionary terms, provides them with an incentive both:

1) To participate in the economy to support and thereby provide for their wife and family; and

2) To defend these institutions in wartime, if necessary with their lives.

The institution of monogamy has therefore been viewed as a key factor, if not the key factor, in both the economic and military ascendency of the West (see Scheidel 2008). 

Similarly, it has recently been argued that increasing rates of non-participation of young males in the economy and workforce (i.e. the so-called ‘NEET’ phenomenon) are a direct consequence of the reduction in reproductive opportunities available to young males (Binder 2021).[9]

Thus, on this view, the media scandal and hysteria that invariably accompanies sexual infidelities by elected politicians, or constitutional monarchs, reflects outrage that the terms of this implicit agreement have been breached. 

This idea was anticipated by Irish playwright and socialist George Bernard Shaw, who observed in ‘Maxims for Revolutionists’, appended to his play Man and Superman: 

“Polygyny, when tried under modern democratic conditions, as by the Mormons, is wrecked by the revolt of the mass of inferior men who are condemned to celibacy by it” (Shaw 1903). 

‘Socially Imposed Monogamy’? 

Consistent with this theory of socially imposed monogamy, it is indeed the case that, in all Western democratic polities, polygyny is unlawful, and bigamy a crime. 

Yet these laws are seemingly in conflict with contemporary western liberal democratic principles of tolerance and inclusivity, especially in respect of ‘alternative lifestyles’ and ‘non-traditional relationships’. 

Thus, for example, we have recently witnessed a successful campaign for the legalization of gay marriage in most western jurisdictions. However, strangely, polygynous marriage seemingly remains anathema – despite the fact that most cultures across the world and throughout history have permitted polygynous marriage, whereas few if any have ever accorded any state recognition to homosexual unions.

Indeed, strangely, whereas the legalization of gay marriage was widely perceived as ‘progressive’, polygyny is associated, not with sexual liberation, but rather with highly traditional and sexually repressive groups such as Mormons and Muslims.[10]

Polygynous marriage was also, rather strangely, associated with the supposed oppression of women in traditional societies

However, most women actually do better, at least in purely economic terms, under polygyny than under monogamy, at least in highly stratified societies with large differences in resource-holdings as between males. 

Thus, if, as we have seen, Jeff Bezos is 100,000 times richer than Joe Schmo, then a woman is financially better off becoming the second wife, or the tenth wife (or even the 99,999th wife!), of Jeff Bezos rather than the first wife of poor Joe. 

Moreover, women also have another incentive to prefer Jeff to Joe. 

If she is impregnated by a polygynous male like Jeff, then her male descendants may inherit the traits that facilitated their father’s wealth, power and polygyny, and hence become similarly reproductively successful themselves, aiding the spread of the woman’s own genes in subsequent generations. 

Biologists call this good genes sexual selection or, more catchily, the sexy son hypothesis

Once again, however, George Bernard Shaw beat them to it when he observed in the same 1903 essay quoted above: 

“Maternal instinct leads a woman to prefer a tenth share in a first rate man to the exclusive possession of a third rate one” (Shaw 1903). 

Thus, Robert Wright concludes: 

“In sheerly Darwinian terms, most men are probably better off in a monogamous system, and most women worse off” (The Moral Animal: p96). 

Thus, women generally should welcome polygyny, while the only people opposed to polygyny should be: 

1) The women currently married to men like Jeff Bezos, and greedily unwilling to share their resource-abundant ‘alpha-male’ providers with a whole hundred-fold harem of co-wives and concubines; and

2) A glut of horny sexually-frustrated bachelor ‘incels’ terminally condemned to celibacy, bachelorhood and inceldom by promiscuous lotharios like Jeff Bezos and Ismail the Bloodthirsty greedily hogging all the hot chicks for themselves.

Who Opposes Polygyny, and Why? 

However, in my experience, the people who most vociferously and puritanically object to philandering male politicians are not low-status men, but rather women. 

Moreover, such women typically affect concern on behalf, not of the male bachelors and ‘incels’ supposedly indirectly condemned to celibacy by such behaviours, but rather the wives of such politicians – though the latter are the chief beneficiaries of monogamy, while these other women, precluded from signing up as second or third wives to alpha-male providers, are themselves, at least in theory, among the main losers. 

This suggests that the ‘male compromise theory’ of socially-imposed monogamy is not the whole story. 

Perhaps then, although women benefit in purely financial terms under polygyny, they do not do so well in fitness terms. 

Thus, one study found that, whereas polygynous males had more offspring than monogamously-mated males, they had fewer offspring per wife, suggesting that, while males who are married polygynously benefit from polygyny, their wives incur a fitness penalty for having to share their husband (Strassman 2000). 

This probably reflects the fact that even male reproductive capacity is limited, as, notwithstanding the Coolidge effect (which has, to my knowledge, yet to be demonstrated in humans), males can only manage a certain number of orgasms per day. 

Women’s distaste for polygynous unions may also reflect the fact that even prodigiously wealthy males will inevitably have a limited supply of one particular resource – namely, time – and time spent with offspring may be an important determinant of offspring success, which paid child-minders, lacking a direct genetic stake in offspring, are unable to perfectly replicate.[11]

Thus, if Jeff Bezos were able to attract for himself the 100,000 wives that the polygyny threshold model suggests is his due, then, even if he were capable of providing each woman with the 2.4 children that is her own due, it is doubtful he would have enough time on his hands to spend much ‘quality time’ with each of his 240,000 offspring – just as one doubts Ismail the Bloodthirsty was an attentive father to his own more modest 888. 

Thus, one suspects that, contrary to the polygyny threshold model, polygyny is not always entirely a matter of female choice (Sanderson 2001).

On the contrary, many of the women sequestered into the harems of rulers like Ismail the Bloodthirsty likely had little say in the matter. 

‘The Central Theoretical Problem of Human Sociobiology’ 

Yet, if this goes some way towards explaining the apparent paradox of socially imposed monogamy, there is, today, an even greater paradox with which we must wrestle – namely, why, in contemporary western societies, is there apparently an inverse correlation between wealth and number of offspring? 

From a sociobiological or evolutionary psychological perspective, this inverse correlation is deeply anomalous. 

If, as we have seen, the very purpose of wealth and power (from a sociobiological perspective) is to convert these advantages into the ultimate currency of natural selection, namely reproductive success, then why are the wealthy so spectacularly failing to do so in the contemporary west?[12]

Moreover, if status is not conducive to high reproductive success, then why have humans evolved to seek high status in the first place? 

This anomaly has been memorably termed ‘the central theoretical problem of human sociobiology’ in a paper by University of Pennsylvania demographer and eugenicist Daniel Vining (Vining 1986). 

Socially imposed monogamy can only go some way towards explaining this anomaly. Thus, in previous centuries, even under monogamy, wealthier families still produced more surviving offspring, if only because their greater wealth enabled them to successfully rear and feed multiple successive offspring to adulthood. In contrast, for the poor, high rates of infant mortality were the order of the day. 

Yet, in the contemporary west, it seems that the people who have the most children and hence the highest fitness in the strict Darwinian sense, are, at least according to popular stereotype, single mothers on government welfare. 

‘De Facto’ Polygyny 

Various solutions have been proposed to this apparent paradox. A couple of them amount to claiming that the west is not really monogamous at all, and that, once this is factored in, higher-status men do indeed have greater numbers of offspring than lower-status men. 

One suggestion along these lines is that perhaps wealthy males sire additional offspring whose paternity is misassigned, via extra-marital liaisons (Betzig 1993b). 

However, despite some sensationalized claims, rates of misassigned paternity are actually quite low (Khan 2010; Gilding 2005; Bellis et al 2005). 

If it is lower-class women who are giving birth to most of the offspring, then it is probably mostly males of their own socioeconomic status who are responsible for impregnating them, if only because it is the latter with whom they have the most social contact. 

Perhaps a more plausible suggestion is that wealthy, high-status males are able to practise a form of disguised polygyny through repeated remarriage. 

Thus, wealthy men are sometimes criticized for divorcing their first wives to marry much younger second, and sometimes even third and fourth, wives. In this way, they manage to monopolize the peak reproductive years of multiple successive young women. 

This is true, for example, of recent American President Donald Trump – the ultimate American alpha male – who has himself married three women, each one younger than her predecessor. 

Thus, science journalist Robert Wright contends: 

“The United States is no longer a nation of institutionalized monogamy. It is a nation of serial monogamy. And serial monogamy in some ways amounts to polygyny” (The Moral Animal: p101). 

This, then, is not so much ‘serial monogamy’ as ‘sequential’ or ‘non-concurrent’ polygyny. 

Evolutionary Novelties 

Another suggestion is that evolutionary novelties – i.e. recently developed technologies such as contraception – have disrupted the usual association between status and fertility. 

On this view, natural selection has simply not yet had sufficient time (or, rather, sufficient generations) over which to mold our psychology and behaviour in such a way as to cause us to use these technologies in an adaptive manner – i.e. in order to maximize, not restrict, our reproductive success. 

An obvious candidate here is safe and effective contraception, which, while actually somewhat older than most people imagine, nevertheless became widely available to the population at large only over the course of the past century – surely not enough generations for us to have become evolutionarily adapted to its use. 

Thus, a couple of studies have found that, while wealthy, high-status males may not father more offspring, they do have more sex with a greater number of partners – i.e. behaviours that would have resulted in more offspring in ancestral environments, prior to the widespread availability of contraception (Pérusse 1993; Kanazawa 2003). 

This implies that high-status males (or their partners) use contraception either more often, or more effectively, than low-status males, probably because of their greater intelligence and self-control, namely the very traits that enabled them to achieve high socioeconomic status in the first place (Kanazawa 2005). 

Another evolutionary novelty that may disrupt the usual association between social status and number of surviving offspring is the welfare system. 

Welfare payments to single mothers undoubtedly help these families raise to adulthood offspring who would otherwise perish in infancy. 

In addition, while it is highly controversial to suggest that welfare payments give single mothers an actual financial incentive to bear additional offspring, such payments surely, at the very least, reduce the financial disincentives otherwise associated with doing so, and hence probably increase the number of offspring these women choose to have in the first place. 

Therefore, given that the desire for offspring is probably innate, women would rationally respond by having more children.[13]

Feminist ideology also encourages women in particular to postpone childbearing in favour of careers. Moreover, it is probably higher-status females who are most exposed to feminist ideology, especially in universities, where it is thoroughly entrenched. 

In contrast, lower-status women are not only less exposed to feminist ideology encouraging them to delay motherhood in favour of career, but also likely have fewer appealing careers available to them in the first place. 

Finally, even laws against bigamy and polygyny might be conceptualized as an evolutionary novelty that disrupts the usual association between status and fertility. 

However, whereas technological innovations such as effective contraception were certainly not available until recent times, ideological constructs and religious teachings – including ideas such as feminism, prohibitions on polygyny, and the socialist ideology that motivated the creation of the welfare state – have existed ever since we evolved the capacity to create such constructs (i.e. since we became fully human). 

Therefore, one would expect humans to have evolved resistance to ideological and religious teachings that go against their genetic interests. Otherwise, we would be vulnerable to indoctrination (and hence exploitation) at the hands of third parties. 

Dysgenics? 

Finally, it must be noted that these issues are not of purely academic interest. 

On the contrary, since socioeconomic status correlates with both intelligence and personality traits such as conscientiousness, and since these traits are substantially heritable and determine, not only individual wealth and prosperity, but also, at the aggregate level, the wealth and prosperity of nations, the question of who bears the offspring of the next generation is surely of central concern to the future of society, civilization and the world. 

In short, what is at stake is the very genetic posterity that we bequeath to future generations. It is simply too important a matter to be delegated to the capricious and irrational decision-making of individual women. 

__________________________

Endnotes

[1] Actually, the precise number of offspring Ismail fathered is unclear. The figure I have quoted in the main body of the text comes from various works on evolutionary psychology (e.g. Cartwright, Evolution and Human Behaviour: p133-4; Wright, The Moral Animal: p247). However, an earlier work on human sociobiology, David Barash’s The Whisperings Within, gives an even higher figure of “1,056 offspring” (The Whisperings Within: p47). Meanwhile, an article produced by the Guinness Book of Records gives a figure of at least 342 daughters and 700 sons, while a scientific paper by Elisabeth Oberzaucher and Karl Grammer gives a figure of 1,171 offspring in total. The precise figure seems to be unknown and is probably apocryphal. Nevertheless, the general point – namely that a powerful male with access to a large harem of multiple wives and concubines is capable of fathering many offspring – is surely correct.

[2] The capture of fertile females from among enemy groups is by no means restricted to the Yąnomamö. On the contrary, it may even form the ultimate evolutionary basis for intergroup conflict and raiding among troops of chimpanzees, our species’ closest extant relative. It is also alluded to, and indeed explicitly commanded, in the Hebrew Bible (e.g. Deuteronomy 20:13-14; Numbers 31:17-18), and was formerly prevalent in western culture as well.
It is also very much apparent, for example, in the warfare and raiding formerly endemic in the Gobi Desert of what is today Mongolia. Thus, the mother of Genghis Khan was, at least according to legend, herself kidnapped by the Great Khan’s father. Indeed, this was apparently an accepted form of courtship on the Mongolian Steppe, as Genghis Khan’s own wife was herself stolen from him on at least one occasion by rival Steppe nomads, resulting in a son of disputed paternity (whom the Great Khan perhaps tellingly named Jochi, which is said to translate as ‘guest’) and a later succession crisis. 
Many anthropologists, it ought to be noted, dismiss Chagnon’s claim that Yąnomamö warfare is predicated on the capture of women. Perhaps the most famous is Chagnon’s own former student, Kenneth Good, whose main claim to fame is to have himself married a (by American standards, underage) Yąnomamö girl – who, in a dramatic falsification of her husband’s theory, was then herself twice raped and abducted by raiding Yąnomamö warriors.

[3] It is ironic that John Cartwright, author of Evolution and Human Behaviour, an undergraduate-level textbook on evolutionary psychology, is skeptical regarding the claim that Ismail the Bloodthirsty fathered 888 offspring, but nevertheless apparently takes at face value the claim that a Russian peasant woman had 69 offspring, a biologically far more implausible claim (Evolution and Human Behaviour: p133-4).

[4] There may even be a fitness penalty associated with increased socioeconomic status and political power for women. For example, among baboons, it has been found that high-ranking females actually suffer reduced fertility and higher rates of miscarriages (Packer et al 1995). Kingsley Browne, in his excellent book, Biology at Work: Rethinking Sexual Equality (which I have reviewed here), noting that female executives tend to have fewer children, tentatively proposes that a similar effect may be at work among humans: 

“Women who succeed in business tend to be relatively high testosterone, which can result in lower female fertility, whether because of ovulatory irregularities or reduced interest in having children. Thus, rather than the high-powered career being responsible for the high rate of childlessness, it may be that high testosterone levels are responsible for both” (Biology at Work: p124).

[5] However, here, Betzig is perhaps altogether overcautious. Thus, whether or not “political power in itself” is explained in this way (i.e. “as providing a position from which to gain reproductively”), certainly the human desire for political power must surely be explained in this way.

[6] The prospect of eugenically reengineering human nature itself so as to make utopian communism achievable, and human society less conflictual, is also unrealistic. As John Gray has noted in Straw Dogs: Thoughts on Humans and Other Animals (reviewed here), if human nature is eugenically reengineered, then it will be done, not in the interests of society, let alone humankind, as a whole, but rather in the interests of those responsible for ordering or undertaking the project – namely, scientists and, more importantly, those from whom they take their orders (e.g. government, politicians, civil servants, big business, managerial elites). Thus, Gray concludes: 

“[Although] it seems feasible that over the coming century human nature will be scientifically remodelled… it will be done haphazardly, as an upshot of struggles in the murky realm where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

[7] Here, it is important to emphasize that what is exceptional about western societies is not monogamy per se. On the contrary, monogamy is common in relatively egalitarian societies (e.g. hunter-gatherer societies), especially those living at subsistence levels, where no male is able to secure access to sufficient resources so as to provision multiple offspring (Kanazawa and Still 1999). What is exceptional about contemporary western societies is the combination of:

1) Large differentials of resource-holdings between males (i.e. social stratification); and

2) Prescriptive monogamy (i.e. polygyny is not merely not widely practised, but actually unlawful).

[8] Quite when prescriptive monogamy originated in the west seems to be a matter of some dispute. Betzig views it as very much a recent phenomenon, arising with the development of complex, interdependent industrial economies, which required the cooperation of lower-status males in order to function. Here, Betzig perhaps underestimates the extent to which even pre-industrial economies required the work and cooperation of low-status males in order to function. Thus, Betzig argues that, in ancient Rome, nominally monogamous marriages concealed rampant de facto polygyny, with emperors and other powerful males fathering multiple offspring with both slaves and other men’s wives (Betzig 1992). Similarly, in medieval Europe, she argues that, despite nominal monogamy, wealthy men fathered multiple offspring through servant girls (Betzig 1995a; Betzig 1995b). In contrast, Macdonald persuasively argues that medieval monogamy was no mere myth (Macdonald 1995a; Macdonald 1995b).

[9] Certainly, the so-called NEET and incel phenomena seem to be correlated with one another. NEETs are disproportionately likely to be incels, and incels are disproportionately likely to be NEETs. However, the direction of causation is unclear and probably works in both directions. On the one hand, since women are rarely attracted to men without money or the prospect of money, men without jobs are rarely able to attract wives or girlfriends. On the other hand, men who, for whatever reason, perceive themselves as unable to attract a wife or girlfriend even if they did have a job probably see little incentive to get a job in the first place, or to keep the one they do have.

[10] Indeed, during the debates surrounding the legalization of gay marriage, the prospect of legalizing polygynous marriage was rarely discussed, and, when it was raised, it was usually invoked by the opponents of gay marriage as a sort of reductio ad absurdum of changes in the marriage laws to permit gay marriage – something champions of gay marriage were quick to dismiss as preposterous scaremongering. In short, both sides in the acrimonious debate over gay marriage seem to have agreed that legalizing polygynous unions was utterly beyond the pale.

[11] Thus, father absence is a known correlate of criminality and other negative life outcomes. In fact, however, the importance of paternal investment in offspring outcomes, and indeed of parental influence more generally, has yet to be demonstrated, since the correlation between father-absence and negative life-outcomes could instead reflect the heritability of personality, including those aspects of personality that cause people to have offspring out of wedlock, die early, abandon their children or have offspring by a person who abandons their offspring or dies early (see Judith Harris’s The Nurture Assumption, which I have reviewed here). 

[12] This paradox is related to another – namely, why do people in richer societies tend to have lower fertility rates than those in poorer societies? This recent development, often referred to as the demographic transition, is paradoxical for the exact same reason that it is paradoxical for richer people within western societies to have fewer offspring than poorer people within those same societies – namely, that it is Darwinism 101 that an organism with access to greater resources should channel those additional resources into increased reproduction. Interestingly, this phenomenon is not restricted to western societies. On the contrary, other wealthy industrial and post-industrial societies, such as Japan, Singapore and South Korea, have, if anything, even lower fertility rates than Europe, Australasia and North America.

[13] Actually, it is not altogether clear that women do have an innate desire to bear children. After all, in the EEA, there was no need for women to evolve a desire to bear children. All they required was a desire to have sexual intercourse (or indeed a mere willingness to acquiesce in the male desire for intercourse). In the absence of contraception, offspring would then naturally result. Indeed, other species, including presumably most of our pre-human ancestors, are surely wholly unaware of the connection between sexual intercourse and reproduction, and perhaps some primitive human groups are as well. But this does not stop them from seeking out sexual opportunities and hence reproducing their kind. However, given anecdotal evidence of so-called ‘broodiness’ among women, I suspect women do indeed have some degree of innate desire for offspring.

References 

Bateman (1948) Intra-sexual selection in Drosophila. Heredity 2(Pt. 3): 349-368. 
Bellis et al (2005) Measuring Paternal Discrepancy and its Public Health Consequences. Journal of Epidemiology and Community Health 59(9): 749. 
Betzig (1992) Roman Polygyny. Ethology and Sociobiology 13(5-6): 309-349. 
Betzig (1993a) Sex, succession, and stratification in the first six civilizations: How powerful men reproduced, passed power on to their sons, and used power to defend their wealth, women and children. In Lee Ellis (ed) Social Stratification and Socioeconomic Inequality, pp. 37-74. New York: Praeger. 
Betzig (1993b) Where are the bastards’ daddies? Comment on Daniel Pérusse’s ‘Cultural and reproductive success in industrial societies’. Behavioral and Brain Sciences 16: 284-285. 
Betzig (1995a) Medieval Monogamy. Journal of Family History 20(2): 181-216. 
Betzig (1995b) Wanting Women Isn’t New; Getting Them Is: Very. Politics and the Life Sciences 14(1): 24-25. 
Binder (2021) Why Bother? The Effect of Declining Marriage Market Prospects on Labor-Force Participation by Young Men. Available at SSRN: https://ssrn.com/abstract=3795585 or http://dx.doi.org/10.2139/ssrn.3795585 
Chagnon N (1979) Is reproductive success equal in egalitarian societies? In Chagnon & Irons (eds) Evolutionary Biology and Human Social Behavior: An Anthropological Perspective, pp. 374-402. MA: Duxbury Press. 
Einon G (1998) How Many Children Can One Man Have? Evolution and Human Behavior 19(6): 413-426. 
Gilding (2005) Rampant Misattributed Paternity: The Creation of an Urban Myth. People and Place 13(2): 1. 
Gould (2000) How many children could Moulay Ismail have had? Evolution and Human Behavior 21(4): 295-296. 
Khan (2010) The paternity myth: The rarity of cuckoldry. Discover, 20 June 2010. 
Kanazawa & Still (1999) Why Monogamy? Social Forces 78(1): 25-50. 
Kanazawa (2003) Can Evolutionary Psychology Explain Reproductive Behavior in the Contemporary United States? Sociological Quarterly 44: 291-302. 
Kanazawa (2005) An Empirical Test of a Possible Solution to ‘the Central Theoretical Problem of Human Sociobiology’. Journal of Cultural and Evolutionary Psychology 3: 255-266. 
Macdonald (1995a) The establishment and maintenance of socially imposed monogamy in Western Europe. Politics and the Life Sciences 14(1): 3-23. 
Macdonald (1995b) Focusing on the group: further issues related to western monogamy. Politics and the Life Sciences 14(1): 38-46. 
Oberzaucher & Grammer (2014) The Case of Moulay Ismael – Fact or Fancy? PLoS ONE 9(2): e85292. 
Orians (1969) On the Evolution of Mating Systems in Birds and Mammals. American Naturalist 103(934): 589-603. 
Packer et al (1995) Reproductive constraints on aggressive competition in female baboons. Nature 373: 60-63. 
Pérusse (1993) Cultural and Reproductive Success in Industrial Societies: Testing the Relationship at the Proximate and Ultimate Levels. Behavioral and Brain Sciences 16: 267-322. 
Sanderson (2001) Explaining Monogamy and Polygyny in Human Societies: Comment on Kanazawa and Still. Social Forces 80(1): 329-335. 
Scheidel (2008) Monogamy and Polygyny in Greece, Rome, and World History. Available at SSRN: https://ssrn.com/abstract=1214729 or http://dx.doi.org/10.2139/ssrn.1214729 
Shaw GB (1903) Man and Superman, Maxims for Revolutionists. 
Strassman B (2000) Polygyny, Family Structure and Infant Mortality: A Prospective Study Among the Dogon of Mali. In Cronk, Chagnon & Irons (eds) Adaptation and Human Behavior: An Anthropological Perspective, pp. 49-68. New York: Aldine de Gruyter. 
Trivers R (1972) Parental investment and sexual selection. In Sexual Selection & the Descent of Man, pp. 136-179. New York: Aldine de Gruyter. 
Vining D (1986) Social versus reproductive success: The central theoretical problem of human sociobiology. Behavioral and Brain Sciences 9(1): 167-187. 
Zerjal et al (2003) The Genetic Legacy of the Mongols. American Journal of Human Genetics 72(3): 717-721. 

‘The Bell Curve’: A Book Much Read About, But Rarely Actually Read

The Bell Curve: Intelligence and Class Structure in American Life by Richard Herrnstein and Charles Murray (New York: Free Press, 1994). 

‘There’s no such thing as bad publicity’ – or so contends a famous adage of the marketing industry. 

‘The Bell Curve: Intelligence and Class Structure in American Life’ by Richard Herrnstein and Charles Murray is perhaps a case in point. 

This dry, technical, academic social science treatise, full of statistical analyses, graphs, tables, endnotes and appendices, and totalling almost 900 pages, became an unlikely nonfiction bestseller in the mid-1990s on a wave of almost universally bad publicity in which the work was variously denounced as racist, pseudoscientific, fascist, social Darwinist, eugenicist and sometimes even just plain wrong. 

Readers who hurried to the local bookstore eagerly anticipating an incendiary racialist polemic were, however, in for a disappointment. 

Indeed, one suspects that, along with the Bible and Stephen Hawking’s A Brief History of Time, ‘The Bell Curve’ became one of those bestsellers that many people bought, but few managed to finish. 

‘The Bell Curve’ thus became, like another book that I have recently reviewed, a book much read about, but rarely actually read – at least in full. 

As a result, as with that other book, many myths have emerged regarding the content of ‘The Bell Curve’ that are quite contradicted when one actually takes the time and trouble to read it for oneself. 

Subject Matter 

The first myth of ‘The Bell Curve’ is that it was a book about race differences, or, more specifically, about race differences in intelligence. In fact, however, this is not true. 

Indeed, ‘The Bell Curve’ is a book so controversial that the controversy begins with the very identification of its subject-matter. 

On the one hand, the book’s critics focused almost exclusively on the subject of race. This led to the common perception that ‘The Bell Curve’ was a book about race and race differences in intelligence.[1]

Ironically, many racialists seem to have taken these leftist critics at their word, enthusiastically citing the work as support for their own views regarding race differences in intelligence.  

On the other hand, however, surviving co-author Charles Murray insisted from the outset that the issue of race, and race differences in intelligence, was always peripheral to his and co-author Richard Herrnstein’s primary interest and focus, which was, he claimed, the supposed emergence of a ‘Cognitive Elite’ in modern America. 

Actually, however, both these views seem to be incorrect. While the first section of the book does indeed focus on the supposed emergence of a ‘Cognitive Elite’ in modern America, the overall theme of the book seems to be rather broader. 

Thus, the second section of the book focuses on the association between intelligence and various perceived social pathologies, such as unemployment, welfare dependency, illegitimacy, crime and single-parenthood. 

To the extent the book has a single overarching theme, one might say that it is a book about the social and economic correlates of intelligence, as measured by IQ tests, in modern America.  

Its overall conclusion is that intelligence is indeed a strong predictor of social and economic outcomes for modern Americans – high intelligence with socially desirable outcomes and low intelligence with socially undesirable ones. 

On the other hand, however, the topic of race is not quite as peripheral to the book’s themes as sometimes implied by Murray and others. 

Thus, it is sometimes claimed that only a single chapter deals with race. Actually, however, two chapters focus on race differences, namely chapters 13 and 14, respectively titled ‘Ethnic Differences in Cognitive Ability’ and ‘Ethnic Inequalities in Relation to IQ’. 

In addition, a further two chapters, namely chapters 19 and 20, entitled respectively ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, deal with the topic of affirmative action, as does the final appendix, entitled ‘The Evolution of Affirmative Action in the Workplace’ – and, although affirmative action has been employed to favour women as well as racial minorities, it is with racial preferences that Herrnstein and Murray are primarily concerned. 

However, these chapters represent only 142 of the book’s nearly 900 pages. 

Moreover, in much of the remainder of the book, the authors explicitly restrict their analysis to white Americans. They do so precisely because the well-documented differences between the races, in IQ as well as in many of the social outcomes whose correlation with IQ the book discusses, would otherwise have represented a potential confounding factor for which they would have had to control. 

Herrnstein and Murray therefore took the decision to extend their analysis to race differences near the end of their book, in order to address the extent to which differences in intelligence, which they had already demonstrated to be an important correlate of social and economic outcomes among whites, are also capable of explaining differences in achievement between races. 

Without these chapters, the book would have been incomplete, and the authors would have laid themselves open to the charge of political-correctness and of ignoring the elephant in the room

Race and Intelligence 

If the first controversy of ‘The Bell Curve’ concerns whether it is a book primarily about race and race differences in intelligence, the second controversy is over what exactly the authors concluded with respect to this vexed and contentious issue. 

Thus, the same leftist critics who claimed that ‘The Bell Curve’ was primarily a book about race and race differences in intelligence also accused the authors of concluding that black people are innately less intelligent than whites. 

Some racists, as I have already noted, evidently took the leftists at their word, enthusiastically citing the book as support and authority for this view. 

However, in subsequent interviews, Murray always insisted he and Herrnstein had actually remained “resolutely agnostic” on the extent to which genetic factors underlay the IQ gap. 

In the text itself, Herrnstein and Murray do indeed declare themselves “resolutely agnostic” with regard to the extent of the genetic contribution to the test score gap (p311).

However, just a couple of sentences before they use this very phrase, they also appear to conclude that genes are indeed at least part of the explanation, writing: 

“It seems highly likely to us that both genes and the environment have something to do with racial differences [in IQ]” (p311). 

This paragraph, buried near the end of chapter 13, during an extended discussion of evidence relating to the causes of race differences in intelligence, is the closest the authors come to actually declaring any definitive conclusion regarding the causes of the black-white test score gap.[2]

This conclusion, though phrased in sober and restrained terms, is, of course, itself sufficient to place its authors outside the bounds of acceptable opinion in the early-twenty-first century, or indeed in the late-twentieth century when the book was first published, and is sufficient to explain, and, for some, justify, the opprobrium heaped upon the book’s surviving co-author from that day forth. 

Intelligence and Social Class

It seems likely that races which evolved on separate continents, in sufficient reproductive isolation from one another to have evolved the obvious (and not so obvious) physiological differences that we all observe when we look at the faces, or bodily statures, of people of different races (and that we indirectly observe when we look at the results of different athletic events at the Olympic Games), would also have evolved to differ in psychological traits, including intelligence. 

Indeed, it is surely unlikely, on a priori grounds alone, that all different human races have evolved, purely by chance, the exact same level of intelligence. 

However, if differences in intelligence between races are therefore probable, the case for differences in intelligence between social classes is positively compelling.

Indeed, on a priori grounds alone, it is inevitable that social classes will come to differ in IQ, if one accepts two premises, namely: 

1) Increased intelligence is associated with upward social mobility; and 
2) Intelligence is passed down in families.

In other words, if more intelligent people tend, on average, to get higher-paying jobs than those of lower intelligence, and the intelligence of parents is passed on to their offspring, then it is inevitable that the offspring of people with higher-paying jobs will, on average, themselves be of higher intelligence than are the offspring of people with lower paying jobs.  
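The mechanism can be illustrated with a toy simulation (all of the numbers, and the crude 50% transmission model, are hypothetical assumptions chosen purely to make the logic visible, not empirical estimates):

```python
import random

random.seed(42)

# Hypothetical population: parental IQs drawn from a normal
# distribution with mean 100 and standard deviation 15.
N = 10_000
parents = [random.gauss(100, 15) for _ in range(N)]

# Premise 1: intelligence is associated with upward mobility --
# here, crudely, the top half of parents by IQ end up in the
# 'higher-paid' class.
cutoff = sorted(parents)[N // 2]
classes = ['high' if iq >= cutoff else 'low' for iq in parents]

# Premise 2: intelligence is passed down (imperfectly) in families --
# each child's IQ regresses halfway toward the population mean,
# plus random noise.
children = [0.5 * iq + 0.5 * 100 + random.gauss(0, 10) for iq in parents]

mean = lambda xs: sum(xs) / len(xs)
high_kids = [c for c, cl in zip(children, classes) if cl == 'high']
low_kids = [c for c, cl in zip(children, classes) if cl == 'low']

# Conclusion: the children of the two classes differ in average IQ,
# even though no child was ever explicitly sorted by a test.
print(round(mean(high_kids) - mean(low_kids), 1))
```

Under these illustrative assumptions, the children of the higher-paid class come out roughly a dozen IQ points ahead on average; the exact figure is unimportant, only that the gap follows inevitably from the two premises.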

This, of course, follows naturally from the infamous syllogism formulated by ‘Bell Curve’ co-author Richard Herrnstein way back in the 1970s (p10; p105). 

Incidentally, this second premise, namely that intelligence is passed down in families, does not depend on the heritability of IQ in the strict biological sense. After all, even if heritability of intelligence were zero, intelligence could still be passed down in families by environmental factors (e.g. the ‘better’ parenting techniques of high IQ parents, or the superior material conditions in wealthy homes). 

The existence of an association between social class and IQ ought, then, to be entirely uncontroversial to anyone who takes any time whatsoever to think about the issue. 

If there remains any room for reasoned disagreement, it is only over the direction of causation – namely the question of whether:  

1) High intelligence causes upward social mobility; or 
2) A privileged upbringing causes higher intelligence.

These two processes are, of course, not mutually exclusive. Indeed, it would seem intuitively probable that both factors would be at work. 

Interestingly, however, evidence demonstrates the occurrence only of the former. 

Thus, even among siblings from the same family, the sibling with the higher childhood IQ will, on average, achieve higher socioeconomic status as an adult. Likewise, the socioeconomic status a person achieves as an adult correlates more strongly with their own IQ score than it does with the socioeconomic status of their parents or of the household they grew up in (see Straight Talk About Mental Tests: p195). 

In contrast, family, twin and adoption studies of the sort conducted by behavioural geneticists have concurred in suggesting that the so-called shared family environment (i.e. those aspects of the family environment shared by siblings from the same household, including social class) has but little effect on adult IQ. 

In other words, children raised in the same home, whether full- or half-siblings or adoptees, are, by the time they reach adulthood, no more similar to one another in IQ than are children of the same degree of biological relatedness brought up in entirely different family homes (see The Nurture Assumption: reviewed here). 

However, while the direction of causation may still be disputed by intelligent (if uninformed) laypeople, the existence of an association between intelligence and social class ought not, one might think, be in dispute. 

Yet, in Britain today, in discussions of social mobility, if children from deprived backgrounds are underrepresented at, say, elite universities, this is almost invariably taken as incontrovertible proof that the system is rigged against them. The fact that children from different socio-economic backgrounds differ in intelligence is almost invariably ignored. 

When mention is made of this incontrovertible fact, leftist hysteria typically ensues. Thus, in 2008, psychiatrist Bruce Charlton rightly observed that, in discussion of social mobility: 

“A simple fact has been missed: higher social classes have a significantly higher average IQ than lower social classes” (Clark 2008). 

For his trouble, Charlton found himself condemned by the National Union of Students and assorted rent-a-quote academics and professional damned fools, while even the ostensibly ‘right-wing’ Daily Mail newspaper saw fit to publish the headline ‘Higher social classes have significantly HIGHER IQs than working class, claims academic’, as if this were in some way a controversial or contentious claim (Clark 2008). 

Meanwhile, when, in the same year, a professor at University College made a similar point with regard to the admission of working-class students to medical schools, even the then government Health Minister, Ben Bradshaw, saw fit to offer his two cents worth (which were not worth even that), declaring: 

“It is extraordinary to equate intellectual ability with social class” (Beckford 2008). 

Actually, however, what is truly extraordinary is that any intelligent person, least of all a government minister, would dispute the existence of such a link. 

Cognitive Stratification 

Herrnstein’s syllogism leads to a related paradox – namely that, as environmental conditions are equalized, heritability increases. 

Thus, as large differences in the sorts of environmental factors known to affect IQ (e.g. malnutrition) are eliminated, so differences in income have come to increasingly reflect differences in innate ability. 
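The arithmetic behind this paradox is straightforward: heritability is defined as the proportion of total trait variance attributable to genetic variance, so shrinking the environmental component mechanically raises it. A minimal sketch, with purely illustrative variance figures:

```python
# Heritability (h^2) is the share of total variance in a trait
# attributable to genetic variance: h2 = Vg / (Vg + Ve).
def heritability(var_genetic: float, var_environment: float) -> float:
    return var_genetic / (var_genetic + var_environment)

# Hold genetic variance fixed at a hypothetical value while
# environmental variance (e.g. from malnutrition) is progressively
# eliminated: heritability rises from 0.50 towards 1.00.
Vg = 100.0
for Ve in (100.0, 50.0, 25.0, 0.0):
    print(f"Ve={Ve:>5}: h2={heritability(Vg, Ve):.2f}")
    # prints h2 = 0.50, 0.67, 0.80, 1.00 respectively
```

In the limiting (and wholly unrealistic) case where environments are fully equalized, all remaining variation would be genetic in origin.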

Moreover, the more that gifted children from deprived backgrounds escape their humble origins, then, given the substantial heritability of IQ, the fewer such children will remain among the working class in subsequent generations. 

The result is what Herrnstein and Murray call the ‘Cognitive Stratification’ of society and the emergence of what they call a ‘Cognitive Elite’. 

Thus, in feudal society, a man’s social status was determined largely by ‘accident of birth’ (i.e. he inherited the social station of his father). 

Women’s status, meanwhile, was determined, in addition, by what we might call ‘accident of marriage’ – and, to a large extent, it still is.

However, today, a person’s social status, at least according to Herrnstein and Murray, is determined primarily, and increasingly, by their level of intelligence. 

Of course, people are not allocated to a particular social class by IQ testing itself. Indeed, the use of IQ tests by employers and educators has been largely outlawed on account of its disparate impact (or ‘indirect discrimination’, to use the equivalent British phrase) with regard to race (see below). 

However, the premium increasingly placed on cognitive skills and abilities in western society (and, increasingly, in many non-western societies as well) means that, through the operation of the education system and labour market, individuals are effectively sorted by IQ, even without anyone ever actually sitting an IQ test. 

In other words, society is becoming increasingly meritocratic – and the form of ostensible ‘merit’ upon which attainment is based is intelligence. 

For Herrnstein and Murray, this is a mixed blessing: 

“That the brightest are identified has its benefits. That they become so isolated and inbred has its costs” (p25). 

However, the correlation between socioeconomic status and intelligence remains imperfect. 

For one thing, there are still a few highly remunerated, and very high-status, occupations that rely on skills that are not especially, if at all, related to intelligence. I think here, in particular, of professional sports and the entertainment industry. Thus, leading actors, pop stars and sports stars are sometimes extremely well-remunerated, and very high-status, but may not be especially intelligent.  

More importantly, while highly intelligent people might be, almost by definition, the only ones capable of performing cognitively-demanding, and hence highly remunerated, occupations, this is not to say that all highly intelligent people are necessarily employed in such occupations. 

Thus, whereas all people employed in cognitively-demanding occupations are, almost by definition, of high intelligence, people of all intelligence levels are capable of doing cognitively-undemanding jobs.

Thus, a few people of high intellectual ability remain in low-paid work, whether on account of personality factors (e.g. laziness), mental illness, lack of opportunity or sometimes even by choice (which choice is, of course, itself a reflection of personality factors). 

Therefore, the correlation between IQ and occupation is far from perfect. 

Job Performance

The sorting of people with respect to their intelligence begins in the education system. However, it continues in the workplace. 

Thus, general intelligence, as measured by IQ testing, is, the authors claim, the strongest predictor of occupational performance in virtually every occupation. Moreover, in general, the higher paid and higher status the occupation in question, the stronger the correlation between performance and IQ. 

However, as Herrnstein and Murray are at pains to emphasize, intelligence is a strong predictor of occupational performance even in apparently cognitively undemanding occupations, and indeed almost always a better predictor of performance than tests of the specific abilities the job involves on a daily basis. 

However, in the USA, employers are barred from using testing to select among candidates for a job or for promotion unless they can show the test has a ‘manifest relationship’ to the work, and the burden of proof is on the employer to show such a relationship. Otherwise, given their ‘disparate impact’ with regard to race (i.e. the fact that some groups perform worse), the tests in question are deemed indirectly discriminatory and hence unlawful. 

Therefore, employers are compelled to test, not general ability, but rather the specific skills required in the job in question, where a ‘manifest relationship’ is easier to demonstrate in court. 

However, since even tests of specific abilities almost invariably still tap into the general factor of intelligence, races inevitably score differently even on these tests. 

Indeed, because of the ubiquity and predictive power of the g factor, it is almost impossible to design any type of standardized test, whether of specific or general ability or knowledge, in which different racial groups do not perform differently. 

However, if some groups outperform others, the American legal system presumes a priori that this reflects test bias rather than differences in ability. 

Therefore, although the words ‘all men are created equal’ are not, contrary to popular opinion, part of the US constitution, the Supreme Court has effectively chosen, by legal fiat, to decide cases as if they were. 

However, just as a law passed by Congress cannot repeal the law of gravity, so a legal presumption that groups are equal in ability cannot make it so. 

Thus, the bar on the use of IQ testing by employers has not prevented society in general from being increasingly stratified by intelligence, the precise thing measured by the outlawed tests. 

Nevertheless, Herrnstein and Murray estimate that the effective bar on the use of IQ testing makes this process less efficient, costing the economy somewhere between 13 and 80 billion dollars in 1980 alone (p85). 

Conscientiousness and Career Success 

I am skeptical of Herrnstein and Murray’s conclusion that IQ is the best predictor of academic and career success. I suspect hard work, not to mention a willingness to toady, toe the line, and obey orders, is at least as important in even the most cognitively-demanding careers, as well as in schoolwork and academic advancement. 

Perhaps the reason these factors have not (yet) been found to be as highly correlated with earnings as is IQ is that we have not yet developed a way of measuring these aspects of personality as accurately as we can measure a person’s intelligence through an IQ test. 

For example, the closest psychometricians have come to measuring capacity for hard work is the personality factor known as conscientiousness, one of the Big Five factors of personality revealed by psychometric testing. 

Conscientiousness does indeed correlate with success in education and work (e.g. Barrick & Mount 1991). However, the correlation is weaker than that between IQ and success in education and at work. 

However, this may be because personality is less easily measured by current psychometric methods than is intelligence – not least because personality tests generally rely on self-report, rather than measuring actual behaviour.

Thus, to assess conscientiousness, questionnaires ask respondents whether they ‘see themselves as organized’, ‘as able to follow an objective through to completion’, ‘as a reliable worker’, etc. 

This would be the equivalent of an IQ test that, instead of directly testing a person’s ability to recognize patterns or manipulate shapes by having them do just this, simply asked respondents how good they perceived themselves as being at recognizing patterns, or manipulating shapes. 

Obviously, this would be a less accurate measure of intelligence than a normal IQ test. After all, some people lie, some are falsely modest and some are genuinely deluded. 

Indeed, according to the Dunning–Kruger effect, it is those most lacking in ability who most overestimate their abilities – precisely because they lack the ability to accurately assess their ability (Kruger & Dunning 1999). 

In an IQ test, on the other hand, one can sometimes pretend to be dumber than one is, by deliberately getting questions wrong that one knows the answer to.[3]

However, it is not usually possible to pretend to be smarter than one is by getting more questions right simply because one would not know what are the right answers. 

‘Affirmative Action’ and Test Bias 

In chapters nineteen and twenty, respectively entitled ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, the authors discuss so-called affirmative action, an American euphemism for systematic and overt discrimination against white males. 

It is well-documented that, in the United States, blacks, on average, earn less than white Americans. On the other hand, it is less well-documented that whites, on average, earn less than people of Indian, Chinese and Jewish ancestry.

With the possible exception of Indian-Americans, these differences, of course, broadly mirror those in average IQ scores. 

Indeed, according to Herrnstein and Murray, the difference in earnings between whites and blacks not only disappears after controlling for differences in IQ, but is actually partially reversed. Thus, blacks are actually somewhat overrepresented in professional and white-collar occupations as compared to whites of equivalent IQ. 

This remarkable finding Herrnstein and Murray attribute to the effects of affirmative action programmes, as black Americans are appointed and promoted beyond what their ability merits through discrimination in their favour. 

Interestingly, however, this contradicts what the authors wrote in an earlier chapter, where they addressed the question of test bias (pp280-286). 

There, they concluded that testing was not biased against African-Americans, because, among other reasons, IQ tests were equally predictive of real-world outcomes (e.g. in education and employment) for both blacks and whites, and blacks do not perform any better in the workplace or in education than their IQ scores predict. 

This is, one might argue, not wholly convincing evidence that IQ tests are not biased against blacks. It might simply suggest that society at large, including the education system and the workplace, is just as biased against blacks as are the hated IQ tests. This is, of course, precisely what we are often told by the television, media and political commentators who insist that America is a racist society, in which such mysterious forces as ‘systemic racism’ and ‘white privilege’ are pervasive. 

In fact, the authors acknowledge this objection, conceding:  

“The tests may be biased against disadvantaged groups, but the traces of bias are invisible because the bias permeates all areas of the group’s performance. Accordingly, it would be as useless to look for evidence of test bias as it would be for Einstein’s imaginary person traveling near the speed of light to try to determine whether time has slowed. Einstein’s traveler has no clock that exists independent of his space-time context. In assessing test bias, we would have no test or criterion measure that exists independent of this culture and its history. This form of bias would pervade everything” (p285). 

Herrnstein and Murray ultimately reject this conclusion on the grounds that it is simply implausible to assume that: 

“[So] many of the performance yardsticks in the society at large are not only biased, they are all so similar in the degree to which they distort the truth – in every occupation, every type of educational institution, every achievement measure, every performance measure – that no differential distortion is picked up by the data” (p285). 

In fact, however, Nicholas Mackintosh identifies one area where IQ tests do indeed under-predict black performance, namely with regard to so-called adaptive behaviours – i.e. the ability to cope with day-to-day life (e.g. feeding, dressing and cleaning oneself, and interacting with others in a ‘normal’ manner). 

Blacks with low IQs are generally much more functional in these respects than whites or Asians with equivalent low IQs (see IQ and Human Intelligence: p356-7).[4]

Yet Herrnstein and Murray seem to have inadvertently identified yet another sphere where standardized testing does indeed under-predict real-world outcomes for blacks. 

Thus, if indeed, as Herrnstein and Murray claim, blacks are somewhat overrepresented in professional and white-collar occupations relative to their IQs, this suggests that blacks do indeed do better in real-world outcomes than their test results would predict. While Herrnstein and Murray attribute this to the effect of discrimination against whites, it could instead surely be interpreted as evidence that the tests are biased against blacks. 

Policy Implications? 

What, then, are the policy implications that Herrnstein and Murray draw from the findings that they report? 

In The Blank Slate: The Modern Denial of Human Nature, cognitive scientist, linguist and popular science writer Steven Pinker popularizes the notion that recognizing the existence of innate differences between individuals and groups in traits such as intelligence does not necessarily lead to ‘right-wing’ political implications. 

Thus, a leftist might accept the existence of innate differences in ability, but conclude that, far from justifying inequality, this is all the more reason to compensate the, if you like, ‘cognitively disadvantaged’ for their innate deficiencies, differences which are, being innate, hardly something for which they can legitimately be blamed. 

Herrnstein and Murray reject this conclusion, but acknowledge it is compatible with their data. Thus, in an afterword to later editions, Murray writes: 

“If intelligence plays an important role in determining how well one does in life, and intelligence is conferred on a person through a combination of genetic and environmental factors over which that person has no control, the most obvious political implication is that we need a Rawlsian egalitarian state, compensating the less advantaged for the unfair allocation of intellectual gifts” (p554).[5]

Interestingly, Pinker’s notion of a ‘hereditarian left’, and the related concept of Bell Curve liberals, is not entirely imaginary. On the contrary, it used to be quite mainstream. 

Thus, it was the radical leftist post-war Labour government that imposed the tripartite system on schools in the UK in 1945, which involved allocating pupils to different schools on the basis of their performance in what was then called the 11-plus exam, taken by children at age eleven, which tested both ability and acquired knowledge. This was thought by leftists to be a fair system that would enable bright, able youngsters from deprived and disadvantaged working-class backgrounds to achieve their full potential.[6]

Indeed, while contemporary Cultural Marxists emphatically deny the existence of innate differences in ability as between individuals and groups, Marx himself laboured under no such delusion.

On the contrary, in his famous (plagiarized) aphorism, ‘From each according to his ability; to each according to his need’, Marx implicitly recognized that individuals differ in “ability”, and, given that, in the unrealistic communist utopia he envisaged, environmental conditions were ostensibly to be equalized, these differences he presumably conceived of as innate in origin. 

However, a distinction must be made here. While it is possible to justify economic redistributive policies on Rawlsian grounds, it is not possible to justify affirmative action.

Thus, one might well reasonably contend that the ‘cognitively disadvantaged’ should be compensated for their innate deficiencies through economic redistribution. Indeed, to some extent, most Western polities already do this, by providing welfare payments and state-funded, or state-subsidized, care to those whose cognitive impairment is such as to qualify as a disability and hence render them incapable of looking after or providing for themselves. 

However, we are unlikely to believe that such persons should be given entry to medical school such that they are one day liable to be responsible for performing heart surgery on us or diagnosing our medical conditions. 

In short, socialist redistribution is defensible – but affirmative action is definitely not! 

Reception and Readability 

The reception accorded ‘The Bell Curve’ in 1994 echoed that accorded another book that I have also recently reviewed, but that was published some two decades earlier, namely Edward O. Wilson’s Sociobiology: The New Synthesis.

Both were greeted with similar indignant moralistic outrage by many social scientists, who even employed similar pejorative soundbites (‘genetic determinism’, ‘reductionism’, ‘biology as destiny’) in condemning the two books. Moreover, in both cases, the academic uproar even spilled over into a mainstream media moral panic, with pieces appearing in the popular press attacking the two books. 

Yet, in both cases, the controversy focused almost exclusively on just a small part of each book – the single chapter in Sociobiology: The New Synthesis focusing on humans and the few chapters in ‘The Bell Curve’ discussing race. 

In truth, however, both books were massive tomes of which these sections represented only a small part. 

Indeed, due to their size, one suspects most critics never actually read the books in full for themselves, including, it seemed, most of those nevertheless taking it upon themselves to write critiques. This is what led to the massive disconnect between what most people thought the books said, and their actual content. 

However, there is a crucial difference. 

Sociobiology: The New Synthesis was a long book of necessity, given the scale of the project Wilson set himself. 

As I have written in my review of that latter work, the scale of Wilson’s ambition can hardly be exaggerated. He sought to provide a new foundation for the whole field of animal behaviour, then, almost as an afterthought, sought to extend this ‘New Synthesis’ to human behaviour as well, which meant providing a new foundation, not for a single subfield within biology, but for several whole disciplines (psychology, sociology, economics and cultural anthropology) that were formerly almost unconnected to biology. Then, in a few provocative sentences, he even sought to provide a new foundation for moral philosophy, and perhaps epistemology too. 

Sociobiology: The New Synthesis was, then, inevitably and of necessity, a long book. Indeed, given that his musings regarding the human species were largely (but not wholly) restricted to a single chapter, one could even make a case that it was too short – and it is no accident that Wilson subsequently extended his writings with regard to the human species to a book-length manuscript.

Yet, while Sociobiology was of necessity a long book, ‘The Bell Curve: Intelligence and Class Structure in America’ is, for me, unnecessarily overlong. 

After all, Herrnstein and Murray’s thesis was actually quite simple – namely that cognitive ability, as captured by IQ testing, is a major correlate of many important social outcomes in modern America. 

Yet they reiterate this point, for different social outcomes, again and again, chapter after chapter, repeatedly. 

In my view, Herrnstein and Murray’s conclusion would have been more effectively transmitted to the audience they presumably sought to reach had they been more succinct in their writing style and presentation of their data. 

Had that been the case then perhaps rather more of the many people who bought the book, and helped make it into an unlikely nonfiction bestseller in 1994, might actually have managed to read it – and perhaps even been persuaded by its thesis. 

For casual readers interested in this topic, I would recommend instead Intelligence, Race, And Genetics: Conversations With Arthur R. Jensen (which I have reviewed here, here and here). 

Endnotes

[1] For example, Francis Wheen, a professional damned fool and columnist for the Guardian newspaper (which two occupations seem to be largely interchangeable) claimed that: 

“The Bell Curve (1994) runs to more than 800 pages but can be summarised in a few sentences. Black people are more stupid than white people: always have been, always will be. This is why they have less economic and social success. Since the fault lies in their genes, they are doomed to be at the bottom of the heap now and forever” (Wheen 2000). 

In making this claim, Wheen clearly demonstrates that he has read few if any of those 800 pages to which he refers.

[2] Although their discussion of the evidence relating to the causes, genetic or environmental, of the black-white test score gap is extensive, it is not exhaustive. For example, J. Philippe Rushton, the author of Race, Evolution, and Behavior (reviewed here and here), argues that, despite the controversy their book provoked, Herrnstein and Murray actually didn’t go far enough on race, omitting, for example, any real discussion, save a passing mention in Appendix 5, of race differences in brain size (Rushton 1997). On the other hand, Herrnstein and Murray also did not mention studies that failed to establish any correlation between IQ and blood groups among African-Americans, studies interpreted as supporting an environmentalist interpretation of race differences in intelligence (Loehlin et al 1973; Scarr et al 1977). For readers interested in a more complete discussion of the evidence regarding the relative contributions of environment and heredity to the differences in IQ scores of different races, see my review of Richard Lynn’s Race Differences in Intelligence: An Evolutionary Analysis, available here.

[3] For example, some defendants in serious criminal cases have been accused of deliberately getting questions wrong on IQ tests in order to qualify as mentally subnormal when before the courts for sentencing, so as to be granted mitigation of sentence on this ground or, more specifically, to evade the death penalty.

[4] This may be because whites or Asians with such low IQs are more likely to have such impaired cognitive abilities because of underlying conditions (e.g. chromosomal abnormalities, brain damage) that handicap them over and above the deficit reflected in IQ score alone. On the other hand, blacks with similarly low IQs are still within the normal range for their own race. Therefore, rather than suffering from, say, a chromosomal abnormality or brain damage, they are relatively more likely to simply be at the tail-end of the normal range of IQs within their group, and hence normal in other respects.

[5] The term Rawlsian is a reference to political theorist John Rawls’ version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the original position, where they would be unaware, not only of their own race, sex and position in the socio-economic hierarchy, but also, most importantly for our purposes, of their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’.

[6] The tripartite system did indeed enable many working-class children to achieve a much higher economic status than their parents, although this was partly due to the expansion of the middle-class sector of the economy over the same time-period. It was also later Labour administrations that largely abolished the 11-plus system, not least because, unsurprisingly given the heritability of intelligence and personality, children from middle-class backgrounds tended to do better on it than did children from working-class backgrounds.

References 

Barrick & Mount (1991) The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology 44(1):1–26. 
Beckford (2008) Working classes ‘lack intelligence to be doctors’, claims academic. Daily Telegraph, 04 Jun 2008. 
Clark (2008) Higher social classes have significantly HIGHER IQs than working class, claims academic. Daily Mail, 22 May 2008. 
Kruger & Dunning (1999) Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77(6):1121–34. 
Loehlin et al (1973) Blood group genes and negro-white ability differences. Behavior Genetics 3(3):263–270. 
Rushton, J. P. (1997) Why The Bell Curve didn’t go far enough on race. In E. White (Ed.), Intelligence, Political Inequality, and Public Policy (pp. 119–140). Westport, CT: Praeger. 
Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population. Human Genetics 39(1):69–86. 
Wheen (2000) The ‘science’ behind racism. Guardian, 10 May 2000. 

John Dickson’s ‘Jesus: A Short Life’: Christian Apologetics Masquerading as History

John Dickson, Jesus: A Short Life (Oxford: Lion Books, 2012).

The edition of John Dickson’s book, ‘Jesus: A Short Life’, that I read, published in 2012, was subtitled ‘Jesus: A Short Life – The Historical Evidence’. However, I notice that the first edition of this book, published a few years earlier in 2008, seems to have omitted the part of the subtitle referring to “The Historical Evidence”.

This is actually quite fitting, because, despite this added subtitle, Dickson himself omits to include much historical evidence supporting the biographical details he presents in the book.

Instead, Dickson relies heavily on what is referred to as the ‘appeal to authority’ or ‘argumentum ab auctoritate’ fallacy – albeit with a touch of the ‘argumentum ad populum’ fallacy thrown in too for good measure.

Thus, he repeatedly insists that ‘all serious scholars agree’ on a certain aspect of Jesus’s biography, with the clear implication that this is reason enough for non-expert readers like myself to agree as well.

Unfortunately, he only very rarely actually takes the time to explain why all serious scholars supposedly agree on this aspect of Jesus’s life or present the actual evidence that has led the experts to agree.

Instead, he seems to imply that the reader should simply defer to expert opinion rather than taking the time to actually look at the evidence for themselves and make their own judgement.
 
For example, he observes that the claim that Jesus was publicly baptised by John the Baptist is “doubted by no one doing historical Jesus research” (p49).

However, he neglects to explain in the main body of his text why no serious scholar doubts this, or why the evidence is so compelling.

Only in an accompanying endnote does he bother to explain that the main reason all experts agree is that this episode supposedly satisfies what New Testament scholars refer to as the criterion of embarrassment.

In other words, because it seems to cast Jesus in a role subordinate to that of John, the opposite of the impression the biblical authors presumably intended to convey, it is hardly the sort of thing the gospel writers are likely to have invented (p137-8).

Incidentally, I am not entirely sure whether the so-called criterion of embarrassment is unambiguously satisfied with respect to the claim that Jesus was baptized by John the Baptist. After all, Jesus is portrayed as humble throughout the gospels and often adopts a subordinate role – for example, when he is described as washing the feet of his disciples (John 13:1–17).

Indeed, given that Jesus’s philosophy represents what Nietzsche called a slave morality, whereby what would usually be seen as a source of shame and embarrassment is instead elevated into a positive virtue, the entire concept of the criterion of embarrassment seems to be of dubious value, at least with respect to questions regarding the historical Jesus.

Thus, Jesus’s entire life, from his obscure origin in Nazareth, through his baptism by John, to the ultimate failure of his ministry and his ignominious death at the hands of the Romans, would seem to be an embarrassment from start to finish, at least for a figure who claimed to be a saviour and ‘Messiah’ who would free the Jews from subjugation at the hands of their Roman overlords and usher in a new Kingdom of Heaven. Yet, for Christians, all of this, far from being embarrassing, is reinterpreted as perverse proof of Jesus’s divinity and omnipotence.

In short, at least as applied to historical Jesus research, the so-called criterion of embarrassment seems to represent something of an embarrassment in and of itself.

Elsewhere, in rebutting the assertion of Richard Dawkins in The God Delusion that “it is even possible to mount a serious, though not widely supported, historical case that Jesus never lived at all”, Dickson again resorts to a combination of the argumentum ab auctoritate and argumentum ad populum fallacies, insisting “no one who is actually doing history thinks so” (p21).

Actually, although it remains very much a minority, maverick position, some researchers, who certainly regard themselves as “doing history”, have indeed championed the so-called ‘mythicist’ thesis that Jesus never existed, including, for example, Richard Carrier, Earl Doherty and Robert Price.

Perhaps Dickson regards these authors’ work as so worthless that they cannot be said to be truly “doing history” at all. If so, however, then this is an obvious, indeed almost a textbook, example of the ‘no true Scotsman’ fallacy.

Thus, Dickson asserts:

“Not only is Jesus’ non-existence never discussed in academic literature… but most experts agree that there are… ‘no substantial doubts about the general course of Jesus’ life’” (p10-11).

The actual evidence he cites, however, seems rather less than compelling.

Aside from the gospels themselves, one of the few other sources he cites is a contemporary letter, written by one Mara bar Serapion, referring to the Jews killing their “wise king”.

Dickson claims:

“There is a consensus among scholars that Mara bar Serapion‘s ‘wise king’ was none other than Jesus. It simply strains belief to imagine that there could have been two figures in first century Palestine fitting the description of Jew, law-giver, king and martyr by his own people” (p19).

Perhaps so – but unfortunately Jesus himself doesn’t fit the bill all that well either.

After all, Jesus may indeed have been ‘wise’. This, at least, is debatable.[1]

However, he certainly does not appear to have been a ‘king’, at least in the ordinary, familiar sense of the word.

Neither does he appear to have been killed by the Jews, as claimed of the figure described in Mara bar Serapion’s letter, but rather by the Romans. Crucifixion was, after all, a Roman, not a Jewish, method of execution.

Moreover, Dickson himself admits “Jesus of Nazareth was not the most revered religious figure of the period”, even in Palestine (p109). Might not these other religious gurus fit the bill better?

Moreover, although Dickson asserts “it simply strains belief to imagine that there could have been two figures in first century Palestine fitting the description”, there appears to be nothing in Mara bar Serapion’s letter that actually says anything about this “wise king” having resided “in first century Palestine”. This restriction seems to have been entirely an invention of Dickson’s own.

What then of Jewish religious leaders, or better still actual “kings” (in the familiar sense of this word) from other periods?

Or perhaps Mara bar Serapion was just mixed up and confusing the Jews with some other people.

Who Mara bar Serapion was referring to seems to be, at best, a mystery. Certainly, his letter hardly represents the definitive proof of Jesus’s historicity.[2]

Should we Trust the ‘Experts’?

The ‘argumentum ab auctoritate’, or appeal to authority, is perhaps a method of argumentation that naturally appeals to devout Christians. After all, they usually appeal to the ostensible authority, not of ‘experts’ and ‘reputable scholars’, but rather to that of God himself, of ‘holy scripture’, or of ‘the Word of God’.

Of course, appealing to the unanimous opinion of scholars in a given field is sometimes legitimate. If, for example, we do not have the time or the inclination to research the topic for ourselves, it is prudent to defer to majority opinion among qualified experts.

However, in a book subtitled “The Historical Evidence”, one is surely entitled to demand rather more.

Moreover, the field of study in which the experts are ostensibly deemed to be expert must itself be a reputable field of study in the first place.

Thus, if all ‘reputable homeopaths’ or all ‘reputable astrologers’ agree on a particular aspect of homeopathic or astrological theory, I am entitled to disagree, simply because the entire fields of homeopathy and astrology are pseudo-scientific. There is therefore simply no such thing as a ‘reputable homeopath’ or ‘reputable astrologer’ in the first place: all are no more than charlatans or professional damned fools.

Similarly, I submit, although the case is nowhere near as clear cut as with astrology or homeopathy, that there is reason not to trust the so-called ‘experts’ in the case of historical Jesus research.

This is because those who have chosen to devote their lives to the study of the life of Jesus have typically done so because they are themselves devout and committed Christians.

Given, then, that their whole philosophy of life is predicated on the existence of a figure of Jesus resembling the one described in the gospels, it is perhaps hardly surprising that they tend to conclude that the story in the gospels is more or less accurate.

Admittedly, unlike homeopaths and astrologers, many of these researchers have important-sounding professorships at apparently reputable, sometimes even prestigious, universities.

However, this is largely an anachronistic remnant of the origins of the European university system in medieval Christendom, when religious scholarship was a key function, perhaps the key function, of the university system.[3]

Nevertheless, this should not fool us into mistaking them for serious, secular historians.[4]

Thus, most researchers investigating the historical Jesus, at least in universities, seem to come from backgrounds, not in history, but rather in theology, seminaries and Bible studies.

Few, then, seem to have spent any time researching other areas of history, and they are therefore presumably unfamiliar with the standards of proof demanded by mainstream historians researching other periods of history or other historical questions.

Thus, the tools used by researchers into the historical Jesus to judge the veracity of gospel claims (e.g. the criterion of embarrassment, the criterion of multiple/independent attestation) do not seem to be widely used in other areas of history when assessing the trustworthiness of sources – or, at any rate, the same terms are not used.

One finds these terms only, so far as I am aware, in the indexes of books dealing with historical Jesus studies – not in general books on methods of historical research, nor in works of history dealing with other times and places and other historical questions.

Certainly, analogous principles are employed, but the standards of proof seem, in my opinion, to be rather higher.[5]

I would have preferred it if Dickson had announced at the outset that he was a Christian, in the same way that politicians and lawmakers are expected to ‘declare an interest’ in a matter before they venture an opinion during a debate, let alone cast a vote regarding a decision.[6]

However, although there is no such explicit declaration in the opening paragraphs, Dickson is, to his credit, open about his own religious belief. Nevertheless, he insists that he approaches the facts of Jesus’s life as an historian not as a Christian.

Thus, Dickson insists:

“The presupposition that the Bible is God’s word and therefore entirely trustworthy is perfectly arguable at the philosophical level” (p13).

To play Dickson at his own game of appealing to expert opinion in lieu of formulating an actual substantive argument, I am not sure how many contemporary philosophers would agree with this statement. Certainly not Daniel Dennett, for one.

Nevertheless, Dickson insists in the same paragraph:

“I intend to approach the New Testament as an entirely human document” (p13).

However, we surely have reason to doubt whether a devout Christian, whose beliefs are surely at the very core of their philosophy of life, can ever perform the sort of ‘mental gymnastics’ necessary to approach a topic such as the life of Jesus with the necessary disinterest, scholarly detachment and objectivity required of a serious historian.

The Gospels as an Historical Source

At the heart of Dickson’s account of the life of Jesus is his contention that the gospels themselves are legitimate historical sources in their own right.

Thus, they are, he argues, more trustworthy than the apocrypha, because the latter generally date from a later period and are hence less contemporaneous with the events they describe (p25). This is, indeed, according to Dickson, the main reason why the latter were rejected as non-canonical in the first place.

On researching the issue, I discovered that it does indeed seem to be generally true that the canonical gospels date back to an earlier time-frame than do the New Testament apocrypha. This is perhaps the one useful thing I learnt from reading Dickson’s book – since many skeptic and atheist authors do seem to imply that the choice of which books to include in the New Testament canon was either entirely arbitrary or else reflective of the theological or political agendas of the later Christian leaders responsible for the decision.

True, Dickson acknowledges, the gospel writers were Christians, and sought to convince readers of the divinity of Jesus – but all ancient sources, he observes, have some sort of agenda, and there is therefore, he argues, no reason to give any less credence to Christian sources than to any others.

This is only partly true.

Dickson is right in so far as he asserts that most, if not all, sources, ancient or indeed modern, have some sort of bias. Thus, we should not regard any source as completely infallible, in the same way that Christians have traditionally regarded the Bible as the infallible ‘Word of God’.

But, if no source is completely trustworthy, this does not mean that all sources are equally trustworthy.

On the contrary, some sources are much more accurate and reliable than others, and some are completely worthless as history.

The Christian gospels, with their plainly ahistorical content and frankly preposterous elements (e.g. miracles, the resurrection), are clearly unreliable.

Are there no other contemporary sources on the life of Jesus besides the Christian gospels to provide balance? What about anti-Christian writings by adherents of other faiths?

Moreover, call me naïve, but from a book subtitled “the Historical Evidence”, I expected something more than another repetition of the gospel stories so many of us were so cruelly subjected to in Sunday School from earliest infancy – albeit this time supplemented with occasional references to Josephus and, of course, ‘the unanimous opinion of all reputable scholars’.

Dickson therefore concludes:

“History… demonstrates that the story at the heart of the Gospels is neither a myth nor fraud, but a broadly credible account of a short first century life” (p129).

However, the primary (indeed virtually the only) source he has used to construct this so-called ‘history’ is the gospels themselves. No other sources (e.g. Josephus) provide any details whatever beyond the faintest of outlines.

To establish that “the story at the heart of the gospels” is “a broadly credible account” surely requires an independent source external to the gospels themselves against which to judge their veracity.

To claim that we can be certain of the gospels’ historical veracity because they are consistent with all the contemporary historical sources available simply won’t do when the only contemporary historical sources available are the gospels themselves.

This is simply to state the tautological truism that the gospels are consistent with themselves.

Jesus’s Birthplace

Actually, however, the gospels are not entirely consistent with themselves – or at least not with one another.

Take, for example, the matter of Jesus’s birthplace.

Against the arguments of skeptics such as Richard Dawkins, Dickson argues in favour of Bethlehem as the birthplace of Jesus, in accordance with Christian tradition.

Dismissing the claim that the Gospels of Luke and Matthew only relocated the nativity to Bethlehem so as to accord with Old Testament prophecy (Micah 5:2), Dickson demands petulantly:

“What is the evidence that Matthew and Luke put him there out of some necessity to make him look messianic? None. The argument dissolves” (p37).

Instead, Dickson argues:

“Just as important as the fact that Bethlehem is not mentioned in Mark or John is the fact that it is mentioned in Luke and Matthew. Surely the silence of two of the gospels cannot be louder than the affirmation of the other two” (p37).

Yet Dickson does not mention that the two gospels manage to relocate Jesus to Bethlehem by entirely different and mutually incompatible means. Thus, Matthew has the family already based in Bethlehem, fleeing only to escape the wrath of Herod; whereas Luke has them merely visiting Bethlehem in order to register for a census.

Nor does he mention that both stories are historically doubtful.

Whereas there is simply no evidence for the so-called Massacre of the Innocents outside of the Gospel of Matthew itself, the story in Luke is positively contradicted by the historical record.

Thus, the first Roman census of the region did not occur until AD 6, after the death of King Herod. Yet, just a couple of pages earlier, Dickson himself has concluded:

“The Gospels of Matthew and Luke agree that Jesus was born while Herod the Great, the Rome appointed king over Palestine, was still alive… This leads to the broad consensus among scholars that Jesus was born around 5 BC” (p35).

At any rate, a census, even if it occurred, would apply only to Roman citizens, not Jews in Galilee, then a client state not under direct Roman rule. Moreover, even Roman citizens were not required to return to the hometowns of their remote ancestors merely for the purpose of a census – an obviously preposterous proposition given the expense and difficulty of long-distance travel during this time-period and the huge disruption and chaos such a requirement would impose (see The Unauthorized Version: Truth and Fiction in the Bible: p27-32).

Finally, given that Dickson acknowledges that Jesus was born into obscurity and attained what little prominence he did achieve within his own lifetime only as an adult, anything about his birth is likely to be legendary, invented long after the fact. At the time of his birth, on the other hand, hardly anyone was likely paying much attention.

Dickson is therefore right to conclude “one cannot prove that Jesus was born in Bethlehem”. However, given the incentive to make Jesus’s birth accord with Old Testament prophecy (Micah 5:2), the apparent embarrassment associated with his originating in Nazareth (John 1:46), and the contradictions and ahistorical elements in the accounts given of how ‘Jesus of Galilee’ (also known as ‘Jesus of Nazareth’) could ever have ended up being born in Bethlehem – seventy or eighty miles from Galilee and Nazareth – the weight of evidence is surely strongly against the notion.

Supernatural Events, Miracles and the Resurrection

Yet perhaps the strongest evidence against the notion that the Gospels can ever be considered reliable historical sources is the fact that they contain many supernatural elements (e.g. miracles).

However, Dickson, being a Christian, obviously does not see this as a problem. Instead, he maintains:

“The best sources and methods employed by the leading scholars in the field produce the unexpected – and, for some, embarrassing – conclusion that the paradoxa erga [i.e. miracles] are, as Professor James Dunn admits ‘one of the most widely attested and firmly established of the historical facts with which we have to deal’” (p77).

In this passage, Dickson admits that this conclusion is “for some, embarrassing” (p77). However, he does not mention to whom it is supposedly embarrassing.

Yet it ought to be embarrassing, not, as implied by Dickson, to skeptics, rationalists and atheists, but rather to biblical scholars themselves – since, if indeed “the best sources and methods employed by the leading scholars in the field” suggest that events such as the ‘feeding of the 5,000’, the turning of water into wine and Jesus healing lepers by touching them are “firmly established… historical facts”, then this seems to suggest that there is something so fundamentally wrong with the “sources and methods employed by leading scholars” in the field that the entire field is brought into disrepute.

Of course, the reliable historical attestation of Jesus performing miracles could be interpreted differently. It might suggest simply that Jesus performed conjuring tricks, involving psychological suggestion and other chicanery of the sort employed by contemporary faith healers and similar charlatans – which, together with the well-documented placebo effect, would explain the similarly “widely attested and firmly established” eye-witness testimony regarding the ostensible miracles of these modern-day charlatans and con artists.

Similarly, resorting again to the argumentum ab auctoritate, Dickson lists various scholars who have investigated the historicity of the resurrection, claiming “All of these scholars agree that there is an irreducible core to the resurrection story that cannot be explained away as pious legend and wholesale deceit” (p110) – because “from the very beginning, numbers of men and women claimed to have seen Jesus alive after death” and that this is “a fact of history” (p111-2).

Of course, large numbers of men and women also claim to have been abducted by aliens. However, most of us do not regard this as evidence for the occurrence of alien abductions so much as it is evidence for the unreliability of eyewitness testimony and either the deceit or delusion of those making the claims.

Conclusions

I have complained that I would have preferred it had author John Dickson admitted at the beginning of his book, or, better yet, on the back cover, that he is a devout Christian and hence far from impartial with regard to the matter of the life of Jesus, just as politicians are expected to ‘declare an interest’ in a matter before casting their ballots or participating in a Parliamentary debate.

Here is my own belated disclaimer: I am an atheist.

However, I make this belated disclaimer not so much to ‘declare an interest’ as to declare a lack of interest, or rather a disinterest.

I am obviously interested in the subject of the historical Jesus – otherwise I would not have taken the time to read Dickson’s book, let alone to write this review.

However, unlike Christian readers or researchers, I have no vested interest one way or another regarding the biography of Jesus. It does not challenge my fundamental beliefs whether Jesus existed, didn’t exist, lived a life roughly similar to that described in the gospels or a life very different.

Certainly, if evidence of the occurrence of miracles were discovered, this would challenge my beliefs, since it would suggest that the laws of physics as they are currently understood are somehow mistaken, incomplete or capable of temporary suspension on demand.

However, given that it is inconceivable that miracles could ever be conclusively proven to have occurred some two millennia after they are alleged to have happened, this is not really a problem.

Apart from this, I am in principle entirely open to the possibility that – miracles aside – the rest of the gospel account is largely accurate as a description of Jesus’s life. However, on reading Dickson’s account of the “historical evidence”, it just seems to me that the evidence isn’t really there.

Certainly it is possible that (excepting miracles, virgin births, resurrections and other such patent nonsense) Jesus’s life did indeed take roughly the same path as that described in the gospels.

Moreover, since the canonical gospels, though obviously unreliable, do indeed seem to be the earliest surviving detailed accounts that we have of the life of Jesus, I am even prepared to tentatively concede that we must provisionally accept this as the most likely scenario.

However, it also seems quite possible that the course of Jesus’s life was very different and that the gospel stories themselves are largely mythical and invented after the fact.

It certainly seems probable that there existed a religious leader called Jesus who lived and was crucified by the Romans at around the time and place he is alleged to have lived and died and who provided a basis, howsoever minimal, for the stories and myths that subsequently came to be told about him.

However, given his relative obscurity in his own lifetime, I suspect that little can ever be known about him today, some two millennia later.

Moreover, even the most extreme form of the so-called ‘mythicist’ thesis, namely that the gospel stories are entirely mythical and that no person called Jesus upon whom the stories were based ever existed in the real world, hardly seems to be the sort of preposterous crank theory, roughly on a par with holocaust denial, that it is portrayed as being by Dickson and other Christian apologists.[7]

It just seems to me that there is so little reliable contemporary historical evidence regarding the life of Jesus that even extreme positions remain tenable – or at least cannot be definitively disproven. This is why attempted reconstructions of the historical Jesus are so notoriously divergent.

Indeed, there seems to be a fundamental contradiction in Dickson’s thesis.

On the one hand, he contends, surely rightly, that Jesus was, during his own lifetime, only, as Dickson himself puts it, quoting the title of another book about the historical Jesus, ‘a marginal Jew’, who achieved prominence and historical importance only after his death.

However, at the same time, Dickson contends that there is abundant reliable evidence regarding the life of this marginal Jew. Yet, if the Jew in question was so marginal, one would hardly expect to find abundant documentary evidence regarding his life.

In short, perhaps the reason so few serious secular scholars and historians have studied the life of Jesus and the field remains the preserve of ‘true believers’ like Dickson is precisely because there is so little to study in the first place.[8]

Only those with an a priori emotional commitment to belief in Jesus as ‘messiah’ (or sometimes an a priori commitment to disbelief in this same concept), precisely those whose emotional commitment renders them unfit to undertake a disinterested and objective investigation, take it upon themselves to embark on the project in the first place.

Endnotes

[1] Actually, at least in so far as the accounts of his teachings as reported in the gospels are accurate, Jesus’s teachings do not appear to have been at all ‘wise’, in my opinion. On the contrary, they appear quite foolish. Thus, advising people to turn the other cheek when they are victims of assault (Matthew 5:39-42; Luke 6:27-31), and to give up their worldly possessions (Mark 10:21; Luke 14:33), seems to me not ‘wise’ counsel, but rather very foolish advice. Corroboration for this interpretation is found in Jesus’s ultimate fate: if he had indeed been ‘wise’, perhaps he would not have ended up nailed to a tree.

[2] Another supposed early textual reference to Jesus sometimes cited by New Testament scholars, but curiously omitted by Dickson, seems similarly spurious. This is the reference by the Roman historian Suetonius in his Lives of the Twelve Caesars to disturbances during the reign of Claudius supposedly conducted at the instigation of one ‘Chrestus’. Quite apart from the fact that ‘Chrestos’ was in fact a common name at the time, at least among pagans, if not among Jews, the word is also the same as, or very similar to, the Greek word ‘Khristós’ (Χριστός), which is itself the translation of the Hebrew word ‘Messiah’. Given that it was widely anticipated among the Jews that a ‘Messiah’ would appear among them, overthrow Roman rule and restore independence in Judea, and that Jesus was only one of many claimants to this mantle, this reference to a ‘Chrestus’ could easily have referred to one of these other candidates for the title.

[3] The fact that the western university system traces its origins to medieval Christendom, when Christian dogma was almost unopposed, results in the irony that, whereas many newer universities often no longer even bother with courses in theology and Bible studies, the older, and hence generally more prestigious, universities often maintain a large number of professorships in these fields, and have long-established, entrenched and well-endowed Schools of Divinity.

[4] The idea that simply because someone has an impressive-sounding professorship at a prestigious university this must mean they are authoritative is, of course, another version of the argumentum ab auctoritate, or appeal to authority, that features so heavily in Dickson’s book, and the criticism of which is a major theme of this review. In fact, however, today as in the medieval age, there are many tenured and well-credentialled professors at ostensibly prestigious universities who are little more than ‘professional damned fools’. In a former age they were mostly theologians; today, meanwhile, they are mostly professors of women’s studies, gender studies, cultural studies, black studies, and other aspects of what has been aptly termed the ‘grievance studies’ industry. These fields, indeed, arguably represent the modern ‘cultural Marxist’ equivalent of what theology represented in the medieval age, and are today even more entrenched in academia.

[5] Of course, this may depend on the area of history in question. Obviously, sources are more abundant for certain historical periods than for others. Thus, as a crude generalization, ancient history tends to be more speculative than modern history.

[6] Had Dickson begun with a declaration to this effect, then, I must confess, I would probably never have bothered to read his book in the first place. This might perhaps be dismissed as a prejudice on my part. However, as explained above, I simply do not believe that a devout Christian can ever be capable of investigating the historical Jesus with the necessary scholarly detachment, disinterest and objectivity required for such an endeavour.

[7] Indeed, it is bizarre to read Christian apologists like Dickson pouring scorn on mythicism as a kind of crankish, kookish conspiracy theory or form of pseudo-scholarship, while at the same time insisting that miracles are among “the most widely attested and firmly established of the historical facts” about Jesus (p77), and that “there is an irreducible [historical] core to the resurrection story that cannot be explained away as pious legend and wholesale deceit” (p110). Is Dickson really trying to have us believe that the idea that Jesus never existed is more preposterous than the idea that he cured lepers by touching them and later rose from the dead?
I am reminded of the Archbishop of Canterbury’s 2006 Easter sermon, in which he dismissed The Da Vinci Code book and film, then just released, as a preposterous conspiracy theory (as indeed it was), contrasting it with what he had the audacity to call the ‘prosaic reality’ – the latter presumably a reference to the gospel accounts, with all their virgin birth, resurrection and miracle stories. The phrases ‘the pot calling the kettle black’ and ‘people in glass houses shouldn’t throw stones’ very much spring to mind in both these cases.

[8] These ‘true believers’ include, it must be acknowledged, not only Christians like Dickson, but also many virulently anti-Christian cranks and conspiracy theorists, who often seemingly have almost as strong an a priori commitment to their own pet theories (e.g. mythicism) as the Christians do to the veracity of the gospel stories.

John R Baker’s ‘Race’: “A Reminder of What Was Possible Before the Curtain Came Down”

‘Race’, by John R. Baker, Oxford University Press, 1974.

John Baker’s ‘Race’ represents a triumph of scholarship across a range of fields, including biology, ancient history, archaeology, history of science, psychometrics and anthropology.

First published by Oxford University Press in 1974, it also marks a watershed in Western thought – the last time a major and prestigious publisher put its name to an overtly racialist work.

As science writer Marek Kohn writes:

“Baker’s treatise, compendious and ponderous, is possibly the last major statement of traditional race science written in English” (The Race Gallery: p61).

Inevitably for a scientific work first published over forty years ago, ‘Race’ is dated. In particular, the DNA revolution in population genetics has revolutionized our understanding of the genetic differences and relatedness between different human populations.

Lacking access to such data, Baker had only indirect phenotypic evidence (i.e. the morphological similarities and differences between different peoples), as well as historical and geographic evidence, with which to infer such relationships and hence construct his racial phylogeny and taxonomy.

Phenotypic similarity is obviously a less reliable method of determining the relatedness between groups than is provided by genome analysis, since there is always the problem of distinguishing homology from analogy and hence misinterpreting a trait that has independently evolved in different populations as evidence of relatedness.[1]

However, I found only one case of genetic studies decisively contradicting Baker’s conclusions. Thus, whereas Baker classes the Ainu People of Japan as Europid (p158; p173; p424; p625), recent genetic studies suggest that the Ainu have little or no genetic affinities to Caucasoid populations and are most closely related to other East Asians.[2]

On the other hand, however, the absence of genetic data from Baker’s book means that, unusually for a scientific work, in the material he does cover, ‘Race’ scarcely seems to have dated at all. This is because the primary focus of Baker’s book – namely, morphological differences between races – is a field of study that has become politically suspect and in which new research has now all but ceased.[3]

Yet in the nineteenth- and early-twentieth century, when the discipline of anthropology first emerged as a distinct science, the study of race differences in morphology was the central focus of the entire science of anthropology.

Thus, Baker’s ‘Race’ can be viewed as the final summation of the accumulated findings of the ‘old-style’ physical anthropology of the nineteenth and early-twentieth centuries, published at the very moment this intellectual tradition was in its death throes.

Accessibility

Baker’s ‘Race’ is indeed a magnum opus. Unfortunately, however, at over 600 pages, embarking on reading ‘Race’ might seem almost like a lifetime’s work in and of itself.

Not only is it a very long book, but, in addition, much of the material, particularly on morphological race differences and their measurement, is highly technical, and will be readily intelligible only to the dwindling band of biological anthropologists who, in the genomic age, still study such things.

This inaccessibility is exacerbated by the fact that Baker does not use endnotes, except for his references, and only very occasionally uses footnotes. Instead, he includes even technical and peripheral material in the main body of his text, but indicates that material is technical or peripheral by printing it in a smaller font-size.[4]

Baker’s terminology is also confusing.[5] He prefers the ‘-id’ suffix to the more familiar ‘-oid’ and ‘-ic’ (e.g. ‘Negrid’ and ‘Nordid’ rather than ‘Negroid’ and ‘Nordic’) and eschews the familiar terms Caucasian or Caucasoid, on the grounds that:

“The inhabitants of the Caucasus region are very diverse and very few of them are typical of any large section of Europids” (p205).

However, his own preferred alternative term, ‘Europid’, is arguably equally misleading as it contributes to the already common conflation of Caucasian with white European, even though, as Baker is at pains to emphasize elsewhere in his treatise, populations from the Middle East, North Africa and even the Indian subcontinent are also ‘Europid’ (i.e. Caucasoid) in Baker’s judgement.

In contrast, the term Caucasoid, or even Caucasian, causes little confusion in my experience, since it is today generally understood as a racial term and not as a geographical reference to the Caucasus region.[6]

At any rate, a similar criticism could surely be levelled at the term ‘Mongoloid’ (or, as Baker prefers, ‘Mongolid’), since Mongolian people are similarly quite atypical of other East Asian populations. Despite the brief ascendancy of the Mongol Empire, and its genetic impact (as well as that of previous waves of conquest by horse peoples of the Eurasian Steppe), the Mongols were formerly a rather marginal people confined to the arid fringes of the indigenous home range of the so-called Mongoloid race, which had long been centred in China, the self-styled Middle Kingdom.[7]

Certainly, the term ‘Caucasoid’ makes little etymological sense. However, this is also true of a lot of words which we nevertheless continue to make use of. Indeed, since all words change in meaning over time, the original meaning of a word is almost invariably different to its current accepted usage.[8]

Yet we continue to use these words so as to make ourselves intelligible to others, the only alternative being to invent an entirely new language all of our own which only we would be capable of understanding.

Unfortunately, however, too many racial theorists, Baker included, have insisted on creating entirely new racial terms of their own coinage, or sometimes entire new lexicons, which not only causes confusion among readers, but also leads the casual reader to underestimate the actual degree of substantive agreement between different authors, who, though they use different terms, often agree regarding both the identity of, and relationships between, the major racial groupings.[9]

Historical Focus

Another problem is the book’s excessive historical focus.

Judging the book by its contents page, one might imagine that Baker’s discussion of the history of racial thought is confined to the first section of the book, titled “The Historical Background” and comprising four chapters that total just over fifty pages.

However, Baker acknowledges in the opening page of his preface that:

“Throughout this book, what might be called the historical method has been adopted as a matter of deliberate policy” (p3).

Thus, in the remainder of the book, Baker continues to adopt an historical perspective, briefly charting the history behind the discovery of each concept, archaeological discovery, race difference or method of measuring race differences that he introduces.

In short, it seems that Baker is not content with writing about science; he wants to write history of science too.

A case in point is Chapter Eight, which, despite its title (“Some Evolutionary and Taxonomic Theories”), actually contains very little on modern taxonomic or evolutionary theory, or even what would pass for ‘modern’ when Baker wrote the book over forty years ago.

Instead, the greater part of the chapter is devoted to tracing the history of two theories that were, even at the time Baker was writing, already wholly obsolete and discredited (namely, recapitulation theory and orthogenesis).

Let me be clear, Baker himself certainly agrees that these theories are obsolete and discredited, as this is his conclusion at the end of the respective sections devoted to discussion of these theories in his chapter on “Evolutionary and Taxonomic Theories”.

However, this only raises the question of why Baker chooses to devote so much space in this chapter to discussing these theories in the first place, given that both theories are discredited and also of only peripheral relevance to his primary subject-matter, namely the biology of race.

Anyone not interested in these topics, or in history of science more generally, is well advised to skip the majority of this chapter.

The Historical Background

Readers not interested in the history of science, and concerned only with contemporary state-of-the-art science (or at least the closest an author writing in 1974 can get to modern state-of-the-art science) may also be tempted to skip over the whole first section of the book, entitled, as I have said, “The Historical Background”, and comprised of four chapters or, in total, just over fifty pages.

These days, when authoring a book on the biology of race, it seems to have become almost de rigueur to include an opening chapter, or chapters, tracing the history of race science, and especially its political misuse during the nineteenth and early twentieth centuries (e.g. under the Nazis).[10]

The usual reason for including these chapters is for the author or authors to thereby disassociate themselves from the earlier supposed misuse of race science for nefarious political purposes, and emphasize how their own approach is, of course, infinitely more scientific and objective than that of their sometimes less than illustrious intellectual forebears.

However, Baker’s discussion of “The Historical Background” is rather different, being refreshingly short on the disclaimers, moralistic grandstanding and benefit-of-hindsight condemnations that one usually finds in such potted histories.

Instead, Baker strives to give all views, howsoever provocative, a fair hearing in as objective and sober a tone as possible.[11]

Only Lothrop Stoddard, strangely, is dismissed altogether. The latter is, for Baker, an “obviously unimportant” thinker, whose book “contains nothing profound or genuinely original” (p58-9).

Yet this is perhaps unfair. Whatever the demerits of Stoddard’s racial taxonomy (“oversimplified to the point of crudity,” according to Baker: p58), Stoddard’s geopolitical and demographic predictions have proven prescient.[12]

Overall, Baker draws two general conclusions regarding the history of racial thought in the nineteenth and early twentieth century.

First, he observes how few of the racialist authors whom he discusses were anti-Semitic. Thus, Baker reports:

“Only one of the authors, Lapouge, strongly condemns the Jews. Treitschke is moderately anti-Jewish; Chamberlain, Grant and Stoddard mildly so; Gobineau is equivocal” (p59).

The rest of the authors whom he discusses evince, according to Baker, “little or no interest in the Jewish problem”, the only exception being Friedrich Nietzsche, who is “primarily an anti-egalitarian, but [who] did not proclaim the inequality of ethnic taxa”, and who, in his comments regarding the Jewish people, or at least those quoted by Baker, is positively gushing in his praise.

Yet anti-Semitism often goes hand-in-hand with philo-Semitism. Thus, both Nietzsche and Count de Gobineau indeed wrote passages that, at least when quoted in isolation, seem highly complimentary regarding the Jewish people. However, it is well to bear in mind that Hitler did as well, the latter writing in Mein Kampf:

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self-preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof” (Mein Kampf, Manheim translation).[13]

Thus, as a character from a Michel Houellebecq novel observes:

“All anti-Semites agree that the Jews have a certain superiority… If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead” (Platform: p113).

Baker’s second general observation is similarly curious, namely that:

“None of the authors mentioned in these chapters claims superiority for the whole of the Europid race: it is only a subrace, or else a section of the Europid race not clearly defined in terms of physical anthropology, that is favoured” (p59).

In retrospect, this seems anomalous, especially given that the so-called Nordic race, on whose behalf racial supremacy was most often claimed, actually came relatively late to civilization, which began in the Middle East, North Africa and South Asia, arriving in Europe only with the Mediterranean civilizations of Greece and Rome, and in Northern Europe later still.

However, this focus on the alleged superiority of certain European subraces rather than Caucasians as a whole likely reflects the fact that, during the time period in which these works were written, European peoples and nations were largely in competition and conflict with other European peoples and nations.

Only in European overseas colonies were Europeans in contact and conflict with non-European races, and, even here, the main obstacle to imperial expansion was, not so much the opposition of the often primitive non-European races whom the Europeans sought to colonize, but rather that of rival colonizers from other European nations.

Therefore, it was the relative superiority of different European populations which was naturally of most concern to Europeans during this time period.

In contrast, the superiority of the Caucasian race as a whole was of comparably little interest, if only because it was something that these writers already took very much for granted, and hence hardly worth wasting ink or typeface over.

The Rise of Racial Egalitarianism

There are two curious limitations that Baker imposes on his historical survey of racial thought. First, at the beginning of Chapter Three (‘From Gobineau to Houston Chamberlain’), he announces:

“The present chapter and the next [namely, those chapters dealing with the history of racial thinking from the mid-nineteenth century up until the early-twentieth century] differ from the two preceding ones… in their more limited scope. They are concerned only with the growth of ideas that favoured belief in the inequality of ethnic taxa or are supposed – rightly or wrongly – to have favoured this belief” (p33).

Given that I have already criticised ‘Race’ as overlong, and as having an excessive historical focus, I might be expected to welcome this restriction. However, Baker provides no rationale for this self-imposed restriction.

Certainly, it is rare, and enlightening, to read balanced, even sympathetic, accounts of the writings of such infamous racialist thinkers as Gobineau, Galton and Chamberlain, whose racial views are today usually dismissed as so preposterous as hardly to merit serious consideration. Moreover, in the current political climate, such material even acquires a certain allure of ‘the forbidden’.

However, thinkers championing racial egalitarianism have surely proven more influential, at least in the medium-term. Yet such enormously influential thinkers as Franz Boas and Ashley Montagu pass entirely unmentioned in Baker’s account.[14]

Moreover, the intellectual antecedents of Nazism have already been extensively explored by historians. In contrast, however, the rise of the dogma of racial equality has passed largely unexamined, perhaps because to examine its origins is to expose the weakness of its scientific basis and its fundamentally political origins.[15]

Yet the story of how the theory of racial equality was transformed from a maverick, minority opinion among scientists and laypeople alike into a sacrosanct contemporary dogma which a person, scientist or layperson, can question only at severe cost to their career, livelihood and reputation is surely one worth telling.

The second restriction that Baker imposes upon his history is that he concludes it, prematurely, in 1928. He justifies closing his survey in this year on the grounds that this date supposedly:

“Marks the close of the period in which both sides in the ethnic controversy were free to put forward their views, and authors who wished to do so could give objective accounts of the evidence pointing in each direction” (p61).

Yet this cannot be entirely true, for, if it were, then Baker’s own book could never have been published – unless, of course, Baker regards his own work as something other than an “objective account of the evidence pointing in each direction”, which seems doubtful.

Certainly, the influence of what is now called political correctness is to be deplored for its impact on science, university appointments, the allocation of research funds and the publishing industry. However, there has surely been no abrupt watershed but rather a gradual closing of the western mind over time.

Thus, it is notable that other writers have cited dates a little later than that quoted by Baker, often coinciding with the defeat of Nazi Germany and exposure of the Nazi genocide, or sometimes the defeat of segregation in the American South.

Indeed, not only was this process gradual, it has also proceeded apace in the years since Baker’s ‘Race’ first came off the presses, such that today such a book would surely never have been published in the first place, certainly not by as prestigious a publisher as Oxford University Press (who, surely not uncoincidentally, soon gave up the copyright).[16]

Moreover, Baker is surely wrong to claim that it is impossible:

“To follow the general course of controversy on the ethnic problem, because, for the reason just stated [i.e. the inability of authors of both sides to publicise their views], there has been no general controversy on the subject” (p61).

On the contrary, the issue remains as incendiary as ever, with the bounds of acceptable opinion seemingly ever narrowing and each year a new face falling before the witch hunters of the contemporary racial inquisition.

Biology

Having dealt in his first section with what he calls “The Historical Background”, Baker next turns to what he calls “The Biological Background”. He begins by declaring, rightly, that:

“Racial problems cannot be understood by anyone whose interests and field of knowledge stop short at the limit of purely human affairs” (p3).

This is surely true, not just of race, but of all issues in human biology, psychology, sociology, anthropology and political science, as the recent rise of evolutionary psychology attests. Indeed, Baker even coins a memorable and quotable aphorism to this effect, when he declares:

“No one knows Man who knows only Man” (p65).

However, Baker sometimes takes this thinking rather too far, even for my biologically-inclined tastes.

Certainly, he is right to emphasise that differences among human populations are analogous to those found among other species. Thus, his discussion of racial differences among our primate cousins is of interest, but also somewhat out-of-date.[17]

However, his intricate and fully illustrated nine-page description of race differences among the different subspecies of crested newt stretched the patience of this reader (p101-109).

Are Humans a Single Species?

Whereas Baker’s seventh chapter (“The Meaning of Race”) discusses the race concept, the preceding two chapters deal with the taxonomic class immediately above that of race, namely ‘species’.

For sexually-reproducing organisms, ‘species’ is usually defined as the largest group of organisms capable of breeding with one another and producing fertile offspring in the wild.

However, as Baker explains, things are not quite so simple.

For one thing, over evolutionary time, one species transforms into another gradually, with no abrupt dividing line where one species suddenly becomes another (p69-72). Hence the famous paradox: ‘Which came first: the chicken or the egg?’

Moreover, in respect of extinct species, it is often impossible to know for certain whether two ostensible ‘species’ interbred with one another (p72-3). Therefore, in practice, the fossils of extinct organisms are assigned to either the same or different species on morphological criteria alone.

This leads Baker to distinguish different species concepts. These include:

  • “Species in the paleontological sense” (p72-3);
  • “Species in the morphological sense” (p69-72); and
  • “Species in the genetical sense”, i.e. as defined by the criterion of interfertility (p72-80).

On purely morphological criteria, Baker questions humanity’s status as a single species:

“Even typical Nordids and typical Alpinids, both regarded as subraces of a single race (subspecies), the Europid, are very much more different from one another in morphological characters—for instance in the shape of the skull—than many species of animals that never interbreed with one another in nature, though their territories overlap” (p97).

Thus, later on, Baker claims:

“Even a trained anatomist would take some time to sort out correctly a mixed collection of the skulls of Asiatic jackals (Canis aureus) and European red foxes (Vulpes vulpes), unless he had made a special study of the osteology of the Canidae; whereas even a little child, without any instruction whatever, could instantly separate the skulls of Eskimids from those of Lappids” (p427).

That morphological differences between human groups do indeed often exceed those between closely-related but non-interbreeding species of non-human animal has recently been quantitatively confirmed by Vincent Sarich and Frank Miele in their book, Race: The Reality of Human Differences.

However, even if one defines ‘species’ strictly by the criterion of interfertility (i.e. in Baker’s terminology, “species in the genetical sense”) matters remain less clear than one might imagine.

For one thing, there is the phenomenon of ring species, such as the herring gull and lesser black-backed gull.

These two ostensible species (or subspecies), both found in the UK, do not interbreed with one another, but each does interbreed with intermediaries that, in turn, interbreed with the other, such that there is some indirect gene-flow between them. Interestingly, the species ranges of the different intermediaries form a literal ring around the Arctic, such that genes will travel around the Arctic before passing from lesser black-backed gull to herring gull or vice versa (p76-79).[18]

Indeed, even the ability to produce fertile offspring is a matter of degree. Thus, some pairings produce fertile offspring only rarely.

For example, often, Baker reports, “sterility affects [only] the heterogametic sex [i.e. the sex with two different sex chromosomes]” (p95). Thus, in mammals, sterility is more likely to affect male offspring. Indeed, this pattern is so common that it even has its own name, being known as Haldane’s Rule, after the famous Marxist biologist JBS Haldane who first noted it.

Other times, Baker suggests, interfertility may depend on the sex of the respective parents. For example, Baker suggests that, whereas ewes may sometimes successfully reproduce with he-goats, rams may be unable to successfully reproduce with she-goats (p95).[19]

Moreover, the fertility of offspring is itself a matter of degree. Thus, Baker reports, some hybrid offspring are not interfertile with one another, but can reproduce with one or other of the parental stocks. Elsewhere, the first generation of hybrids are interfertile but not subsequent generations (p94).

Indeed, though it was long thought impossible, it has recently been confirmed that, albeit only very rarely, even mules and hinnies can successfully reproduce, despite donkeys and horses, the two parental stocks, having, like goats and sheep, a different number of chromosomes (Rong et al 1985; Kay 2002).

Thus, Baker concludes:

“There is no proof that hybridity among human beings is invariably eugenesic, for many of the possible crosses have not been made, or if they have their outcome does not appear to have been recorded. It is probable on inductive grounds that such marriages would not be infertile, but it is questionable whether the hybridity would necessarily be eugenesic. For instance, statistical study might reveal a preponderance of female offspring” (p97-8).

Is there then any evidence of reduced fertility among mixed-race couples? Not a great deal.

Possibly blood type incompatibility between mother and developing foetus might be more common in interracial unions due to racial variation in the prevalence of different blood groups.

Also, one study did find a greater prevalence of birth complications, more specifically caesarean deliveries, among Asian women birthing offspring fathered by white men (Nystrom et al 2008).

However, this is a simple reflection of the difference in average stature between whites and Asians, with smaller-framed Asian women having difficulty birthing larger half-white offspring. Thus, the same study also found that white women birthing offspring fathered by Asian men actually had lower rates of caesarean delivery than did women bearing offspring fathered by men of the same race as themselves (Stanford University Medical Center 2008).[20]

Also, one study from Iceland rather surprisingly found that the highest pregnancy rates were found among couples who were actually quite closely related to one another, namely equivalent to third- or fourth-cousins, with less closely related spouses enjoying reduced pregnancy rates (Helgason et al 2008; see also Labouriau & Amorim 2008).

On the other hand, however, David Reich, in Who We Are and How We Got Here reports that, whereas there was evidence of selection against Neanderthal genes in the human genome (that had resulted from ancient hybridization between anatomically modern humans and Neanderthals) owing to the deleterious effects of these genes, there was no evidence of selection against European genes (or African genes) among African-Americans, a racially-mixed population:

“In African Americans, in studies of about thirty thousand people, we have found no evidence for natural selection against African or European ancestry” (Who We Are and How We Got Here: p48; Bhatia et al 2014).

This lack of selection against either European-derived (or African-derived) genes in African-Americans suggests that discordant genes did not result in reduced fitness among African-Americans.[21]

Humans – A Domesticated Species?

A final complication in defining species is that some species of nonhuman animal, widely recognised as separate species because they do not interbreed in the wild, nevertheless have been known to successfully interbreed in captivity.

A famous example is that of lions and tigers. While they have never been known to interbreed in the wild, if only because they rarely if ever encounter one another, they have interbred in captivity, sometimes even producing fertile offspring in the form of so-called ligers and tigons.

This is, for Baker, of especial relevance to the question of human races since, according to Baker, we ourselves are a domesticated species. Thus, he approvingly quotes Blumenbach’s claim that:

“Man is ‘of all living beings the most domesticated’” (p95).

Thus, with regard to the question of whether humans represent a single species, Baker reaches the following controversial conclusion:

“The facts of human hybridity do not prove that all human races are to be regarded as belonging to a single ‘species’. The whole idea of species is vague because the word is used with such different meanings, none of which is of universal application. When it is used in the genetical sense [i.e. the criterion of interfertility] some significance can be attached to it, in so far as it applies to animals existing in natural conditions… but it does not appear to be applicable to human beings, who live under the most extreme conditions of domestication” (p98).

Thus, Baker goes so far as to question whether:

“Any two kinds of animals, differing from one another so markedly in morphological characters (and in odour) as, for instance, the Europid and Sanid…, and living under natural conditions, would accept one another as sexual partners” (p97).

Certainly, in our ‘natural environment’ (what evolutionary psychologists call the environment of evolutionary adaptedness, or EEA), many human races would never have interbred, if only for the simple reason that they would never have come into contact with one another.

On the contrary, they were separated from one another by the very geographic obstacles (oceans, deserts, mountain-ranges) that reproductively isolated them from one another and hence permitted their evolution into distinct races.

Thus, Northern Europeans surely never mated with sub-Saharan Africans for the simple reason that the former were confined to Northern Europe and surrounding areas while the latter were largely confined to sub-Saharan Africa, such that they are unlikely ever to have interacted.[22]

Only with the invention of technologies facilitating long-distance travel (e.g. ships, aeroplanes) would this change.

However, whether humans can be said to be domesticated depends on how one defines ‘domesticated’. If we are domesticated, then humans are surely unique in having domesticated ourselves (or at least one another).

Defining Race

Ultimately then, the question of whether the human race is a single species is a purely semantic one. It depends how one defines the word ‘species’.

Likewise, whether human races can be said to exist ultimately depends on one’s definition of the word ‘race’.

Using the word ‘race’ interchangeably with ‘subspecies’, Baker provides no succinct definition. Instead, he simply explains:

“If two populations [within a species] are so distinct that one can generally tell from which region a specimen was obtained, it is usual to give separate names to the two races” (p99).

Neither does he provide a neat definition of any particular race. On the contrary, he is explicit in emphasizing:

“The definition of any particular race must be inductive in the sense that it gives a general impression of the distinctive characters, without professing to be applicable in detail to every individual” (p99).

Is Race Real?

At the conclusion of his chapter on “Hybridity and The Species Question”, Baker seems to reach what was, even in 1974, an incendiary conclusion – namely that, whether using morphological criteria or the criterion of interfertility, it is not possible to conclusively prove that all extant human populations belong to a single species (see above).

Nevertheless, in the remainder of the book, Baker proceeds on the assumption that differences among human groups are indeed subspecific (i.e. racial) in nature and that we do indeed form a single species.

Indeed, Baker criticises the notion that the existence of persons of mixed racial ancestry, and the existence of clinal variation between races, disproves the existence of human races by observing that, if races did not interbreed with one another, then they would not be mere different races, but rather entirely separate species, according to the usual definition of this term. Thus, Baker explains:

“Subraces and even races sometimes hybridise where they meet, but this almost goes without saying: for if sexual revulsion against intersubracial or interracial marriages were complete, one set of genes would have no chance of intermingling with the other, and the ethnic taxa would be species by the commonly accepted definition. It cannot be too strongly stressed that intersubracial and interracial hybridization is so far from indicating the unreality of subraces and races, that it is actually a sine qua non of the reality of these ethnic taxa” (p12).

This, Baker argues, is because:

“It is the fact that intermediaries do occur that defines the race” (p99).

Some people seem to think that, since races tend to blend into one another and hence have blurred boundaries (i.e. what biologists refer to as clinal variation), they do not really exist. Yet Baker objects:

“In other matters, no one questions the reality of categories between which intermediaries exist. There is every gradation, for instance, between green and blue, but no one denies these words should be used” (p100).

However, this is perhaps an unfortunate example, since, as psychologists and physicists agree, colours, as such, do not exist.

Instead, the spectrum of light varies continuously. Distinct colours are imposed on this continuous variation only by the human brain and visual system.[23]

Using colour as an analogy for race is also potentially confusing because colour is already often conflated with race. Thus, races are referred to by their ostensible colours (e.g. blacks, whites, browns etc.) and the very word ‘colour’ is sometimes even used as a synonym, or perhaps euphemism, for race, even though, as Baker is at pains to emphasize, races differ in far more than skin colour.

Using colour as an analogy for race differences is only likely to exacerbate this confusion.

Yet Baker’s other examples are similarly problematic. Thus, he writes:

“The existence of youths and human hermaphrodites does not cause anyone to disallow the use of the words, ‘boy’, ‘man’ and ‘woman’” (p100).

However, hermaphrodites, unlike racial intermediaries, are extremely rare. Meanwhile, words such as ‘boy’ and ‘youth’ are colloquial terms, not really scientific ones. As anthropologist John Relethford observes:

We tend to use crude labels in everyday life with the realization that they are fuzzy and subjective. I doubt anyone thinks that terms such as ‘short’, ‘medium’ and ‘tall’ refer to discrete groups, or that humanity only comes in three values of height” (Relethford 2009: p21).

In short, we often resort to vague and impressionistic language in everyday conversation. However, for scientific purposes, we must surely try, wherever possible, to be more precise.

Rather than alluding to colour terms or hermaphrodites, perhaps a better counterexample, if only because it is certain to provoke annoyance, cognitive dissonance and doublethink among leftist race-denying sociologists, is that of social class. Thus, as biosocial criminologist Anthony Walsh demands:

“Is social class… a useless concept because of its cline-like tendency to merge smoothly from case to case across the distribution, or because its discrete categories are determined by researchers according to their research purposes and are definitely not ‘pure’” (Race and Crime: A Biosocial Analysis: p6).

However, the same leftist social scientists who insist the race concept is an unscientific social construction, nevertheless continue to employ the concept of social class almost as if it were entirely unproblematic.

However, the objection that races do not exist because they are not discrete categories, but rather have blurred boundaries, is not entirely fallacious.

After all, intermediaries can sometimes be so common that they can no longer be said to be intermediaries at all, and all that can be said to exist is continuous clinal variation, such that wherever one chooses to draw the boundary between one race and another is entirely arbitrary.

With increased migration and intermarriage, we may fast be approaching this point.[24]

However, just because the boundaries between racial groups are blurred, this does not mean that the differences between them, whether physiological or psychological, do not exist. To assume otherwise would represent a version of the continuum fallacy or sorites paradox, also sometimes called the fallacy of the heap or fallacy of the beard.

Thus, even if races do not exist, race differences still surely do – and, just as skin colour varies on a continuous, clinal basis, so might average IQ, brain size and personality!

Anticipating Jared Diamond

Remarkably, Baker even manages to anticipate certain erroneous objections to the race concept that had not, to my knowledge, even been formulated at the time of his writing, perhaps because they are so obviously fallacious to anyone without an a priori political commitment to denying the validity of the race concept.

In particular, Jared Diamond (1994), in an influential and much-cited paper, argues that racial categories are meaningless because, rather than being classified by skin colour, races could just as easily be grouped on the basis of traits such as the prevalence of genes for sickle-cell or lactose tolerance, which would lead us to adopting very different classifications.

Actually, Baker argues, the importance of colour for racial classification has been exaggerated.

“In the classification of animals, zoologists lay little emphasis on differences of colour… They pay far more attention to differences in grosser structure” (p159).

Indeed, he quotes no less an authority than Darwin himself as observing:

“Colour is generally esteemed by the systematic naturalist as unimportant” (p148).

Certainly, he is at pains to emphasise that, among humans, differences between racial groups go far beyond skin colour. Indeed, he observes, one has only to look at an African albino to realize as much:

“An albino… Negrid who is fairer than any non-albino European, [yet] appears even more unlike a European than a normal… Negrid” (p160).

Likewise, some populations from the Indian subcontinent are very dark in skin tone, yet they are, according to Baker, predominantly Caucasoid (p160), as, he claims, are the Aethiopid subrace of the Horn of Africa (p225).[25]

Thus, Baker laments how:

“An Indian, who may show close resemblance to many Europeans in every structural feature of his body, and whose ancestors established a civilization long before the inhabitants of the British Isles did so, is grouped as ‘coloured’ with persons who are very different morphologically from any European or Indian, and whose ancestors never developed a civilization” (p160).

Yet, in contrast, of the San Bushmen of Southern Africa, he remarks:

“The skin is only slightly darker than that of the Mediterranids of Southern Europe and paler than that of many Europids whose ancestral home is in Asia or Africa” (p307).

But no one would mistake them for Caucasoid.

What then of the traits, namely the prevalence of the sickle-cell gene or of lactose tolerance, that would, according to Diamond, produce very different taxonomies?

For Baker, these are what he calls “secondary characters” that cannot be used for the purposes of racial classification because they are not present among all members of any group, but differ only in their relative prevalence (p186).

Moreover, he observes, the sickle-cell gene is likely to have “arisen independently in more than one place” (p189). It is therefore evidence, not of common ancestry, but of convergent evolution, or what Baker refers to as “independent mutation” (p189).

It is therefore irrelevant from the perspective of cladistic taxonomy, whereby organisms are grouped, not on the basis of shared traits as such, but rather of shared ancestry. From the perspective of cladistic taxonomy, shared traits are relevant only to the extent they are (interpreted as) evidence of shared ancestry.

The same is true for lactose tolerance, which seems to have evolved independently in different populations in concert with the development of dairy farming, in a form of gene-culture co-evolution.

Indeed, lactose tolerance appears to have evolved through somewhat different genetic mechanisms (i.e. mutations in different genes) in different populations, seemingly a conclusive demonstration that it evolved independently in these different lineages (Tishkoff et al 2007).

As Baker warns:

“One must always be on the lookout for the possibility of independent mutation wherever two apparently unrelated taxa resemble one another by the fact that some individuals in both groups reveal the presence of the same gene” (p189).

In evolutionary biology, this is referred to as distinguishing analogy from homology.

But Diamond’s proposed classification is especially preposterous, since he classifies races on the basis of a single trait in isolation, the trait in question (either lactose tolerance or the sickle-cell gene) being chosen either arbitrarily or, more likely, precisely to illustrate the point that Diamond is attempting to make.

Yet even pre-Darwinian taxonomies proposed to classify species, not on the basis of a single trait, but rather on the basis of a whole suite of intercorrelated traits.

In short, Diamond proposes to classify races on the basis of a single character that has evolved independently in distantly related populations, instead of a whole suite of inter-correlated traits indicative of common ancestry.

Interestingly, a similar error may underlie an even more frequently cited paper by Marxist geneticist Richard Lewontin, which argued that the vast majority of genetic variation is within-group rather than between-group – since Lewontin, like Diamond, also relied on ‘secondary characters’ such as blood groups to derive his estimates (Lewontin 1972).[26]

The reason for the recurrence of this error, Baker explains, is that:

“Each of the differences that enable one to distinguish all the most typical individuals of any one taxon from those of another is due, as a general rule, to the action of polygenes, that is to say, to the action of numerous genes, having small cumulative effects” (p190).

Yet, unlike traits resulting from a few alleles, polygenes are not amenable to simple Mendelian analysis.

This leads to the “unfortunate paradox” whereby:

“The better the evidence of relationship or distinction between ethnic taxa, the less susceptible are the facts to genetic analysis” (p190).

As a consequence, Baker laments:

“Attention is focussed today on those ‘secondary differences’… that can be studied singly and occur in most ethnic taxa, though in different proportions in different taxa… The study of these genes… has naturally led, from its very nature, to a tendency to minimise or even disregard the extent to which the ethnic taxa of man do actually differ from one another” (p534).

Finally, Baker even provides a reductio ad absurdum of Diamond’s approach, observing:

“From the perspective of taste-deficiency the Europids are much closer to the chimpanzee than to the Sinids and Paiwan people; yet no one would claim that this resemblance gives a true representation of relationship” (p188).

However, applying the logic of Diamond’s article, we would be perfectly justified and within our rights to use this similarity in taste deficiency in order to classify Caucasians as a sub-species of chimpanzee!

Subraces

The third section of Baker’s book, “Studies of Selected Human Groups”, focusses on the traditional subject-matter of physical anthropology – i.e. morphological differences between human groups.[27]

Baker describes the physiological differences between races in painstaking technical detail. These parts of the book make for an especially difficult read, as Baker carefully elucidates both how anthropologists measure morphological differences, and the nature and extent of the various physiological differences between the races revealed by these methods.

Yet, curiously, although many of his measures are quantitative in nature, Baker rarely discusses whether differences are statistically significant.[28] Without statistical analysis, however, all of Baker’s reports of quantitative measurements of differences in the shapes and sizes of the skulls and body parts of people of different races represent little more than subjective impressions.

This is especially problematic in his discussion of so-called ‘subraces’ (subdivisions within the major continental races, such as the Nordic and Mediterranean races, both supposed subdivisions within the Caucasoid race), where differences could easily be dismissed as, if not wholly illusory, then at least as clinal in nature and as not always breeding true.

Yet nowhere in his defence of the reality of subracial differences does Baker cite statistics. Instead, his argument is wholly subjective and qualitative in nature:

“In many parts of the world where there have not been any large movements of population over a long period, the reality of subraces is evident enough” (p211).

One suspects that, given increased geographic mobility, those parts of the world are now reduced in number.

Thus, even if subracial differences were once real, with increased migration and intermarriage, they are fast disappearing, at least within Europe.

Studies of Selected Human Groups

This third section of the book focuses on certain specific selected human populations, presumably chosen because Baker feels they are representative of important elements of human evolution or racial divergence, or are otherwise of particular interest.

Unfortunately, Baker’s choice of which groups to focus upon seems rather arbitrary, and he never explains why these groups were chosen ahead of others.

In particular, it is notable that Baker focuses primarily on populations from Europe and Africa. East Asians (i.e. Mongoloids), curiously, are entirely unrepresented.

The Jews

After a couple of introductory chapters, and one chapter focussing on “Europids” (i.e. Caucasians) as a whole, Baker’s next chapter discusses Jewish people.

In the opening paragraphs, he observes that:

“In any serious study of the superiority or inferiority of particular groups of people one cannot fail to take note of the altogether outstanding contributions made to intellectual and artistic life, and to the world of commerce and finance, generation after generation by persons to whom the name of Jews is attached” (p232).

However, having taken due “note” of this, and hence followed his own advice, he says almost nothing further on the matter, either in this chapter or in those later chapters that deal specifically with the question of racial superiority (see below).

Instead, Baker first focuses on justifying the inclusion of Jews in a book about race, and hence arguing against the politically-correct notion that Jews are not a race, but rather mere practitioners of a religion.[29] Baker gives short shrift to this notion:

“There is no close resemblance between Judaism in the religious sense and a proselytizing religion such as the Roman Catholic” (p326).

In other words, Baker seems to be saying, because Judaism is not a religion that actively seeks out converts (but rather one that, if anything, discourages converts), Jews have retained an ethnic character distinct from the host populations alongside whom they reside, without having their racial traits diluted by the incorporation of large numbers of converts of non-Jewish ancestry.

Yet, actually, even proselytizing religions like Christianity, Catholicism and Islam, that do actively seek to convert nonbelievers, often come to take on an ethnic character, since offspring usually inherit (i.e. are indoctrinated in) the faith of their parents, apostates are persecuted, conversion remains, in practice, rare, and people are admonished to marry within the faith.

Thus, in places beset by ethnic conflict like Northern Ireland, Lebanon or the former Yugoslavia, religion often comes to represent a marker for ethnicity, and even ostensibly proselytizing religions like Sunni and Shia Islam and Catholicism can come to be like ethnicities, if not races – i.e. reproductively-isolated, endogamous breeding populations.

Having concluded, then, that there is a racial as well as a religious component to Jewish identity, Baker nevertheless stops short of declaring the Jews a race or even what he calls a subrace.

Dismissing the now discredited Khazar hypothesis in a sentence,[30] he instead classes the bulk of the world’s Jewish population (i.e. the Ashkenazim) as merely part of the “Armenid subrace” of the Europid race, with some “Orientalid” (i.e. Arab) admixture (p242).[31]

Thus, Baker claims:

“Persons of Ashkenazic stock can generally be recognised by certain physical characters that distinguish them from other Europeans” (p238).

These include a short but wide skull and a nose that is “large in all dimensions” (p239), the characteristic shape of which Baker even purports to illustrate with a delightfully offensive diagram (p241).[32]

Baker claims that Sephardic Jews, the other main subgroup of European Jews, are likewise “distinguishable from the Ashkenazim by physical characters”, being slenderer in build, with straighter hair, narrower noses, and differently sized skulls, approximating more closely to the Mediterranean racial type (p245-6).

But, if Sephardim and Ashkenazim are indeed “distinguishable” or “recognisable” by “physical characters”, either from one another or from other European Gentiles, as Baker claims, then with what degree of accuracy is he claiming such distinctions can be made? Surely far less than 100%.[33]

Moreover, are the alleged physiological differences that Baker posits between Ashkenazi, Sephardi, and other Europeans based on recorded quantitative measurements, and, if so, are the differences in question statistically significant? On this, Baker says nothing.

The Celts

The next chapter concerns the Celts, a term surrounded by so much confusion, and used in so many different senses – racial, cultural, ethnic, territorial and linguistic (p183) – that some historians have argued it is best abandoned altogether.

Baker, himself British, is keen to dispel the notion that the indigenous populations of the British Isles were, at the time of the Roman invasion, a primitive people, and is very much an admirer of their artwork.

Thus, Baker writes that:

“Caesar… nowhere states that any of the Britons were savage (immanis), nor does he speak specifically of their ignorance (ignorantia), though he does twice mention their indiscretion (imprudentia) in parleying” (p263).

Of course, Caesar, though hardly unbiased in this respect, did regard the indigenous Britons as less civilized than the Romans themselves. However, I suppose that barbarism, like civilization (see below), is a matter of degree.

Regarding the racial characteristics of those inhabitants of pre-Roman Britain who are today called Celts, Baker classifies them as Nordic, writing:

“Their skulls scarcely differ from those of the Anglo-Saxons who subsequently dominated them, except in one particular character, namely, that the skull is slightly (but significantly) lower in the Iron Age man than in the Anglo-Saxon” (p257).[34]

Thus, dismissing the politically-correct notion that the English were, in the words of another author, “a true multiracial society”, Baker claims:

“[The] Angles, Saxons, Jutes, Normans, Belgics and… Celts… were not only of one race (Europid) but of one subrace (Nordid).” (p267).

Citing remains found in an ancient cemetery in Berkshire supposedly containing the skeletons of Anglo-Saxon males but indigenous British females and hybrid offspring, he concludes that, rather than extermination, a process of intermarriage and assimilation occurred (p266).

However, the indigenous pre-Celtic inhabitants of the British Isles were, he concludes, less Nordic than Mediterranid in phenotype.[35]

Such influences remain, Baker claims, in the further reaches of Wales and Ireland, as evidenced by the distribution of blood groups and of hair colour.

Thus, whereas the Celtic fringe is usually associated with red, auburn or ginger hair, Baker instead emphasizes the greater prevalence of dark hair among the Irish and Welsh:

“The tendency towards the possession of dark hair was much more marked in Wales than in England, and still more marked in the western districts of Ireland” (p265).[36]

This conclusion is based upon the observations of nineteenth century English ethnologist John Beddoe, who travelled the British Isles recording the distribution of different hair and eye colours, reporting his findings in The Races of Britain, which was first published in 1885 and remains, to my knowledge, the only large body of data on the distribution of hair and eye colour in the British Isles to this day.

On this basis, Baker therefore concludes that:

“The modern population of Great Britain probably derives mainly from the [insular] ‘Celts’… and Belgae, though a more ancient [i.e. Mediterranean] stock has left its mark rather clearly in certain parts of the country, and the Anglo-Saxons and other northerners made an additional Nordid contribution later on” (p269).

Yet recent population genetic studies suggest that even the so-called Celts, like the later Anglo-Saxons, Normans and Vikings, actually had only a quite minimal impact on the ancestry of the indigenous peoples of the British Isles.[37]

This, of course, further falsifies the politically correct, but absurd notion that the British are a nation of immigrants – which phrase is, of course, itself a recent immigrant from America, in respect of whose population the claim surely has more plausibility.

The Celts, moreover, likely arrived in the British Isles from continental Europe by the same route as the later Anglo-Saxons and Normans – i.e. across the English Channel (or perhaps the south-west corner of the North Sea), by way of Southern England. This is, after all, by far the easiest, most obvious and direct route.[38]

This leads Baker to conclude that the Celts, like the Anglo-Saxons after them, imposed their language on, but had little genetic impact on, the inhabitants of those parts of the British Isles furthest from this point of initial disembarkation (i.e. Scotland, Ireland, Wales). Thus, Baker concludes:

“The Iron Age invaders transmitted the dialects of their Celtic language to the more ancient Britons whom they found in possession of the land [and] pushed back these less advanced peoples towards the west and north as they spread” (p264).

But these latter peoples, though adopting the Celtic tongue, were not themselves (primarily) descendants of the Celtic invaders. This leads Baker to follow Carleton Coon in concluding:

“It is these people, the least Celtic—in the ethnic sense—of all the inhabitants of Great Britain, that have clung most obstinately to the language that their conquerors first taught them two thousand years ago” (p269).

In other words, in a racial and genetic, if not a linguistic, sense, the English are actually more Celtic than are the self-styled Celtic Nations of Scotland, Ireland and Wales!

Australian Aboriginals – a “Primitive” Race?

The next chapter is concerned with Australian Aboriginals, or, as Baker classes them, “Australids”.

In this chapter Baker is primarily concerned with arguing that Aboriginals are morphologically primitive.

Of course, the indigenous inhabitants of what is now Australia were, when Europeans first made contact with them, notoriously backward in terms of their technology and material culture.

For example, Australian Aboriginals are said to be the only indigenous people never to have developed the bow and arrow; while the neighbouring, and related, indigenous people of Tasmania – isolated from the Australian mainland by rising sea levels at the end of the last ice age, but usually classed as of the same race – are said, arguably, to have lacked even the ability to make fire.

However, this is not what Baker means by referring to Aboriginals as “primitive”. Indeed, unlike his later chapters on black Africans, Baker says nothing regarding the technology or culture of indigenous Australians.

Instead, he talks exclusively about their morphology. In referring to them as “primitive”, Baker is therefore using the word in the specialist phylogenetic sense. Thus, he argues that Australian Aboriginals:

“Retain… physical characters that were possessed by remote ancestors but have been lost in the course of evolution by most members of the taxa that are related to it” (p272-3).

In other words, they retain traits characteristic of an earlier state of human evolution which have since been lost in other extant races.

Baker purports to identify twenty-eight such “primitive” characters in Australian aboriginals. These include prognathism (p281), large teeth (p289), broad noses (p282), and large brow ridges (p280).

Baker acknowledges that all extant races retain some primitive characters that have been lost in other races (p302). For example, unlike most other races (but not Aboriginals), Caucasoids retain scalp hair characteristic of early hominids and indeed other extant primates (p297).

However, Baker concludes:

“The Australids are exceptional in the number and variety of their primitive characters and in the degree to which some of them are manifested” (p302).

Relatedly, Nicholas Wade observes that, whereas there is a general trend towards lighter and less robust bones and skulls over the course of human evolution, something referred to as gracilization, two populations at “the extremities of the human diaspora” seem to have been exempt, or isolated, from this process, namely the Aboriginals and the “Fuegians at the tip of South America” (A Troublesome Inheritance: p167-8).[39]

Of course, to be morphologically ‘primitive’ in this specialist phylogenetic sense entails none of the pejorative imputations often associated with the word ‘primitive’.

However, some phylogenetically primitive traits may indeed be related to the ‘primitive’ technology of indigenous Aboriginals at the time of first contact with Europeans.

For example, tooth size decreased over the course of human evolution as humans invented technologies (e.g. cooking, tools for cutting) that made large teeth unnecessary. On this view, the relatively large size of Aboriginal teeth could be associated with the primitive state of their technology.

More obviously, phylogenetically primitive brains do imply lesser intelligence, given the increase in human brain size and intelligence over the course of human evolution.

Thus, Aboriginals have, on average, Baker reports, smaller brains than those of Caucasians, weighing only about 85% as much (p292). The smaller average brain-size of Aboriginals is confirmed by more recent data (Beals et al 1984).

Baker also reviews some suggestive evidence regarding the internal structure of Aboriginal brains, as compared to that of Europeans, notably in the relative positioning of the lunate sulcus, again suggesting similarities with the brains of non-human primates.

In this sense, then, Australian Aboriginals’ ‘primitive’ brains may indeed be linked to the primitive state, in the more familiar sense of the word ‘primitive’, of their technology and culture.

San Bushmen and Paedomorphy

Whereas Australian Aboriginals are morphologically “primitive” (i.e. retain characters of early hominids), the San Bushmen of Southern Africa (“Sanids”), together with the related Khoi (collectively Khoisan, or, in racial terms, Capoid) are, Baker contends, paedomorphic.

By this, Baker means that the San people retain into adulthood traits that are, in other taxa, restricted to infants or juveniles – a phenomenon more often referred to as neoteny.[40]

One example of this supposed paedomorphy is provided by the genitalia of the Sanid males:

“The penis, when not erect, maintains an almost horizontal position… This feature is scarcely ever omitted in the rock art of the Bushmen, in their stylized representations of their own people. The prepuce is very long; it covers the glans completely and projects forward to a point. The scrotum is drawn up close to the root of the penis, giving the appearance that only one testis has descended, and that incompletely” (p319).[41]

Humans in general are known to be neotenous in many of our distinct characters, and we are also, of course, the most intelligent known species.[42] However, Baker argues:

“Although mankind as a whole is paedomorphous, those ethnic taxa (the Sanids among them) that are markedly more paedomorphous than the rest have never achieved the status of civilization, or anything approaching it, by their own initiative. It would seem that, when carried beyond a certain point, paedomorphosis is antagonistic to purely intellectual advance” (p324).

As to why this might be the case, he speculates in a later chapter:

“Certain taxa have remained primitive or become paedomorphous in their general morphological characters and none of these has succeeded in developing a civilization. It is among these taxa in particular that one finds some indication of a possible cause of mental inferiority in the small size of the brain” (p428).

Yet this is a curious suggestion since neoteny is usually associated with increased brain growth in humans.

Moreover, other authorities class East Asians as a paedomorphic race, yet they have undoubtedly founded great civilizations and have brains as large as, or, after controlling for body-size, even larger than those of Europeans, and are generally reported to have somewhat higher IQs (see Lynn’s Race Differences in Intelligence: which I have reviewed here).

The Big Butts of Bushmen – or just of Bushwomen?

Having discussed male genitalia, Baker also emphasizes the primary and secondary sexual characteristics of Sanid women – in particular their protruding buttocks (“steatopygia”) and alleged elongated labia.

The protruding buttocks of Sanid women are, Baker contends, qualitatively different in both shape and indeed composition from those of other populations, including the much-celebrated ‘big butts’ of contemporary African-Americans (p318).

Thus, whereas, among other populations, the buttocks, even if very large, are “rounded” in shape:

“It is particular characteristic of the Khoisanids that the shape of the projecting part is that of a right-angled triangle, the upper edge being nearly horizontal … [and] internally… consist of masses of fat incorporated between criss-crossed sheets of connective tissue said to be joined to one another in a regular manner.”

Regarding the function of these enlarged buttocks, Baker rejects any analogy with the humps of the camel, which evolved as reserves of fat upon which the animal could call in the event of famine or drought.

Unlike camels, which are, of course, adapted to a desert environment, Baker concludes:

“The Hottentots, Korana, and Bushmen are not to be regarded as people adapted by natural selection to desert life” (p318).

However, today, San Bushmen are indeed largely restricted to a desert environment, namely the Kalahari desert.

Although he does not directly discuss this, Baker presumably regards this as a recent development, resulting from the Bantu expansion, in the course of which the less advanced San were driven from their traditional hunting grounds in southern Africa by Bantu agriculturalists, and permitted to eke out an undisturbed existence only in an arid desert environment of no use to the latter.

Rather than having evolved as fat reserves for times of famine, drought or scarcity, Baker suggests instead that Khoisan buttocks evolved through sexual selection.

This seems plausible, given the sexual appeal of ‘big butts’ even among western populations. However, recent research suggests that it is actually lumbar curvature, or lordosis, an ancient mammalian mating signal, rather than fat deposits in the buttocks as such, that is primarily responsible for the perceived attractiveness of so-called ‘big butts’ (Lewis et al 2015).

This sexual selection hypothesis is, of course, also consistent with the fact that large buttocks among the San seem to be largely, if not entirely, restricted to women.

However, Carleton Coon, in Racial Adaptations: A Study of the Origins, Nature, and Significance of Racial Variations in Humans, suggests alternatively that this sexual dimorphism could instead reflect the caloric requirements of pregnancy and lactation.[43]

The caloric demands of pregnancy and lactation are indeed the probable reason women of all races have greater fat deposits than do males.

Indeed, an analogy might be provided by female breasts, since these, unlike the mammary glands of other mammalian species, are present permanently, from puberty on, and, save during pregnancy and lactation, are composed predominantly of fatty tissues, not milk.[44]

Elusive Elongated Labia?

In addition to their enlarged buttocks, Baker also discusses the alleged elongated labia of Sanid women, sometimes referred to, rather inaccurately in Baker’s view, as the “Hottentot apron”.

Some writers have discounted this notion as a sort of nineteenth-century anthropological myth. However, Baker himself insists that the elongated labia of the San are indeed real.

His evidence, however, is less than compelling, the illustrations included in the text being limited to a full-body photograph in which the characteristic is barely visible (p311) and what seems to be a rather fanciful sketch (p315).

Likewise, although a Google image search produces abundant photographic evidence of Khoisan buttocks, their elongated labia prove altogether more elusive.

Perhaps the modesty of Khoisan women, or the prudery and puritanism of Victorian anthropologists and explorers, prevented the latter from recording photographic evidence for this characteristic.

However, it is perhaps telling that, even in this age of Rule 34 of the Internet (If it exists, there is porn of it. No exceptions), I have been unable to find photographic evidence for this trait.

Racial Superiority

The fourth and final section of ‘Race’ turns to the most controversial topic addressed by Baker in this most controversial of books, namely whether any racial group can be said to be superior or inferior to another, a question that Baker christens “the Ethnic Question”.

He begins by critiquing the very nature of the notion of superiority and inferiority, observing in a memorable and quotable aphorism:

“Anyone who accepts it as a self-evident truth, in accordance with the American Declaration of Independence, that all men are created equal may properly be asked whether the meaning of the word ‘equal’ is self-evident” (p421).

Thus, if one is “concerned simply with the question whether the taxa are similar or different”, then, Baker concludes, “there can be no doubt as to the answer” (p421).

Indeed, this much is clear, not simply from the huge amount of data assembled by Baker himself in previous chapters, but also from simple observation.[45]

However, Baker continues:

“The words ‘superior’ and ‘inferior’ are not generally used unless value judgements are concerned” (p421).

Any value judgement is, of course, necessarily subjective.

On objective criteria, each race can only be said to be, on average, superior in a specific endeavour (e.g. IQ tests, basketball, mugging, pimping, drug-dealing, tanning, building civilizations). The value to be ascribed to these endeavours is, however, wholly subjective.

On these grounds, contemporary self-styled ‘race realists’ typically disclaim any association between their theories and any notions of racial superiority.

Yet these race realists are often the very same individuals who emphasise the predictive power of IQ tests in determining many social outcomes (income, criminality, illegitimacy, welfare dependency) which are generally viewed in anything but value-neutral terms (see The Bell Curve, which I have reviewed here, here and here).

From a biological perspective, no species (or subspecies) is superior to any other. Each is adapted to its own ecological niche and hence presumably superior at surviving and reproducing within the specific environment in which it evolved.

Thus, sociobiologist Robert Trivers quotes his mentor Bill Drury as observing during a discussion between the two regarding a possible biological basis for race prejudice:

“Bob, once you’ve learnt to think of a herring gull as equal, the rest is easy” (Natural Selection and Social Theory: p57).

However, taken to its logical conclusion, or reductio ad absurdum, this suggests a dung beetle is equal to Beethoven!

From Physiology to Psychology

Although he alludes in passing to race differences in athletic ability, Baker, in discussing superiority, is concerned primarily with intellectual and moral achievement. Therefore, in this final section of the book, he turns from physiological differences to psychological ones.

Of course, the two are not entirely unconnected. All behaviour must have an ultimate basis in the brain, which is itself a part of an organism’s physiology. Thus:

“Cranial capacity is, of course, directly relevant to the ethnic problem since it sets a limit to the size of the brain in different taxa; but all morphological differences are also relevant in an indirect way, since it is scarcely possible that any taxa could be exactly the same as one another in all the genes that control the development and function of the nervous and sensory systems, yet so different from one another in structural characters in other parts of the body” (p533-4).

Indeed, Baker observes:

“Identity in habits is unusual even in pairs of taxa that are morphologically much more similar to one another than [some human races]. The subspecies of gorilla, for instance, are not nearly so different from one another as Sanids are from Europids, but they differ markedly in their modes of life” (p426).

In other words, since human races differ significantly in their physiology, it is probable that they will also differ, to a roughly equivalent degree, in psychological traits, such as intelligence, temperament and personality.

Measuring Superiority?

In discussing the question of the intellectual and moral superiority of different racial groups, Baker focusses on two lines of evidence in particular:

  1. Different races’ performance in ability and attainment tests;
  2. Different races’ historical track record in founding civilizations.

Baker’s discussion of the former topic is now rather dated.

Recent findings unavailable to Baker include the discovery that East Asians score somewhat higher on IQ tests than do white Europeans (see Race Differences in Intelligence: reviewed here), and also that Ashkenazi Jews score higher still (see The Chosen People: review forthcoming).[46]

Evidence has also accumulated regarding the question of the relative contributions of heredity to racial differences in IQ, including the Minnesota transracial study (Scarr & Weinberg 1976; Weinberg et al 1992) and studies of the effects of racial admixture on IQ using blood-group data (Loehlin et al 1973; Scarr et al 1977), and, most recently, genome analysis (Lasker et al 2019). See also my review of Richard Lynn’s ‘Race Differences in Intelligence: An Evolutionary Perspective’, posted here.

Readers interested in more recent research on this issue should consult Jensen and Rushton (2005) and Nisbett (2005), or Nicholas Mackintosh’s summary in Chapter Thirteen of his textbook, IQ and Human Intelligence (2nd Ed) (pp324-359).[47]

Criteria for Civilization and Moral Relativism

While his data on race differences in IQ is therefore now dated, Baker’s discussion of the track-record of different races in founding civilizations remains of interest today, if only because this is a topic studiously avoided by most contemporary authors, historians and anthropologists on account of its politically-incorrect nature – though Jared Diamond, in Guns, Germs and Steel, represents an important recent exception to this trend.[48]

The first question, of course, is precisely how one is to define ‘civilizations’ in the first place, itself a highly contentious issue.[49]

Thus, Baker identifies twenty-one criteria for recognising civilizations (p507-8).[50]

In general, these can be divided into two types:

  1. Scientific/technological criteria;
  2. Moral criteria.[51]

However, the latter are inherently problematic. What constitutes moral superiority itself involves a moral judgement that is necessarily subjective.

In other words, whereas technological and scientific superiority can be demonstrated objectively, moral superiority is a mere matter of opinion.

Thus, the ancient Romans, transported to our times, would surely accept the superiority of our technology – and, if they did not, we would, as a consequence of the superiority of our technology, outcompete them both economically and militarily and hence prove it ourselves.

However, they would view our social, moral and political values as decadent and we would have no way of proving them wrong.

Take, for example, Baker’s first requirement for civilization, namely that:

“In the ordinary circumstances of life in public places they [i.e. members of the society under consideration] cover the external genitalia and greater part of the trunk with clothes” (p507).

This criterion is not only curiously puritanical, but also blatantly biased against tropical cultures. Whereas in temperate and arctic zones clothing is essential for survival, in the tropics the decision to wear clothing represents little more than an arbitrary fashion choice.

Meanwhile, the requirement that the people in question “do not practice severe mutilation or deformation of the body”, another moral criterion, could arguably exclude contemporary westerners from the ranks of the ‘civilized’, given the increasing prevalence of tattooing, flesh tunnel ear plugs and other forms of extreme bodily modification (not to mention genital mutilation) – or perhaps it is merely those among us who succumb to such fads who are not truly civilized.

The requirement that a civilization’s religious beliefs not be “purely or grossly superstitious” (p507) is also problematic. As a confirmed atheist, I suspect that all religions are, by their very definition, superstitious. If some forms of Buddhism and Confucianism are exceptions, then perhaps they are simply not religions at all in the western sense.

At any rate, Christian beliefs regarding miracles, resurrection, the afterlife, the Holy Spirit and so on surely rival those of any other religion when it comes to “gross superstition”.

As for his complaint that the religion of the Mayans “did not enter into the fields of ethics” (p526), a complaint he also raises in respect of indigenous black African religions (p384), contemporary moral philosophers generally see this as a good thing, believing that religion is best kept out of moral debates.[52]

In conclusion, any person seeking to rank cultures on moral criteria will, almost inevitably, rank his own society as morally superior to all others – simply because he is judging these societies by the moral standards of his own society that he has internalized and adopted as his own.

Thus, Baker himself views Western civilization as superior to such pre-Columbian mesoamerican civilizations as the Aztecs due to the latter’s practice of mass ritual human sacrifice and cannibalism (p524-5).

However, in doing so, he is judging the cultures in question by distinctly Western moral standards. The Aztecs, in contrast, may have viewed human sacrifice as a moral imperative and may therefore have viewed European cultures as morally deficient precisely because they did not butcher enough of their people in order to propitiate the gods.

Likewise, whereas Baker views cannibalism as incompatible with civilization (p507), I personally view cannibalism as, of itself, a victimless crime. A dead person, being dead, is incapable of suffering by virtue of being eaten. Indeed, in this secular age of environmental consciousness, one might even praise cannibalism as a highly ‘sustainable’ form of recycling.

Sub-Saharan African Cultures

Baker’s discussion of different groups’ capacity for civilization actually begins before his final section on “Criteria for Superiority and Inferiority”, in his four chapters on the race that Baker terms ‘Negrids’ – namely, black Africans from south of the Sahara, excluding Khoisan and Pygmies (p325-417).

Whereas his previous chapters discussing specific selected human populations focussed primarily, or sometimes exclusively, on their morphological peculiarities, in the last four of these chapters, dealing with African blacks, his focus shifts from morphology to culture.

Thus, Baker writes:

“The physical characters of the Negrids are mentioned only briefly. Members of this race are studied in Chapters 18-21 mainly from the point of view of the social anthropologist interested in their progress towards civilization at a time when they were still scarcely influenced over a large part of their territory, by direct contact with members of more advanced ethnic taxa” (p184).

Unlike some racialist authors,[53] Baker acknowledges the widespread adoption of advanced technologies throughout much of sub-Saharan Africa prior to modern times. However, he attributes the adoption of these technologies to contact with, and borrowings from, outside non-Negroid civilizations (e.g. Arabs, Egyptians, Moors, Berbers, Europeans).

Therefore, in order to distinguish the indigenous, homegrown capacity of black Africans to develop advanced civilization, Baker relies on the reports of seven nineteenth century explorers of what he terms “the secluded area” of Africa, by which term Baker seems to mean the bulk of inland Southern, Eastern and Central Africa, excluding the Horn of Africa, the coast of West Africa and the Gulf of Guinea (p334-5).[54]

In these parts of Africa, at the time these early European explorers visited the continent, the influence of outside civilizations was, Baker reports, “non-existent or very slight” (p335). The cultural practices observed by these explorers therefore, for Baker, provide a measure of black Africans’ indigenous capacity for social, cultural and technological advancement.

On this perhaps dubious basis, Baker concludes that there is no evidence black Africans ever:

  • Fully domesticated any plants (p354-6) or animals (p373-7); or
  • Invented the wheel (p373); or other ‘mechanical’ devices with interacting parts (p354).[55]

Also largely absent throughout ‘the secluded area’, according to Baker, were:

  • Buildings of more than one storey; and
  • Any form of writing or script.

In respect of these last two indices of civilization, however, Baker admits a couple of partial, arguable exceptions, which he discusses in the next chapter (Chapter 21). These include the ruins of Great Zimbabwe (p401-9) and a script invented in the nineteenth century (p409-11).[56]

Domesticated Plants and Animals in Africa

Let’s review these claims in turn. First, it certainly seems to be true that few if any species of either animals or plants were domesticated in what Baker calls “the secluded area” of sub-Saharan Africa.[57]

However, with respect to plants, there may be a reason for this. Many important, early domesticates were annuals. These are plants that complete their life-cycle within a single year, taking advantage of predictable seasonal variations in the weather.

As explained by Jared Diamond, annual plants are ideal for human consumption, and for domestication, because:

“Within their mere one year of life, annual plants inevitably remain small herbs. Many of them instead put their energy into producing big seeds, which remain dormant during the dry season and are then ready to sprout when the rains come. Annual plants therefore waste little energy on making inedible wood or fibrous stems, like the body of trees and bushes. But many of the big seeds… are edible by humans. They constitute 6 of the modern world’s 12 major crops” (Guns, Germs and Steel: p136).

Yet sub-Saharan Africa, being located closer to the equator, experiences less seasonal variation in climate. As a result, relatively fewer plants are annuals.

However, it is far less easy to explain why sub-Saharan Africans failed to domesticate any wild species of animal, with the possible exception of the guineafowl.[58]

After all, Africa is popular as a tourist destination today in part precisely because it has a relative abundance of large wild mammals of the sort seemingly well suited for domestication.[59]

Jared Diamond argues that the African zebra, a close relative of other wild equids that were domesticated, was undomesticable because of its aggression and what Diamond terms its “nasty disposition” (Guns, Germs and Steel: p171-2).[60]

However, this is unconvincing when one considers that Eurasians succeeded in domesticating such formidably powerful and aggressive wild species as wolves and aurochs.[61]

Thus, even domesticated bulls remain physically formidable and aggressive animals. Indeed, they were favoured adversaries in blood sports such as bullfighting and bull-baiting for precisely this reason.

However, the wild aurochs, from which modern cattle derive, was undoubtedly even more formidable, being not only larger, more heavily muscled and bigger-horned, but also surely even more aggressive than modern bulls. After all, one of the key functions of domestication is to produce more docile animals that are more amenable to control by human agriculturalists.[62]

Compared to the domestication of aurochs, the domestication of the zebra would seem almost straightforward. Indeed, the successful domestication of aurochs in ancient times might even cause us to reserve our judgement regarding the domesticability of such formidable African mammals as hippos and African buffalo, the possibility of whose domestication Diamond dismisses a priori as preposterous.

Certainly, the domestication of the aurochs surely stands as one of the great achievements of ancient Man.

Reinventing the Wheel?

Baker also seems to be correct in his claim that black Africans never invented the wheel.

However, it must be borne in mind that the same is also true of white Europeans. Instead, Europeans simply copied the design of the wheel from other civilizations and peoples, namely those of the Middle East, probably Mesopotamia, where the wheel seems first to have been developed.

Indeed, most cultures with access to the wheel never actually invented it themselves, for the simple reason that it is far easier to copy a third party’s invention through simple reverse engineering than to invent an already existing technology afresh.

This then explains why the wheel has actually been independently invented, at most, only a few times in history.

The real question, then, is not why the wheel was never invented in sub-Saharan Africa, but rather why it failed to spread throughout that continent in the same way it did throughout Eurasia.

Thus, if the wheel was known, as Baker readily acknowledges it was, in those parts of sub-Saharan Africa that were in contact with outside civilizations (notably in the Horn of Africa), then this raises the question as to why it failed to spread elsewhere in Africa prior to the arrival of Europeans. This indeed is acknowledged to remain a major enigma within the field of African history and archaeology (Law 2011; Chavez et al 2012).

After all, there are no obvious insurmountable geographical barriers preventing the spread of technologies across Africa other than the Sahara itself, and, as Baker himself acknowledges, black Africans in the ‘penetrated’ area had proven amply capable of imitating technological advances introduced from outside.

Why then did the wheel not spread across Africa in the same way it did across Eurasia? Is it possible that African people’s alleged cognitive deficiencies were responsible for the failure of this technology to spread and be copied, since the ability to copy technologies through reverse engineering itself requires some degree of intellectual ability, albeit less than that required for original innovation?

One might argue instead that the African terrain was unsuitable for wheeled transport. However, one of the markers of civilization is surely its very ability to alter the terrain by large, cooperative public works engineering projects, such as the building of roads.

Thus, most of Eurasia is now suitable for wheeled transport in large part only because we, or more specifically our ancestors, have made it so.

Another explanation sometimes offered for the failure of Africans to develop wheeled transportation is that they lacked a suitable draft animal, horses in much of sub-Saharan Africa being afflicted by trypanosomiasis (known in animals as nagana), spread by the tsetse fly.

However, as we have seen above, Baker argues that a race’s track record in successfully domesticating wild animals is itself indicative of the intellectual ability and character of that race. For Baker, then, the failure of sub-Saharan Africans to successfully domesticate any suitable species of potential draft animal (e.g. the zebra) is itself indicative of, and a factor in, their inability to successfully develop advanced civilization.

At any rate, even in the absence of a suitable draft animal, wheels are still useful (e.g. wheel barrows, pulled rickshaws; also the potter’s wheel).

After all, humans can themselves be employed as a draft animal, whether by choice or by force, and, if there is one arguable marker for civilization for which Africa did not lack, and which did not await introduction by Europeans, Moors and Arabs, it was, of course, the institution of slavery.

African Writing Systems?

What then of the alleged failure of sub-Saharan Africans to develop a system of writing? Baker refers to only a single writing system indigenous to sub-Saharan Africa, namely the Vai syllabary, invented in what is today Liberia in the nineteenth century in imitation of foreign scripts. Was this indeed the only writing system indigenous to sub-Saharan Africa?

Of course, writing has long been known in North Africa, and ancient Egypt even lays claim to having invented the first written script, namely hieroglyphs, although most archaeologists believe that the Egyptians were beaten to the punch, once again, by Mesopotamia, with its cuneiform script.

However, this is obviously irrelevant to the question of black African civilization, since the populations of North Africa, including the ancient Egyptians, were largely Caucasoid.[63]

Thus, the Sahara Desert, as a relatively impassable obstacle to human movement throughout most of human history and prehistory (a “geographic filter”, according to Sarich and Miele) that hence impeded gene flow, has long represented, and to some extent still represents, the boundary between the Caucasoid and Negroid races (Race: The Reality of Human Differences: p210).

What then of writing systems indigenous to sub-Saharan Africa? The Wikipedia entry on writing systems of Africa lists several indigenous writing systems of sub-Saharan Africa.

However, save for those of recent origin, almost all of these writing systems seem, from the descriptions on their respective Wikipedia pages, to have been restricted to areas outside of ‘the secluded area’ of Africa as defined by Baker (p334-5).

Thus, excluding the writing systems of North Africa (i.e. Meroitic, Tifinagh and ancient Egyptian hieroglyphs), Ge’ez seems to have been restricted to the area around the Horn of Africa; Nsibidi to the area around the Gulf of Guinea in modern Nigeria; and Adinkra to the coast of West Africa, while the other scripts mentioned in the entry are, like the Vai syllabary, of recent origin.

The only ancient writing system mentioned on this Wikipedia page that was found in what Baker calls ‘the secluded area’ of Africa is Lusona. This seems to have been developed deep in the interior of sub-Saharan Africa, in parts of what is today eastern Angola, north-western Zambia and adjacent areas of the Democratic Republic of the Congo. Thus, it is almost certainly of entirely indigenous origin.

However, Lusona is described by its Wikipedia article as only an ideographic tradition that “function[s] as mnemonic devices to help remember proverbs, fables, games, riddles and animals, and to transmit knowledge”.

It therefore appears to fall far short of a fully developed script in the modern sense.

Indeed, the same seems to be true, albeit to a lesser extent, of the other indigenous writing systems of sub-Saharan Africa listed on the Wikipedia page, namely Nsibidi and Adinkra, each of which seems to represent only a form of proto-writing.

Only Ge’ez seems to have been a fully-developed script, and this was used only in the Horn of Africa, which not only lies outside ‘the secluded area’ as defined by Baker, but whose population is, again according to Baker, predominantly Caucasoid (p225).

Also, Ge’ez seems to have developed from an earlier Middle Eastern script. It is therefore not of entirely indigenous African origin.

It therefore seems to indeed be true that sub-Saharan Africans never produced a fully-developed script in those parts of Africa where they developed beyond the influence of foreign empires.

However, it must here be emphasized that the same is again probably also true of indigenous Europeans.

Thus, as with the wheel, Europeans themselves probably never independently invented a writing system, the Latin alphabet being derived from Greek script, which was itself developed from the Phoenician alphabet, which, like the wheel, first originated in the Middle East.[64]

Indeed, most writing systems were developed, if not directly from, then at least in imitation of, pre-existing scripts. Like the wheel, writing has only been independently reinvented afresh a few times in history.[65]

The question, then, as with the wheel, is not so much why much of sub-Saharan Africa failed to invent a written script, but rather why those written scripts that were in use in certain parts of the continent south of the Sahara nevertheless failed to spread or be imitated over the remainder of that continent.

African Culture: Concluding Thoughts

In conclusion, it certainly seems clear that much of sub-Saharan Africa was indeed backward in those aspects of technology, social structure and culture which Baker identifies as the key components of civilization. This much is true and demands an explanation.

However, blanket statements regarding the failure of sub-Saharan Africans to develop a writing system or two-storey buildings seem, at best, a misleading simplification.

Indeed, Baker’s very notion of what he calls ‘the secluded area’ of Africa is vague and ill-defined, and he never provides a clear definition, or, better still, a map precisely delineating what he means by the term (p334-5).

Indeed, the very notion of a ‘secluded area’ is arguably misconceived, since even relatively remote and isolated areas of the continent that had no direct contact with non-Negroid peoples will presumably have been subject to some indirect influence from outside sub-Saharan Africa, if only through contact with peoples from those regions south of the Sahara that had themselves been influenced by foreign peoples and civilizations.

After all, as we have seen, Europeans also failed to independently develop either the wheel or a writing system for themselves, instead simply copying these innovations from the neighbouring civilizations of the Middle East.

Why then were black Africans south of the Sahara, who were indeed exposed to these technologies in certain parts of their territory, nevertheless unable to do the same?

Pre-Columbian Native American Cultures

Baker’s discussion of the status of the pre-Columbian civilizations, or putative civilizations, of America is especially interesting. Of these, the Mayans definitely stand out, in Baker’s telling, as the most impressive in terms of their scientific and technological achievements.

Baker ultimately concludes, however, that even the Maya do not qualify as a true civilization, largely on moral grounds – namely, their practice of mass sacrifices and cannibalism.

Yet, as we have seen, this is to judge the Mayans by distinctly western moral standards.

No doubt if western cultures were to be judged by the moral values of the Mayans, we too would be judged just as harshly. Perhaps they would condemn us precisely for not massacring enough of our citizens in order to propitiate the gods.

However, even when ranked solely on their technological and scientific achievements, the Mayans still represent something of a paradox.

On the one hand, their achievements in mathematics and astronomy were impressive.

Baker educates us that it was the Mayans, not the Hindus or Arabs more often credited with the innovation, who first invented the concept of zero – or rather, to put the matter more precisely, “invent[ed] a ‘local value’ (or ‘place notational’) system of numeration that involved zero: that is to say, a system in which the value of each numerical symbol depended on its position in a series of such symbols, and the zero, if required, took its place in this series” (p552).

Thus, Baker writes:

“The Maya had invented the idea [of zero] and applied it to their vigesimal system [i.e. using a base of twenty] before the Indian mathematicians had thought of it and used it in denary [i.e. decimal] notation” (p522).[66]

Thus, Baker concludes:

“The mathematics, astronomy, and calendar of the Middle Americans suggest unqualified acceptance into the ranks of the civilized” (p525).

However, on the other hand, according to Baker’s account:

“They had no weights… no metal-bladed hoes or spades and no wheels (unless a few toys were actually provided with wheels and really formed part of the Mayan culture)” (p524).

Yet, as Baker alludes to in his rather disparaging reference to “a few toys”, it now appears that these toys were indeed part of the Maya culture.

Thus, far from failing to invent the wheel, Native Americans are one of the few peoples in the world with an unambiguous claim to having indeed invented the wheel entirely independently, since the possibility of wheels being introduced through contact with Eurasian civilizations is exceedingly remote.

Thus, the key question is not why Native American civilizations failed to invent the wheel, for they did indeed invent it, but rather why they failed to make full use of this remarkably useful invention, employing it only for seemingly frivolous items resembling toys (whose real purpose is unknown) rather than for transport, or indeed for ceramics.

Terrain may have been a factor. As mentioned above, one of the markers of a true civilization is arguably its very ability to alter its terrain by large-scale engineering projects such as the building of roads. However, obviously some terrains pose greater difficulties in this respect, and the geography of mesoamerica is particularly uninviting.

As in respect of sub-Saharan Africa, another factor sometimes cited is the absence of a draft animal. The Inca, but not the Aztecs and Maya, did have the llama. However, llamas are not strong enough to carry humans or to pull large carts.

Of course, for Baker, as we have seen, the domestication of suitable species of non-human animal is itself indicative of a people’s capacity for advanced civilization.

However, in the Americas, most large wild mammals of the sort possibly suited for domestication as a draft animal were wiped out by the first humans to arrive on the continent, the former having evolved in isolation from humans, and hence being completely evolutionarily unprepared for the sudden influx of humans with their formidable hunting skills.[67]

However, as noted in respect of Africa, the wheel is useful even in the absence of a draft animal, since humans themselves can be employed for this purpose (e.g. the wheel barrow and pulled rickshaw).

As for the Mayan script, this was also, according to Baker, quite limited. Thus, Baker reports:

“There was no way of writing verbs, and abstract ideas (apart from number) could not be inscribed. It would not appear that the technique even of the Maya lent itself to a narrative form, except in a very limited sense. Most of the Middle Americans conveyed non-calendrical information only by speech or by the display of a series of paintings” (p524).

Indeed, he reports that “nearly all their inscriptions were concerned with numbers and the calendar” (p524).

“The Middle Americans had nothing that could properly be called a narrative script” (p523-4).

Baker vs Diamond: The Rematch

However, departing from Baker’s conclusions, I regard the achievements of the Mesoamerican civilizations as, overall, quite impressive.

This is especially so when one takes into account, not only their complete isolation from the Old World civilizations of Eurasia, but also the other factors identified by Jared Diamond in his rightly-acclaimed Guns, Germs and Steel.

Thus, whereas Eurasia is oriented largely on an east-to-west axis, stretching from China and Japan in the East to western Europe and North Africa in the West, America is a long, narrow continent oriented instead from north to south, and quite narrow in places, especially at the Isthmus of Panama, where the North American continent meets South America and which, at its narrowest point, is less than fifty miles across.

As Diamond emphasizes, because climate varies with latitude (i.e. distance from the equator), this means that different parts of the Americas have very different climates, making the movement and transfer of crops, domesticated animals and people much more difficult.

This, together with the difficulty of the terrain, might explain why even the Incas and Aztecs, though contemporaneous, seem to have been largely if not wholly unaware of one another’s existence, and certainly had no direct contact.

As a result, Native American cultures developed, not only in complete isolation from Old World civilizations, but largely in isolation even from one another.

Moreover, the Americas had few large domesticable mammals, almost certainly because the first settlers of the continent hunted them to extinction on first arrival, the mammals, having evolved in complete isolation from humans, being entirely unprepared for the sudden influx of humans with their formidable hunting skills.

In these conditions, the achievements of the Mesoamerican civilizations, especially the Mayans, seem to me quite impressive, all things considered – certainly far more impressive than the achievements of, say, sub-Saharan Africans or Australian Aboriginals.

If these latter groups can indeed be determined to possess lesser innate intellectual capacity as compared to, say, Europeans or East Asians, then I feel it is premature to say the same of the indigenous peoples of the Americas.

Artistic Achievement

In addition to ranking cultures on scientific, technological and moral criteria, Baker also assesses the quality of their artwork (p378-81; p411-17; p545-549). However, judgements of artistic quality, like moral judgements, are necessarily subjective.

Thus, Baker disparages black African art as non-naturalistic (p381) yet also extols the decorative art of the Celts, which is mostly non-figurative and abstract (p261-2).

However, interestingly, with regard to styles of music, Baker recognises the possibility of cultural bias, suggesting that European explorers, looking for European-style melody and harmony, failed to recognise the rhythmical qualities of African music, which are, Baker remarks, “perhaps unequalled in the music of any other race of mankind” (p379).[68]

“A Reminder of What Was Possible”?

The fact that ‘Race’ remains a rewarding read some forty years after its first publication is an indictment of the hold of political correctness over both science and the publishing industry.

In the intervening years, despite all the advances of molecular genetics, the scientific understanding of race seems to have progressed but little, impeded by political considerations.

Meanwhile, the study of morphological differences between races seems to have almost entirely ceased, and a worthy successor to Baker’s ‘Race’, incorporating the latest genetic data, has, to my knowledge, yet to be published.

At the conclusion of the first section of his book, dealing with what he calls “The Historical Background”, Baker, bemoaning the impact of censorship and what would today be called political correctness and cancel culture on both science and the publishing industry, recommends the chapter on race from a textbook published in 1928 (namely, Contemporary Sociological Theories by Pitirim Sorokin) as “well worth reading”, even then, over forty years later, if only “as a reminder of what was still possible before the curtain went down” (p61).

Today, some forty years after Baker penned these very words and as the boundaries of acceptable opinion have narrowed yet further, I recommend Baker’s ‘Race’ in much the same spirit – as both an historical document and “a reminder of what was possible”.

__________________________

Endnotes

[1] Genetic studies often allow us to distinguish homology from analogy, because the same or similar traits in different populations often evolve through different genetic mutations. For example, Europeans and East Asians evolved lighter complexions after leaving Africa, in part, through mutations in different genes (Norton et al 2007). Similarly, lactase persistence has evolved through mutations in different genes among Europeans than among some sub-Saharan Africans (Tishkoff et al 2009). Of course, at least in theory, the same mutation in the same gene could occur in different populations, thus providing an example of convergent evolution even at the genetic level. However, with the analysis of a large number of genetic loci, especially in non-coding DNA, where mutations are unlikely to be selected for or against and hence are lost or retained at random in different populations, this problem is unlikely to lead to errors in determining the relatedness of populations.

[2] In his defence, the Ainu are not one of the groups upon whom Baker focuses in his discussion, and are only mentioned briefly in passing (p158; p173; p424) and at the very end of the book, in his “Table of Races and Subraces”, where he attempts to list, and classify by race, all the groups mentioned in the book, howsoever briefly (p624-5).

[3] Although we no longer need to rely on morphological criteria in order to determine the relatedness between populations, differences between racial groups in morphology and bodily structure remain an interesting, and certainly a legitimate, subject for scientific study in their own right. Unfortunately, however, the study and measurement of such differences seems to have all but ceased among anthropologists. One result is that much of the data on these topics is quite old. Thus, HBDers, Baker included, are sometimes criticized for citing studies published in the nineteenth and early-twentieth century. In principle, there is, however, nothing wrong with citing data from the nineteenth or early-twentieth century, unless critics can show that the methodologies adopted have subsequently been shown to be flawed. However, it must be acknowledged that the findings of such studies with respect to morphology may no longer apply to modern populations, as a result of recent population movements and improvements in health and nutrition, among other factors. At any rate, the reason for the paucity of recent data is the taboo associated with such research.

[4] This is a style of formatting I have not encountered elsewhere. It makes it difficult to bring oneself to skip over the material rendered in smaller typeface since it is right there in the main body of the text, and indeed Baker himself claims that this material is “more technical and more detailed than the rest (but not necessarily less interesting)” (pix).

[5] Yet another source of potential terminological confusion results from the fact that, as will be apparent from many passages from the book quoted in this review, Baker uses the word “ethnic” to refer to differences that would be better termed “racial” – i.e. when referring to biologically-inherited physical and morphological differences between populations. Thus, for example, he uses the term “ethnic taxon” as “a comprehensive term that can be used without distinction for any of the taxa that are minor to species: that is to say, races, subraces and local forms” (p4). Similarly, he uses the phrase “the ethnic problem” to refer to the “whole subject of equality and inequality among the ethnic taxa of man” (p6). However, as Baker acknowledges, “English words derived from the Greek ἔθνος (ethnic, ethnology, ethnography, and others) are used by some authors in reference to groups of mankind distinguished by cultural or national features, rather than descent from common ancestors” (p4). Yet, in defending his adoption of this term, he notes “this usage is not universal” (p4). This usage has, I suspect, become even more prevalent in the years since the publication of Baker’s book. Moreover, in my experience, the term ‘ethnic’ is sometimes also used as a politically correct euphemism for the word ‘race’, both colloquially and in academia.

[6] In both cases, the source of potential confusion is the same, since both terms, though referring to a race, are derived from geographic terms (Europe and the Caucasus region, respectively), yet the indigenous homelands of the races in question are far from identical to the geographic region referred to by the term. The term Asian, when used as an ethnic or racial descriptor, is similarly misleading. For example, in British-English, Asian, as an ethnic term, usually refers to South Asians, since South Asians form a larger and more visible minority ethnic group in the UK than do East Asians. However, in the USA, the term Asian is usually restricted to East Asians and Southeast Asians – i.e. those formerly termed Mongoloid. The British-English usage is more geographically correct, but racially misleading, since populations of the Indian subcontinent, like those from the Middle East (also part of the Asian continent) are actually genetically closer to southern Europeans than to East Asians and were generally classed as Caucasian by nineteenth and early-twentieth century anthropologists, and are similarly classed by Baker himself. This is one reason that the term Mongoloid, despite pejorative connotations, remains useful.

[7] Moreover, the term Mongoloid is especially confusing given that it has also been employed to refer to people suffering from a developmental disability and chromosomal abnormality (Down Syndrome). While both usages are dated, and the racial meaning is actually the earlier one from which the later medical usage derives, it is the latter usage which seems, in my experience, to retain greater currency, the word ‘Mongoloid’ being sometimes employed as a rather politically-incorrect insult implying a mental handicap. Therefore, while I find annoying the euphemism treadmill whereby once quite acceptable terms (e.g. ‘negro’, ‘coloured people’) are suddenly and quite arbitrarily deemed offensive, the term ‘Mongoloid’, unlike these other, etymologically-speaking quite innocent, terms, is understandably offensive to people of East Asian descent, given this dual meaning.

[8] For example, the word Asia, the source of the ethnonym, Asian, derives from the Greek Ἀσία, which originally referred only to Anatolia, at the far western edge of what would now be called Asia, the inhabitants of which region are not now, nor have ever likely been, Asian in the current American sense. Indeed, the very term Asia is a Eurocentric concept, grouping together many diverse peoples, fauna, flora and geographic zones, and whose border with Europe is quite arbitrary.

[9] The main substantive differences between the rival taxonomies of different racial theorists reflect the perennial divide between lumpers and splitters. There is also the question of precisely where the line is to be drawn between one race and another in clinal variation between groups, and whether a hybrid or clinal population sometimes constitutes a separate race in and of itself.

[10] For example, in Nicholas Wade’s A Troublesome Inheritance, this history of the misuse of the race concept comes in Chapter Two, titled ‘Perversions of Science’; in Philippe Rushton’s Race, Evolution and Behavior: A Life History Perspective (which I have reviewed here), this historical account is postponed until Chapter Five, titled ‘Race and Racism in History’; in Jon Entine’s Taboo: Why Black Athletes Dominate Sports and Why We’re Afraid to Talk About it, it is delayed until Chapter Nine, titled ‘The Origins of Race Science’; whereas, in Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here), these opening chapters discussing the history of racial science expand to fill almost half the entire book.

[11] Indeed, somewhat disconcertingly, even Hitler’s Mein Kampf is taken seriously by Baker, the latter acknowledging that “the early part of [Hitler’s] chapter dealing with the ethnic problem is quite well-written and not uninteresting” (p59) – or perhaps this is only to damn with faint praise.

[12] Thus, at the time Stoddard authored The Rising Tide of Color Against White World-Supremacy in 1920, with a large proportion of the world under the control of European colonial empires, a contemporary observer might be forgiven for assuming that what Stoddard called White World-Supremacy, was a stable, long-term, if not permanent arrangement. However, Stoddard accurately predicted the demographic transformation of the West, what some have termed The Great Replacement or A Third Demographic Transition, almost a century before this began to become a reality.

[13] The exact connotations of this passage may depend on the translation. Thus, other translators render the passage that Manheim translates as “The mightiest counterpart to the Aryan is represented by the Jew” instead as “The Jew offers the most striking contrast to the Aryan”, which alternative translation has rather different, and less flattering, connotations, given that Hitler famously extols the Aryan as the master race. The rest of the passage quoted remains, when taken in isolation, broadly flattering, however.

[14] To clarify, both Boas and Montagu are briefly mentioned in later chapters. For example, Boas’s now largely discredited work on cranial plasticity is discussed by Baker at the end of his chapter on ‘Physical Differences Between the Ethnic Taxa of Man: Introductory Remarks’ (p201-2). However, this is outside of Baker’s chapters on “The Historical Background”, and therefore Boas’s role in (allegedly) shaping the contemporary consensus of race denial is entirely unexplored by Baker. For discussion on this topic, see Carl Degler’s In Search of Human Nature; see also Chapter Two of Kevin Macdonald’s The Culture of Critique (which I have reviewed here) and Chapter Three of Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here).

[15] Thus, there was no new scientific discovery that presaged or justified the abandonment of biological race as an important causal factor in the social and behavioural sciences. Later scientific developments, notably in genetics, were certainly later co-opted in support of this view. However, the two developments did not coincide in time. Therefore, whatever the true origins of the theory of racial egalitarianism, whether one attributes it to horror at the misuse of race science by the Nazi regime, or the activism of certain influential social scientists such as Boas and Montagu, one thing is certain – namely, the abandonment, or at least increasing deemphasis, of the race category in the social and behavioural sciences was originally motivated by political rather than scientific considerations. See Carl Degler’s In Search of Human Nature; see also Chapter 2 of Kevin Macdonald’s Culture of Critique (which I have reviewed here) and Chapter Three of Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here).

[16] That OUP gave up the copyright is, of course, to be welcomed, since it means, rather than gathering dust on the shelves of university libraries, while the few remaining copies still in circulation from the first printing rise in value, it has enabled certain dissident publishing houses to release new editions of this now classic work.

[17] Baker suggests that, at the time he wrote, behavioural differences between pygmy chimpanzees and other chimpanzees had yet to be demonstrated (p113-4). Today, however, pygmy chimpanzees are known to differ behaviourally from other chimps, being, among other differences, less prone to intra-specific aggression and more highly sexed. However, they are now usually referred to as bonobos rather than pygmy chimpanzees, and are recognized as a separate species from other chimpanzees, rather than a mere subspecies.

[18] This is, at least, how Baker describes this species complex and how it was traditionally understood. Researching the matter on the internet, however, suggests that whether this species complex represents a true ring species is a matter of some dispute (e.g. Liebers et al 2006).

[19] In cases of matings between sheep and goats that result in offspring, the resulting offspring themselves are usually, if not always, infertile. Moreover, according to the wikipedia page on the topic, the question of whether sheep and goats can ever successfully interbreed is more complex than Baker suggests.

[20] I have found no evidence to support the assertion in some of the older nineteenth-century literature that women of lower races have difficulty birthing offspring fathered by European men, owing to the greater brain- and head-size of European infants. Summarizing this view, contemporary Russian racialist Vladimir Avdeyev in his impressively encyclopaedic Raciology: The Science of the Hereditary Traits of Peoples, claims:

“The form of the skull of a child is directly connected with the characteristics of the structure of the mother’s pelvis—they should correspond to each other in the goal of eliminating death in childbirth. The mixing of the races unavoidably leads to this, because the structure of the pelvis of a mother of a different race does not correspond to the shape of the head of [the] mixed infant; that leads to complications during childbirth” (Raciology: p157).

Thus, Avdeyev claims, owing to race differences in brain size:

“Women on lower races endure births very easily, sometimes even without any pain, and only in highly rare cases do they die from childbirth. But this can never be said of women of lower races who birth children of white fathers” (Raciology: p157).

Thus, he quotes an early-twentieth century Russian race theorist as claiming:

“American Indian women… often die in childbirth from pregnancies with a child of mixed blood from a white father, whereas pure-blooded children within them are easily born. Many Indian women know well the dangers [associated with] a pregnancy from a white man, and therefore, they prefer a timely elimination of the consequence of cross-breeding by means of fetal expulsion, in avoidance of it” (Raciology: p157-8).

This, interestingly, accords with the claim of infamous late-twentieth century race theorist J Philippe Rushton, in the ‘Preface to the Third Edition’ of his book Race, Evolution and Behavior (which I have reviewed here), that, as compared to whites and Asians, blacks have narrower hips, giving them “a more efficient stride”, which provides an advantage in many athletic events, and that:

“The reason Whites and East Asians have wider hips than Blacks, and so make poorer runners, is because they give birth to larger brained babies” (Race, Evolution and Behavior: p11-12).

Thus, Rushton explains elsewhere:

“Increasing brain size [over the course of hominid evolution] was associated with a broadening of the pelvis. The broader pelvis provides a wider birth canal, which in turn allows for delivery of larger-brained offspring” (Odyssey: My Life as a Controversial Evolutionary Psychologist: p284-5).

However, contrary to the claim of Avdeyev, I find no support from contemporary delivery-room data for the claim that women of so-called ‘lower races’ experience greater birth complications, or higher mortality rates, when birthing offspring fathered by European males. Rather, it is only differences in overall body-size, not brain-size, that seem to be the key factor, with East Asian women having greater difficulties birthing offspring fathered by European males because of their smaller frames, even though East Asians have brains as large as or larger than those of Europeans (Nystrom et al 2008).

Neither is it true that, where inter-racial mating has not occurred, then, on account of the small brain-size of their babies, “Women on lower races endure births very easily, sometimes even without any pain, and only in highly rare cases do they die from childbirth” (Raciology: p157). On the contrary, data from the USA seem to indicate a somewhat higher rate of caesarean delivery among African-American women as compared to white American women (Braveman et al 1995; Edmonds et al 2013; Getahun et al 2009; Valdes 2020).

[21] As regards the effects of interracial hybridization on other traits besides fertility, the results are mixed. Thus, one study reported what the authors interpreted as a hybrid vigour effect on the g-factor of general intelligence among the offspring of white-Asian unions in Hawaii, as compared to the offspring of same-race couples matched for educational and occupational levels (Nagoshi & Johnson 1986). Similarly, Lewis (2010) attributed the higher attractiveness ratings accorded to the faces of mixed-race people to heterosis. Meanwhile, another study found that height was positively correlated with the distance between the birthplaces of one’s parents, itself presumably a correlate of their relatedness (Koziel et al 2011). On the other hand, however, behavioural geneticist Glayde Whitney suggests that hybrid incompatibility may explain the worse health outcomes, and shorter average life-spans, of African Americans as compared to whites in the contemporary USA, owing to the former’s mixed African and European ancestry (Whitney 1999). One specific negative health outcome for some African-Americans resulting from a history of racial admixture is also suggested by Helgadottir et al (2006). It is notable that, whereas recent studies tend to emphasize the (supposed) positive genetic effects resulting from interracial unions, the older literature tends to focus on (supposed) negative effects of interracial hybridization (see Frost 2020). No doubt this reflects the differing zeitgeister of the two ages (Provine 1976; Khan 2011c).

[22] While they did not directly interbreed with one another, both Northern Europeans and sub-Saharan Africans may, however, each have interbred, to some extent, with their immediate neighbours, who, in turn, interbred with their own immediate neighbours, who may, in turn, have interbred indirectly with the other group. There may therefore have been some indirect gene flow even between such distantly related populations as Northern Europeans and sub-Saharan Africans, even if no Nordic European ever encountered, let alone mated with, a black African. This creates a situation somewhat analogous to the ring species discussed above. Thus, there was probably some gene flow even across some of the geographic barriers that circumscribe and delineate the ancient boundaries of the great continental macro-races (e.g. the Sahara and the Himalayas). Indeed, there may even have been gene flow between Eurasia and the Americas at the Bering Strait. Perhaps only Australian Aboriginals have been completely reproductively isolated for millennia.

[23] Interestingly, while languages and cultures vary in the number of colours that they recognise and have words for, both the ordering of the colours recognised, and the approximate boundaries between different colours, seem to be cross-culturally universal. Thus, some languages have only two colour terms, which are always equivalent to ‘light’ and ‘dark’. Then, if a third colour term is used, it is always equivalent to ‘red’. Next comes either ‘green’ or ‘yellow’. Experimental attempts to teach colour terms not matching the familiar colours show that individuals learn these terms much less quickly than they do the familiar colour terms recognised in other languages. This, of course, suggests that our colour perception is both innately programmed into the mind and cross-culturally universal (see Berlin & Kay, Basic Color Terms: Their Universality and Evolution).

[24] Indeed, as I discuss later, with respect to what Baker calls subraces, we may long since have passed this point, at least in Europe and North America. While morphological differences certainly continue to exist, at the aggregate, statistical level, between populations from different regions of Europe, there is such overlap, such a great degree of variation even within families, and the differences are so fluid, gradual and continuous, that I suspect such terms as the Nordic race, Alpine race, Mediterranid race and Dinaric race have likely outlived whatever usefulness they may once have had and are best retired. The differences are now best viewed as continuous and clinal.

[25] While Ethiopians and other populations from the Horn of Africa are indeed a hybrid or clinal population, representing an intermediate position between Caucasians and other black Africans, Baker perhaps goes too far in claiming:

“Aethiopids (‘Eastern Hamites’ or ‘Erythriotes’) of Ethiopia and Somaliland are an essentially Europid subrace with some Negrid admixture” (p225).

Thus, summarizing the findings of one study from the late-1990s, Jon Entine reports:

“Ethiopians [represent] a genetic mixture of about 60 percent African and 40 percent Caucasian” (Taboo: Why Black Athletes Dominate Sports And Why We’re Afraid To Talk About It: p115).

The study upon which Entine based this conclusion looked only at mitochondrial DNA and Y chromosome data. More recent studies have incorporated autosomal DNA as well. However, while eschewing terms such as ‘Caucasian’, such studies broadly confirm that there exist substantial genetic affinities between populations from the Horn of Africa and the Middle East (e.g. Ali et al 2020; Khan 2011a; Khan 2011b; Hodgson 2014).

[26] Thus, Lewontin famously showed that, when looking at individual genetic loci, most variation is within a single population, rather than between populations, or between races (Lewontin 1972). However, when looking at phenotypic traits that are caused by polygenes, it is easy to see that there are many such traits in which the variation within the group does not dwarf that between groups – for example, differences in skin colour as between Negroes and Nordics, or differences in stature between, say, Pygmies and even neighbouring Bantu tribes.

[27] In addition to discussing morphological differences between races, Baker also discusses differences in scent (p170-7). This is a particularly emotive issue, given the negative connotations associated with smelling bad. However, given the biochemical differences between races, and the fact that even individuals of the same race, even the same family, are distinguishable by scent, it is inevitable that persons of different races will indeed differ in scent, and unsurprising that people would generally prefer the scent of their own group. There is substantial anecdotal evidence that this is indeed the case. In general, Baker reports that East Asians have less body odour, whereas both Caucasoids and blacks have greater body odour. Partly this is explained by the relative prevalence of dry and wet ear wax, which is associated with body odour, varies by population and is one of the few easily detectable phenotypic traits in humans that is determined by simple Mendelian inheritance (see McDonald, Myths of Human Genetics). Intriguingly, Nicholas Wade speculates that dry earwax, which is associated with less strong body-odour, may have evolved through sexual selection in colder climates where, due to the cold, more time is spent indoors, in enclosed spaces, where body odour is hence more readily detectable, and producing less scent may have conferred a reproductive advantage (A Troublesome Inheritance: p91). This may explain some of the variation in the prevalence of dry and wet ear wax respectively, with dry earwax predominating only in East Asia, but also being found, albeit to a lesser degree, among Northern Europeans. On the other hand, however, although populations inhabiting colder climates may spend more time indoors, populations inhabiting tropical climates might be expected to sweat more, due to the greater heat, and hence build up greater body odour.

[28] A few exceptions include where Baker discusses the small but apparently statistically significant differences between the skulls of ‘Celts’ and Anglo-Saxons (p257), and where he mentions statistically significant differences between ancient Egyptian skulls and those of Negroes (p518).

[29] Baker does, however, acknowledge that:

“Some Jewish communities scattered over the world are Jews simply in the sense that they adhere to a particular religion (in various forms); they are not definable on an ethnic basis” (p246).

Here, Baker has in mind various communities that are not either Ashkenazi or Sephardic (or Mizrahi), such as the Beta Israel of Ethiopia, the Lemba of Southern Africa and the Kaifeng Jews resident in China. Although Baker speaks of “communities”, the same is obviously true of recent converts to Judaism.

[30] Thus, of the infamous Khazar hypothesis, now almost wholly discredited by genetic data, but still popular among some anti-Zionists, because it denies the historical connection between (most) contemporary Jews and the land of Israel, and among Christian anti-Semites, because it denies that the Ashkenazim are indeed the ‘chosen people’ of the Old Testament, Baker writes:

“It is clear they [the Khazars] were not related, except by religion, to any modern group of Jews” (p34).

[31] Baker thus puts the intellectual achievements of the Ashkenazim in the broader context of other groups within this same subrace, including the Assyrians, Hittites and indeed Armenians themselves. Thus, he concludes:

“The contribution of the Armenid subrace to civilization will bear comparison with that of any other” (p246-7).

Some recent genetic studies have indeed suggested affinities between Ashkenazim and Armenian populations (Nebel et al 2001; Elhaik 2013).

[32] In Baker’s defence, the illustration in question is actually taken from the work of a Jewish anthropologist, Joseph Jacobs (Jacobs 1886). Jacobs’ findings on this topic are summarized in this entry in the 1906 Jewish Encyclopedia, entitled Nose, authored by Jacobs and Maurice Fishberg, another Jewish anthropologist, which reports that the ‘hook nose’ stereotypically associated with Jewish people is actually found in only a minority of European Jews (Jacobs & Fishberg 1906). However, such noses do seem to be more common among Jews than among at least some of the host populations among whom they reside. The wikipedia article on Jewish noses cites this same entry from the Jewish Encyclopaedia as suggesting that the prevalence of this shape of nose is actually no greater among Jews than among populations from the Mediterranean region (hence the supposed similar shape of so-called Roman noses). However, the Jewish Encyclopaedia entry itself does not actually seem to say any such thing. Instead, it reports:

“[As compared with] non-Jews in Russia and Galicia… aquiline and hook-noses are somewhat more frequently met with among the Jews” (Jacobs & Fishberg 1906).

The entry also reports that, measured in terms of their nasal index, “Jewish noses… are mostly leptorhine, or narrow-nosed” (Jacobs & Fishberg 1906). Similarly, Joseph Jacobs reports in ‘On the Racial Characteristics of Modern Jews’:

“Weisbach’s nineteen Jews vied with the Patagonians in possessing the longest nose (71 mm.) of all the nineteen races examined by him … while they had at the same time the narrowest noses (34 mm)” (Jacobs 1886).

This data, suggesting that Jewish noses are indeed long but are also very narrow, contradicts Baker’s claim that the characteristic Ashkenazi nose is “large in all dimensions [emphasis added]” (p239). However, such a nose shape is consistent with Jews having evolved in an arid desert environment, such as the Negev or other nearby deserts, or in the Judean mountains, where the earliest distinctively Jewish settlements are thought to have developed. Thus, anthropologist Stephen Molnar writes:

“Among desert and mountain peoples the narrow nose is the predominant form” (Human Variation: Races, Types and Ethnic Groups: p196).

As Baker himself observes, the nose width characteristic of a population correlates with both the temperature and humidity of the environment in which they evolved (p310-311). However, he reports, the correlations are much weaker among the indigenous populations of the American continent, presumably because humans only relatively recently populated that continent, and therefore have yet to become wholly adapted to the different environments in which they find themselves (p311).
A further factor affecting nose width is jaw size. This might explain why Australian Aboriginals have extremely wide noses despite much of the Australian landmass being dry and arid (Human Variation: Races, Types and Ethnic Groups: p196).

[33] Hans Eysenck refers in his autobiography to a study supposedly conducted by one of his PhD students that ostensibly demonstrated statistically that people, both Jewish and Gentile, actually perform at no better than chance when attempting to distinguish Jews from non-Jews, even after extended interaction with one another (Rebel with a Cause: p35). However, since he does not cite a source or reference for this study, it was presumably unpublished, and must be interpreted with caution. Eysenck himself, incidentally, was of closeted half-Jewish ancestry, practising what Kevin Macdonald calls crypsis, which may be taken to suggest he was not entirely disinterested with regard to the question of the extent to which Jews can be recognized by sight. The only other study I have found addressing the quite easily researchable, if politically incorrect, question of whether people can identify Jews from non-Jews on the basis of phenotypic differences is Andrzejewski et al (2009).

[34] This is one of the few occasions in the book where I recall Baker actually mentioning whether the morphological differences between racial groupings that he describes are statistically significant.

[35] Interestingly, Stephen Oppenheimer, in his book Origins of the British, posits a link between the so-called Celtic regions of the British Isles and populations from one particular area of the Mediterranean, namely the Iberian peninsula, especially the Basques, themselves probably the descendants of the original pre-Indo-European inhabitants of the peninsula (see Oppenheimer 2006; see also Blood of the Isles). This seemingly corroborates the otherwise implausible mythological account of the peopling of Ireland provided in Lebor Gabála Érenn, which claims that the last major migration to, and invasion of, Ireland, from which most modern Irish descend, arrived from Spain in the form of the Milesians. This mythological account may derive from the similarity between the Greek and Latin words for the two regions, namely Iberia and Hibernia respectively, and between the words Gael and Galicia, and the belief of some ancient Roman writers, notably Orosius and Tacitus, that Ireland lay midway between Britain and Spain (Carey 2001). However, while some early population genetic studies were indeed interpreted to suggest a connection between populations from Iberia and the British Isles, this interpretation has largely been discredited by more recent research.

[35] Actually, the position with regard to hair and eye colour is rather more complicated. On the one hand, hair colour does appear to be darkest in the ostensibly ‘Celtic’ regions of the British Isles. Thus, Carleton Coon, in his 1939 book, The Races of Europe, reports that, with regard to hair colour:

“England emerges as the lightest haired of the four major divisions of the British Isles, and Wales as the darkest” (The Races of Europe: p385).

Likewise, Coon reports that, in Scotland:

“Jet black hair is commoner in the western highlands than elsewhere, and is statistically correlated with the greatest survival of Gaelic speech” (The Races of Europe: p387).

However, patterns of eye colour diverge from and complicate this picture. Thus, Coon reports:

“Whereas the British are on the whole lighter-haired than the Irish, they are at the same time darker-eyed” (The Races of Europe: p388).

Indeed, contrary to the notion of the Irish as a people with substantial Mediterranean racial affinities, Coon claims:

“There is probably no population of equal size in the world which is lighter eyed, and bluer eyed, than the Irish” (The Races of Europe: p381).

On the other hand, the Welsh, in addition to being darker-haired than the English, are also darker-eyed, with a particularly high prevalence of dark eyes being found in certain more isolated regions of Wales (The Races of Europe: p389).
Interestingly, as far back as the time of the Roman Empire, the Silures, a Brittonic tribe occupying most of South-East Wales and known for their fierce resistance to the Roman conquest, were described by Roman writers Tacitus and Jordanes (the Romans themselves being, of course, a Mediterranean people) as “swarthy” in appearance and as possessing black curly hair.
The same is true of the Cornish, who also until recently spoke a Celtic language, and who are, Coon reports, “the darkest eyed of the English” (The Races of Europe: p389). Dark hair is also more common in Cornwall (The Races of Europe: p386). Cornwall is, Coon therefore reports, “the darkest county in England” (The Races of Europe: p396). (However, with the historically unprecedented mass migration of non-whites into the UK in the latter half of the twentieth century and beyond, this is, of course, no doubt no longer true.)
Yet another complicating factor is the prevalence of red hair, which is also associated with the ‘Celtic’ regions of the British Isles, but is hardly a Mediterranean character, and which, like dark hair, reaches its highest prevalence in Wales (The Races of Europe: p385). Baker, for his part, does not dwell on this point, but does acknowledge that “there is rather a high proportion of people with red hair in Wales”, something for which, he claims, “no satisfactory explanation… has been provided” (p265).
Interestingly, Baker is skeptical regarding the supposed association of the ancient Celts with ginger or auburn hair. He traces this belief to a single casual remark of Tacitus. However, he suggests that the Latin word used, rutilae, is actually better translated as ‘red (inclining to golden yellow)’, and was, he observes, also used to refer to the Golden Fleece and to gold coinage (p257).

[37] The genetic continuity of the British people is, for example, a major theme of Stephen Oppenheimer’s The Origins of the British (see also Oppenheimer 2006). It is also a major conclusion of Bryan Sykes’s Blood of the Isles, which concludes:

“We are an ancient people, and though the [British] Isles has been the target of invasion and opposed settlement from abroad ever since Julius Caesar first stepped onto the shingle shores of Kent, these have barely scratched the topsoil of our deep rooted ancestry” (Blood of the Isles: p338).

However, population genetics is an extremely fast-moving science, and recent research has revised this conclusion, suggesting a replacement of around 90% of the population of the British Isles, albeit in very ancient times (around 2000 BCE), associated with the spread of the Bell Beaker culture and Steppe-related ancestry, presumably deriving from the Indo-European expansion (Olalde et al 2018). Also, recent population genetic studies suggest that the Anglo-Saxons actually made a greater genetic contribution to the ancestry of the English, especially those from Eastern England, than formerly thought (e.g. Martiniano et al 2016; Schiffels et al 2016).

[38] However, in The Origins of the British, Stephen Oppenheimer proposes an alternative route of entry and point of initial disembarkation, suggesting that the people whom we today habitually refer to as ‘Celts’ arrived, not from Central Europe as traditionally thought, but rather up the Atlantic seaboard from the west coasts of France and Iberia. This is consistent with some archaeological evidence (e.g. the distribution of passage graves) suggesting longstanding trade and cultural links up the Atlantic seaboard from the Mediterranean region, through the Basque country, into Brittany, Cornwall, Wales and Ireland. This would also provide an explanation for what Baker claims is a Mediterranid component in the ancestry of the Welsh and Irish, as supposedly evidenced in the distribution of blood groups and the prevalence of dark hair and eye colours as recorded by Beddoe.

[39] Interestingly, in addition to gracilization having occurred least, if at all, in Fuegians and Aboriginals, Wade also reports that:

“Gracilization of the skull is most pronounced in sub-Saharan Africans and East Asians, with Europeans retaining considerable robustness” (A Troublesome Inheritance: p167).

This is an exception to what Steve Sailer calls ‘Rushton’s Rule of Three’ (see here) and, given that Wade associates gracilization with domestication and pacification (as well as neoteny), suggests that, at least by this criterion, Europeans evince less evidence of pacification and domestication than do black Africans.

[40] Actually, the meaning of the two terms is subtly different. ‘Paedomorphy’ refers to the retention of juvenile or infantile traits into adulthood. ‘Neoteny’ refers to one particular process whereby this end-result is achieved, namely the slowing of some aspects of physiological development. However, ‘paedomorphy’ can also result from another process, namely ‘progenesis’, where, instead, some aspects of development are actually sped up, such that the developing organism reaches sexual maturity earlier, before reaching full maturity in other respects. In humans, most examples of paedomorphy result from the former process, namely ‘neoteny’.

[41] These genitalia, of course, contrast with those of neighbouring Negroids, at least according to popular stereotype. For his part, Baker accepts the stereotype that black males have large penes. However, he cites no quantitative data, remarking only:

“That Negrids have large penes is sometimes questioned, but those who doubt it are likely to change their minds if they will look at photographs 8, 9, 20, 23, 29, and 37 in Bernatzig’s excellently illustrated book ‘Zwischen Weissem Nil und Belgisch-Kongo’. They represent naked male Nilotids and appear convincing” (p331).

But six photos, presumably representing just six males, hardly represent a convincing sample size. (I found several of the numbered pictures online by searching for the book’s title, and each showed only a single male.) Interestingly, Baker is rightly skeptical regarding claims of differences in the genitalia between European subraces, given the intimate nature of the measurements required, writing:

“It is difficult to obtain reliable measurements of these parts of the body and statements about subracial differences in them must not be accepted without confirmation” (p219).

[42] Among the traits that have been associated with neoteny in humans are our brain size, growth patterns, hairlessness, inventiveness, upright posture, spinal curvature, smaller jaws and teeth, forward-facing vaginas, lack of a penis bone, the length of our limbs and the retention of the hymen into adulthood.

[43] Thus, anthropologist Carleton Coon, in Racial Adaptations: A Study of the Origins, Nature, and Significance of Racial Variations in Humans, does not even consider sexual selection as an explanation for the evolution of Khoisan steatopygia, despite their obviously dimorphic presentation. Instead, he proposes:

“[Bushman’s] famous steatopygia (fat deposits that contain mostly fibrous tissue) may be a hedge against scarce nutrients and drought during pregnancy and lactation” (Racial Adaptations: p105).

[44] Others, however, notably Desmond Morris in The Naked Ape (which I have reviewed here and here), have implicated sexual selection in the evolution of the human female’s permanent breasts. The two hypotheses are not, however, mutually exclusive. Indeed, they may be complementary. Thus, Nancy Etcoff in Survival of the Prettiest (which I have reviewed here and here) proposes that breasts may be perceived as attractive by men precisely because they honestly “advertise the presence of the fat reserves needed to sustain a pregnancy” (Survival of the Prettiest: p187). By analogy, the same could, of course, also be true of fatty buttocks.

[45] Thus, Baker demands rhetorically:

“Who could conceivably fail to distinguish between a Sanid and a Europid, or between an Eskimid [Eskimo] and a Negritid [Negrito], or between a Bambutid (African Pygmy) or an Australid [Australian Aboriginal]?”

[46] Baker does discuss the performance of East Asians on IQ tests, but his conclusions are ambivalent (p490-492). He concludes, for example, “the IQs of Mongolid [i.e. East Asian] children in North America are generally found to be about the same as those of Europids” (p490). Yet recent studies have revealed a slight advantage for East Asians in general intelligence. Baker also mentions the relatively higher scores of East Asians on tests of spatio-visual ability, as compared to verbal ability. However, he attributes this to their lack of proficiency in the language of their host culture, as he relied mostly on American studies of first- and second-generation immigrants, or the descendants of immigrants, who were often raised in non-English-speaking homes and hence only learnt English as a second language (p490). However, recent studies suggest that East Asians score relatively lower on verbal ability, as compared to their scores on spatio-visual ability, even when tested in a language in which they are wholly proficient (see Race Differences in Intelligence: reviewed here).

[47] Rushton and Jensen (2005) favour the hereditarian hypothesis vis-à-vis race differences in intelligence, and their presentation of the evidence is biased somewhat in this direction. Nisbett’s rejoinder therefore provides a good balance, being very much biased in the opposite direction. Mackintosh’s chapter is perhaps more balanced, but he still clearly favours an environmental explanation with regard to population differences in intelligence, if not with regard to individual differences.

[48] Indeed, in proposing tenable environmental-geographical explanations for the rise and fall of civilizations in different parts of the world, Jared Diamond’s Guns, Germs and Steel represents a substantial challenge to Baker’s conclusions in this chapter, and the two books are well worth reading together. Another recent work addressing the question of why civilizations rise and fall among different races and peoples, but reaching less politically correct conclusions, is Michael Hart’s Understanding Human History, which seems to have been conceived as a rejoinder to Diamond, drawing heavily upon, but also criticizing, the former work.

[49] Interestingly, Baker quotes Toynbee as suggesting that:

“An ‘identifying mark’ (but not a definition) [of] civilization might be equated with ‘a state of society in which there is a minority of the population, however small, that is free from the task, not merely of producing food, but of engaging in any other form of economic activities – e.g. industry or trade’” (p508).

Yet a Marxist would view this, not as a marker of civilization, but rather of exploitation. Those free from engaging in economic activity are, from a Marxist perspective, clearly extracting surplus value, and hence exploiting the labour of others. Toynbee presumably had in mind the idle rich or leisure class, as well perhaps as those whom the latter patronize, e.g. artists, though the latter, if paid for their work, are surely engaging in a form of economic activity, as indeed are the patrons who subsidize them. (Indeed, even the idle rich or leisure class engage in economic activity, if only as consumers.) However, this criterion, at least as described by Baker, is at least as capable of applying to the opposite end of the social spectrum – i.e. the welfare-dependent underclass. Did Toynbee really intend to suggest that the existence of the long-term unemployed is a distinctive marker of civilization? If so, is Baker really agreeing with him?

[50] The full list of criteria for civilization provided by Baker is as follows:

  1. “In the ordinary circumstances of life in public places they cover the external genitalia and greater part of the trunk with clothes” (p507);
  2. “They keep the body clean and take care to dispose of its waste elements” (p507);
  3. “They do not practice severe mutilation or deformation of the body” (p507);
  4. “They have knowledge of building in brick or stone, if the necessary materials are available in their territory” (p507);
  5. “Many of them live in towns or cities, which are linked by roads” (p507);
  6. “They cultivate food plants” (p507);
  7. “They domesticate animals and use some of the larger ones for transport, if suitable species are available” (p507);
  8. “They have knowledge of the use of metals, if these are available” (p507);
  9. “They use wheels” (p507);
  10. “They exchange property by the use of money” (p507);
  11. “They order their society by a system of laws, which are enforced in such a way that they ordinarily go about their various concerns in times of peace without danger of attack or arbitrary arrest” (p507);
  12. “They permit accused people to defend themselves and call witnesses” (p507);
  13. “They do not use torture to extract information or punishment” (p507);
  14. “They do not practice cannibalism” (p507);
  15. “The religious systems include ethical elements and are not purely or grossly superstitious” (p507);
  16. “They use a script… to communicate ideas” (p507);
  17. “There is some facility in the abstract use of numbers, without consideration of actual objects” (p507);
  18. “A calendar is in use” (p508);
  19. “[There are] arrangements for the instruction of the young in intellectual matters” (p508);
  20. “There is some appreciation of the fine arts” (p508);
  21. “Knowledge and understanding are valued as ends in themselves” (p508).

[51] Actually, some of the criteria include both technological and moral elements. For example, the second requirement, namely that the culture in question “keep the body clean and take care to dispose of its waste elements”, at first seems a purely moral requirement. However, the disposal of sewage is not only essential for the maintenance of healthy populations living at high levels of population density, but also often involves impressive feats of engineering (p507). Similarly, the requirement that some people “live in towns or cities” seems quite arbitrary. However, to sustain populations at the high population density required in towns and cities usually requires substantial technological, not to mention social and economic, development. Likewise, the building and maintenance of roads linking these settlements, also mentioned by Baker as part of the same criterion, is a technological achievement, often requiring, like the building of facilities for sewage disposal, substantial coordination of labour.

[52] Indeed, even the former Bishop of Edinburgh apparently agrees (see his book, Godless Morality: Keeping Religion out of Ethics). The classic thought-experiment used to demonstrate that morality does not derive from God’s commandments is to ask devout believers whether, if, instead of commanding ‘Thou shalt not kill’, God had instead commanded ‘Thou shalt kill’, they would then consider killing a moral obligation. Most people, including devout believers, apparently concede that they would not. In fact, however, the thought-experiment is not as hypothetical as many moral philosophers, and many Christians, seem to believe, as various passages in the Bible do indeed command mass killing and genocide (e.g. Deuteronomy 20:16-17; 1 Samuel 15:3; Deuteronomy 20:13-14), and indeed rape too (Numbers 31:18).

[53] For example, in IQ and Racial Differences (1973), former president of the American Psychological Association and confirmed racialist Henry E. Garrett claims:

“Until the arrival of Europeans there was no literate civilization in the continent’s black belt. The Negro had no written language, no numerals, no calendar, no system of measurement. He never developed a plow or wheel. He never domesticated any animal. With the rarest exceptions, he built nothing more elaborate than mud huts and thatched stockades” (IQ and Racial Differences: p2).

[54] These explorers included David Livingstone, the famous missionary, and Francis Galton, the infamous eugenicist, celebrated statistician and all-round Victorian polymath, in addition to Henry Francis Flynn, Paul Du Chaillu, John Hanning Speke, Samuel Baker (the author John R Baker’s own grand-uncle) and Georg August Schweinfurth (p343).

[55] This, of course, depends on precisely how we define the words ‘machine’ and ‘mechanical’. Thus, many authorities, especially military historians, class the simple bow as the first true ‘machine’. However, the only indigenous peoples known to lack even the bow and arrow at the time of their first contact with Europeans were the Aboriginal peoples of Australia and Tasmania.

[56] With regard to the ruins of Great Zimbabwe, Baker emphasizes that “the buildings in question are in no sense houses; the great majority of them are simply walls” (p402). Nor do they appear to have been part of a two-storey building (p402). Unlike some other racialist authors, who have attributed their construction to the possibly part-Jewish Lemba people, Baker attributes their construction and design to indigenous Africans (p405). However, he suggests their anomalous nature reflects the fact that they were constructed in (crude) imitation of buildings erected outside the “secluded area” of Africa by non-Negro peoples with whom the former were in a trading relationship (p407-8). This would explain why the structures, though impressive by the standards of other constructions within the “secluded zone” of Africa from the same time-period, where buildings of brick or stone were rare and tended to be on a much smaller scale (so impressive, indeed, that, in the years since Baker’s book was published, they have even had an entire surrounding country named after them), are, by European or Middle Eastern standards of the same period, quite shoddy. Baker also emphasizes:

“The splendour and ostentation were made possible by what was poured into the country from foreign lands. One must acknowledge the administrative capacity of the rulers, but may question the utility of the ends to which much of it was put” (p409).

[57] Several plants seem to have been domesticated in the Sahel region and the Horn of Africa, both of which are part of sub-Saharan Africa. However, these areas lie outside of what Baker calls the “secluded area”, as I understand it. Also, populations from the Horn of Africa are, according to Baker, predominantly Caucasoid (p225).

[58] The sole domestic animal that was perhaps first domesticated by black Africans is the guineafowl. Guineafowl are found wild throughout sub-Saharan Africa, but not elsewhere. It has therefore been argued, plausibly enough, that it was first domesticated in sub-Saharan Africa. However, Baker reports that the nineteenth-century explorers whose work he relies on “nowhere mention its being kept as a domestic animal by Negrids” (p375). Instead, he proposes it was probably first domesticated in Ethiopia, which lies outside the “secluded area” as defined by Baker, and whose population is, according to Baker, predominantly Caucasoid (p225). However, he admits that there is no “early record of tame guinea-fowl in Ethiopia” (p375).

[59] This may partly be because, on other continents, the numbers of many wild mammalian species have been depleted in recent times (e.g. wolves have been driven to extinction in Britain and Ireland, bison to the verge of extinction in North America). However, it is likely that Africa had a comparatively large number of large wild mammalian species even in ancient times. This is because, outside of Africa (notably in the Americas), many wild mammals were wiped out by the sudden arrival of humans with their formidable hunting skills, to whom the indigenous fauna were wholly unadapted. Africa, however, is where humans first evolved. There, prey species will have gradually evolved fear and avoidance of humans at the same time as humans themselves first evolved to become formidable hunters. Thus, Africa, unlike other continents, never experienced a sudden influx of human hunters to whom its prey species were wholly unadapted, and it therefore retains many large wild game animals into modern times.

[60] Of course, rather conveniently for Diamond’s theory, the wild ancestors of many modern domesticated animals, including horses and aurochs, are now extinct, so we have no way of directly assessing their temperament. However, we have every reason to believe that aurochs, at least, posed a far more formidable obstacle to domestication than does the zebra.

[61] Actually, a currently popular theory of the domestication of wolves/dogs holds that humans did not so much domesticate wolves/dogs as wolves/dogs domesticated themselves.

[62] Aurochs, and contemporary domestic cattle, also evince another trait that, according to Diamond, precludes their domestication – namely, it is not usually possible to keep two adult males of this species in the same field enclosure. Yet, according to Diamond, the “social antelope species for which Africa is famous” could not be domesticated because:

“The males of [African antelope] herds space themselves into territories and fight fiercely with one another when breeding. Hence, those antelope cannot be maintained in crowded enclosures in captivity” (Guns, Germs and Steel: p174).

Evidently, the ancient Eurasians who successfully domesticated the auroch never got around to reading Diamond’s critically acclaimed bestseller. If they had, they could have learnt in advance that the project was hopeless, and hence saved themselves the time and effort. It is fortunate for us that they did not.

[63] With regard to the racial affinities of the ancient Egyptians, a source of some controversy in recent years, Baker concludes that, contrary to the since-popularized Afrocentrist Black Athena hypothesis, the ancient Egyptians were predominantly, but not wholly, Caucasoid, and that “the Negrid contribution to Egyptian stock was a small one” (p518). Indeed, there is presumably little doubt on this question, since, according to Baker, there is an abundance of well-preserved skulls from Egypt, not least due to the practice of mummifying corpses and thus:

“More study has been devoted to the craniology of ancient Egypt than to that of any other country in the world” (p517).

From such data, Baker reports:

“Morant showed that all the sets of ancient Egyptian skulls that he analysed statistically were distinguishable by each of six criteria from Negrid skulls” (p518).

For what it’s worth, this conclusion is also corroborated by their self-depiction in artwork:

“In their monuments the dynastic Egyptians represented themselves as having a long face, pointed chin with scanty beard, a straight or somewhat aquiline nose, black irises, and a reddish-brown complexion” (p518).

Similarly, in Race: the Reality of Human Differences (reviewed here, here and here), Sarich and Miele, claiming that Egyptian monuments are “not mere portraits but an attempt at classification”, report that the Egyptians painted themselves as “red”, Asiatics or Semites as “yellow”, “Southerns or Negroes” as “black”, and “Libyans, Westerners or Northerners” as “white, with blue eyes and fair beards” (Race: the Reality of Human Differences: p33).
Thus, if not actually black, neither were the ancient Egyptians exactly white, as implausibly claimed by contemporary Nordicist Arthur Kemp in his books, Children of Ra: Artistic, Historical, and Genetic Evidence for Ancient White Egypt and March of the Titans: The Complete History of the White Race.
In the latter work, Kemp contends that the ancient Egyptians were originally white, being part-Mediterranean (the Mediterranean race itself being now largely extinct, according to Kemp), but governed by a Nordic elite. Over time, however, he contends, they interbred with imported black African slaves and Semitic populations from the Middle East, and, as the population was thus gradually transformed, Egyptian civilization degenerated.
This is, of course, a version of de Gobineau’s infamous theory that great empires inevitably decline because, through their imperial conquests, they subjugate, and hence ultimately interbreed with, the supposedly inferior peoples whom they have conquered (as well as with inward migrants attracted by higher living standards), such interbreeding diluting the very racial qualities that permitted their original imperial glories.
Interestingly, consistent with Kemp’s theory, there is indeed some evidence of an increase in the proportion of sub-Saharan African ancestry in Egypt since ancient times (Schuenemann et al 2017).
However, this same study demonstrating an increase in the proportion of sub-Saharan African ancestry in Egypt also showed that, contrary to Kemp’s theory, Egyptian populations always had close affinities to Middle Eastern populations (including Semites), and, in fact, owing to the increase in sub-Saharan African ancestry, and despite the Muslim conquest, actually had closer affinities to Near Eastern populations in ancient times than they do today (Schuenemann et al 2017).
Importantly, this study was based on DNA extracted from mummies, and, since mummification was a costly procedure that was almost always restricted to the wealthy, it therefore indicates that even the Egyptian elite were far from Nordic even in ancient times, as implausibly claimed by Kemp.
To his credit, Kemp does indeed amass some remarkable photographic evidence of Egyptian tomb paintings and monuments depicting figures, intended, according to Kemp, to represent the Egyptians themselves, with blue eyes and light hair and complexions.
Admitting that Egyptian men were often depicted with reddish skin, he dismisses this as an artistic convention:

“It was a common artistic style in many ancient Mediterranean cultures to portray men with red skins and women with white skins. This was done, presumably, to reflect the fact that the men would have been outside working in the fields” (Children of Ra: p33).

Actually, according to anthropologist Peter Frost, this artistic convention reflects real and innate differences, as well as differing sexually selected ideals of male and female beauty (see Dark Men, Fair Women).
Most interestingly, Kemp also includes photographs of some Egyptian mummies, including Ramses II, apparently with light-coloured hair. 
At first, I suspected this might reflect loss of pigmentation owing to the process of decay occurring after death, or perhaps to some chemical process involved in mummification.
Robert Brier, an expert on mummification, confirms that Ramses’s “strikingly blond” hair was indeed a consequence of its having been “dyed as a final step in the mummification process so that he would be young forever” (The Encyclopedia of Mummies: p153). However, he also reports in the next sentence that:

“Microscopic inspection of the roots of the hair revealed that Ramses was originally a redhead” (The Encyclopedia of Mummies: p153).

Brier also confirms, again as claimed by Kemp, that one especially ancient predynastic mummy, displayed in the British Museum, was indeed nicknamed ‘Ginger’ on account of its hair colour (The Encyclopedia of Mummies: p64). However, whether this was the natural hair colour of the person when alive is not clear.
At any rate, even if both Ginger and Ramses the Great were indeed natural redheads, in this respect they appear to have been very much the exception rather than the rule. Thus, Baker himself reports that:

“It would appear that their head-hair was curly, wavy, or almost straight, and very dark brown or black” (p518).

This conclusion is again based on the evidence of their mummies, and, since mummification was a costly procedure largely restricted to the wealthy, it again contradicts Kemp’s notion of a ‘Nordic elite’ ruling ancient Egypt. On this and other evidence, Baker therefore concludes:

“There is general agreement… that the Europid element in the Egyptians from predynastic times onwards has been primarily Mediterranid, though it is allowed that Orientalid immigrants from Arabia made a contribution to the stock” (p518).

In short, the ancient Egyptians, including Pharaohs and other elites, though certainly not black, were not exactly white either, and certainly far from Nordic. Despite the increase in sub-Saharan African ancestry and the probable further influx of Middle Eastern DNA owing to the Muslim conquest, they probably resembled modern Egyptians, especially the indigenous Copts.

[64] The same is true of the earlier runic alphabets of the Germanic peoples, the Paleohispanic scripts of the Iberian peninsula, and presumably also of the undeciphered Linear A script that was in use at the outer edge of the European continent during the Bronze Age.

[65] Writing appears to have been developed first in Mesopotamia, then shortly afterwards in Egypt (though some Egyptologists claim priority on behalf of Egypt). However, the relative geographic proximity of these two civilizations, their degree of contact with one another, and the coincidence in time, make it likely that one writing system was copied from the other. Writing then seems to have been independently developed in China, and was also developed, almost certainly entirely independently, in Mesoamerica. Other possible candidates for the independent development of writing include the Indus Valley civilization and Easter Island, though, since neither script has been deciphered, it is not clear that they represent true writing systems, and the Easter Island script has also yet to be reliably dated.

[66] Actually, it is now suggested that both the Mayans and Indians may have been beaten to this innovation by the Babylonians, although, unlike the later Indians and Muslims, neither the Mayans nor the Babylonians went on to take full advantage of it by developing mathematics in the way it made possible. For this, it is Indian civilization that deserves credit. The invention of the concept by the Maya and the Babylonians was, of course, entirely independent of one another, but the Indians, the Islamic civilization and other Eurasian civilizations probably inherited the concept ultimately from Babylonia.

[67] Interestingly, this excuse is not available in Africa. There, large mammals survived, probably because, since Africa was where humans first evolved, prey species evolved in concert with humans, and hence gradually evolved to fear and avoid humans, at the same time as humans themselves gradually evolved to be formidable predators. In contrast, the native species of the Americas would have been totally unprepared to protect themselves from human hunters, to whom they were completely ill-adapted, owing to the late, and, in evolutionary terms, sudden, peopling of the continent. This may be why, to this day, Africa has more large animals than any other continent.

[68] Baker also uses the complexity of a people’s language in order to assess their intelligence. Today, there seems to be an implicit assumption among many linguists that all languages are equal in their complexity. Thus, American linguists rightly emphasize the subtlety and complexity of, for example, African-American vernacular, which is by no means merely an impoverished or corrupted version of standard English, but rather has grammatical rules all of its own, which often convey information that is lost on white Americans not conversant in this dialect. However, there is no a priori reason to assume that all languages are equal in their capacity to express complex and abstract ideas. The size of vocabularies, for example, differs between languages, as does the number of different tenses that are recognised. For example, the Walpiri language of some Australian Aboriginals is said to have only a few number terms, namely words for just ‘one’, ‘two’ and ‘many’, while the Pirahã language of indigenous South Americans is said to get by with no number terms at all. Thus, Baker contends that certain languages, notably the Arunta language of indigenous Australians, as studied by Alf Sommerfelt, and also the Akan language of Africa, are inherently impoverished in their capacity to express abstract thought. He may well be right.

________________________


Peter Singer’s ‘A Darwinian Left’

Peter Singer, ‘A Darwinian Left: Politics, Evolution and Cooperation’ (London: Weidenfeld & Nicolson, 1999).

Social Darwinism is dead. 

The idea that charity, welfare and medical treatment ought to be withheld from the poor, the destitute and the seriously ill, so that they perish in accordance with the process of natural selection and thereby facilitate further evolutionary progress, survives only as a straw man sometimes attributed to conservatives by leftists in order to discredit them, and as a form of guilt by association sometimes invoked by creationists in order to discredit the theory of evolution.[1]

However, despite the attachment of many American conservatives to creationism, there remains a perception that evolutionary psychology is somehow right-wing.

Thus, if humans are fundamentally selfish, as Richard Dawkins is taken, not entirely accurately, to have argued, then this surely confirms the underlying assumptions of classical economics. 

Of course, as Dawkins also emphasizes, we have evolved through kin selection to be altruistic towards our close biological relatives. However, this arguably only reinforces conservatives’ faith in the family, and their concerns regarding the effects of family breakdown and substitute parents.

Finally, research on sex differences surely suggests that at least some traditional gender roles – e.g. women’s role in caring for young children, and men’s role in fighting wars – do indeed have a biological basis, and also that patriarchy and the gender pay gap may be an inevitable result of innate psychological differences between the sexes.

Political scientist Larry Arnhart thus champions what he calls a new ‘Darwinian Conservatism’, which harnesses the findings of evolutionary psychology in support of family values and the free market. 

Against this, however, moral philosopher and famed animal liberation activist Peter Singer, in ‘A Darwinian Left’, seeks to reclaim Darwin, and evolutionary psychology, for the Left. His attempt is not entirely successful. 

The Naturalistic Fallacy 

At least since David Hume, it has been an article of faith among most philosophers that one cannot derive values from facts. To do otherwise is to commit what some philosophers refer to as the naturalistic fallacy.

Edward O Wilson, in Sociobiology: The New Synthesis, was widely accused of committing the naturalistic fallacy, by attempting to derive moral values from facts. However, those evolutionary psychologists who followed in his stead have generally taken a very different line.

Indeed, recognition that the naturalistic fallacy is indeed a fallacy has proven very useful to evolutionary psychologists, since it has enabled them to investigate the possible evolutionary functions of such morally questionable (or indeed downright morally reprehensible) behaviours as infidelity, rape, warfare and child abuse, while at the same time denying that they are somehow thereby providing a justification for the behaviours in question.[2]

Singer, like most evolutionary psychologists, also reiterates the sacrosanct inviolability of the fact-value dichotomy.

Thus, in attempting to construct his ‘Darwinian Left’, Singer does not attempt to use Darwinism in order to provide a justification or ultimate rationale for leftist egalitarianism. Rather, he simply takes it for granted that equality is a good thing and worth striving for, and indeed implicitly assumes that his readers will agree. 

His aim, then, is not to argue that socialism is demanded by a Darwinian worldview, but rather simply that it is compatible with such a worldview and not contradicted by it. 

Thus, he takes leftist ideals as his starting-point, and attempts to argue only that accepting the Darwinian worldview should not cause one to abandon these ideals as either undesirable or unachievable. 

But if we accept that the naturalistic fallacy is indeed a fallacy then this only raises the question: If it is indeed true that moral values cannot be derived from scientific facts, whence can moral values be derived?  

Can they only be derived from other moral values? If so, how are our ultimate moral values, from which all other moral values are derived, themselves derived? 

Singer does not address this. However, precisely by failing to address it, he seems to implicitly assume that our ultimate moral values must simply be taken on faith. 

However, Singer also emphasizes that rejecting the naturalistic fallacy does not mean that the facts of human nature are irrelevant to politics. 

On the contrary, while Darwinism may not prescribe any particular political goals as desirable, it may nevertheless help us determine how to achieve those political goals that we have already decided upon. Thus, Singer writes: 

“An understanding of human nature in the light of evolutionary theory can help us to identify the means by which we may achieve some of our social and political goals… as well as assessing the possible costs and benefits of doing so” (p15). 

Thus, in a memorable metaphor, Singer observes: 

“Wood carvers presented with a piece of timber and a request to make wooden bowls from it do not simply begin carving according to a design drawn up before they have seen the wood. Instead they will examine the material with which they are to work and modify their design in order to suit its grain… Those seeking to reshape human society must understand the tendencies inherent within human beings, and modify their abstract ideals in order to suit them” (p40). 

Abandoning Utopia? 

In addition to suggesting how our ultimate political objectives might best be achieved, an evolutionary perspective also suggests that some political goals might simply be unattainable, at least in the absence of a wholesale eugenic reengineering of human nature itself. 

In watering down the utopian aspirations of previous generations of leftists, Singer seems to implicitly concede as much. 

Contrary to the crudest misunderstanding of selfish gene theory, humans are not entirely selfish. However, we have evolved to put our own interests, and those of our kin, above those of other humans. 

For this reason, communism is unobtainable because: 

  1. People strive to promote themselves and their kin above others; 
  2. Only a coercive state apparatus can prevent them from so doing; 
  3. The individuals in control of this coercive apparatus themselves seek to promote their own interests and those of their kin, and corruptly use this apparatus to do so. 

Thus, Singer laments: 

“What egalitarian revolution has not been betrayed by its leaders?” (p39). 

Or, alternatively, as HL Mencken put it:

“[The] one undoubted effect [of political revolutions] is simply to throw out one gang of thieves and put in another.” 

In addition, human selfishness suggests that, if complete egalitarianism were ever successfully achieved and enforced, it would likely be economically inefficient – because it would remove the incentive of self-advancement that lies behind the production of goods and services, not to mention of works of art and scientific advances. 

Thus, as Adam Smith famously observed: 

“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” 

And, again, the only other means of ensuring goods and services are produced besides economic self-interest is state coercion, which, given human nature, will always be exercised both corruptly and inefficiently. 

What’s Left? 

Singer’s pamphlet has been the subject of much controversy, with most of the criticism coming, not from conservatives, whom one might imagine to be Singer’s natural adversaries, but rather from other self-described leftists. 

These leftist critics have included both writers opposed to evolutionary psychology (e.g. David Stack in The First Darwinian Left) and writers claiming to be broadly receptive to the new paradigm but clearly uncomfortable with some of its implications (e.g. Marek Kohn in As We Know It: Coming to Terms with an Evolved Mind). 

In apparently rejecting the utopian transformation of society envisioned by Marx and other radical socialists, Singer has been accused by other leftists of conceding rather too much to the critics of leftism. In so doing, they claim, he has in effect abandoned leftism in all but name and become an apologist for, and sell-out to, capitalism.

Whether Singer can indeed be said to have abandoned the Left depends, of course, on precisely how we define ‘the Left’, a rather more problematic matter than it is usually regarded as being.[3]

For his part, Singer certainly defines the Left in unusually broad terms.

For Singer, leftism need not necessarily entail taking the means of production into common ownership, nor even the redistribution of wealth. Rather, at its core, being a leftist is simply about being: 

“On the side of the weak, not the powerful; of the oppressed, not the oppressor; of the ridden, not the rider” (p8). 

However, this definition is obviously problematic. After all, few conservatives would admit to being on the side of the oppressor. 

On the contrary, conservatives and libertarians usually reject the dichotomous subdivision of society into ‘oppressed’ and ‘oppressor’ groups. They argue that the real world is more complex than this simplistic division of the world into black and white, good and evil, suggests. 

Moreover, they argue that mutually beneficial exchange and cooperation, rather than exploitation, is the essence of capitalism. 

They also usually claim that their policies benefit society as a whole, including both the poor and rich, rather than favouring one class over another.[4]

Indeed, conservatives claim that socialist reforms often actually inadvertently hurt precisely those whom they attempt to help. Thus, for example, welfare benefits are said to encourage welfare dependency, while introducing, or raising the level of, a minimum wage is said to lead to increases in unemployment. 

Singer declares that a Darwinian left would “promote structures that foster cooperation rather than competition” (p61).

Yet many conservatives would share Singer’s aspiration to create a more altruistic culture. 

Indeed, this aspiration seems more compatible with the libertarian notion of voluntary charitable donations replacing taxation than with the coercively-extracted taxes invariably favoured by the Left. 

Nepotism and Equality of Opportunity 

Yet selfish gene theory suggests humans are not entirely self-interested. Rather, kin selection makes us care also about our biological relatives.

But this is no boon for egalitarians. 

Rather, the fact that our selfishness is tempered by a healthy dose of nepotism likely makes equality of opportunity as unattainable as equality of outcome – because individuals will inevitably seek to aid the social, educational and economic advancement of their kin, and those individuals better placed to do so will enjoy greater success in so doing. 

For example, parents with greater resources will be able to send their offspring to exclusive fee-paying schools or obtain private tuition for them; parents with better connections may be able to help their offspring obtain better jobs; while parents with greater intellectual ability may be able to better help their offspring with their homework. 

However, since many conservatives and libertarians are as committed to equality of opportunity as socialists are to equality of outcome, this conclusion may be as unwelcome on the right as on the left. 

Indeed, the theory of kin selection has even been invoked to suggest that ethnocentrism is innate and ethnic conflict is inevitable in multi-ethnic societies, a conclusion unwelcome across the mainstream political spectrum in the West today, where political parties of all persuasions are seemingly equally committed to building multi-ethnic societies. 

Unfortunately, Singer does not address any of these issues. 

Animal Liberation After Darwin 

Singer is most famous for his advocacy on behalf of what he calls animal liberation.

In ‘A Darwinian Left’, he argues that the Darwinian worldview reinforces the case for animal liberation by confirming the evolutionary continuity between humans and other animals. 

This suggests that there are unlikely to be fundamental differences in kind as between humans and other animals (e.g. in the capacity to feel pain) sufficient to justify the differences in treatment currently accorded humans and animals. 

This sharply contrasts with the account of creation in the Bible and with the traditional Christian notion of humans as superior to other animals, occupying an intermediate position between beasts and angels. 

Thus, Singer concludes: 

“By knocking out the idea that we are a separate creation from the animals, Darwinian thinking provided the basis for a revolution in our attitudes to non-human animals” (p17). 

This makes our consumption of animals as food, our killing of them for sport, our enslavement of them as draft animals, or even pets, and our imprisonment of them in zoos and laboratories all ethically suspect, since these are not things generally permitted in respect of humans. 

Yet Singer fails to recognise that human-animal continuity cuts two ways. 

Thus, anti-vivisectionists argue that animal testing is not only immoral, but also ineffective, because drugs and other treatments often have very different effects on humans than they do on the animals used in drug testing. 

Our evolutionary continuity with non-human species makes this argument less plausible. 

Moreover, if humans are subject to the same principles of natural selection as other species, this suggests, not the elevation of animals to the status of humans, but rather the relegation of humans to just another species of animal. 

In short, we do not occupy a position midway between beasts and angels; we are beasts through and through, and any attempt to believe otherwise is mere delusion. 

This is, of course, the theme of John Gray’s powerful polemic Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here, here, here and here). 

Finally, acceptance of the existence of human nature surely entails recognition of carnivory as a part of that nature. 

Of course, we must remember not to commit the naturalistic or appeal to nature fallacy.  

Thus, just because meat-eating may be natural for humans, in the sense that meat was a part of our ancestors’ diet in the EEA, this does not necessarily mean that it is morally right or even morally justifiable. 

However, the fact that meat is indeed a natural part of the human diet does suggest that, in health terms, vegetarianism is likely to be nutritionally sub-optimal. 

Thus, the naturalistic fallacy or appeal to nature fallacy is not always entirely fallacious, at least when it comes to human health. What is natural for humans is indeed what we are biologically adapted to and what our body is therefore best designed to deal with.[5]

Moreover, given that Singer is an opponent of the view that there is a valid moral distinction between acts and omissions, then we must ask ourselves: If he believes it is wrong for us to eat animals, does he also believe we should take positive measures to prevent lions from eating gazelles? 

Economics 

Bemoaning the emphasis of neoliberals on purely economic outcomes, Singer protests:

“From an evolutionary perspective, we cannot identify wealth with self-interest… Properly understood self-interest is broader than economic self-interest” (p42). 

Singer is right. The ultimate currency of natural selection is not wealth, but rather reproductive success – and, in evolutionarily novel environments, wealth may not even correlate with reproductive success (Vining 1986). 

Thus, as discussed by Laura Betzig in Despotism and Differential Reproduction, a key difference between Marxism and sociobiology is the relative emphasis on production versus reproduction

Whereas Marxists see societal conflict and exploitation as reflecting competition over control of the means of production, for Darwinians, all societal conflict ultimately concerns control over, not the means of production, but rather what we might term the means of reproduction – in other words, women, their wombs and vaginas.

Thus, sociologist-turned-sociobiologist Pierre van den Berghe observed: 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Production is ultimately, in Darwinian terms, merely a means by which to gain the resources necessary to permit successful reproduction. The latter is the ultimate purpose of life. 

Thus, for all his ostensible radicalism, Karl Marx, in his emphasis on economics (‘production’) at the expense of sex (‘reproduction’), was just another Victorian sexual prude.

Competition or Cooperation: A False Dichotomy? 

In Chapter Four, entitled “Competition or Cooperation?”, Singer argues that modern western societies, and many modern economists and evolutionary theorists, put too great an emphasis on competition at the expense of cooperation. 

Singer accepts that both competition and cooperation are natural and innate facets of human nature, and that all societies involve a balance of both. However, different societies differ in their relative emphasis on competition or cooperation, and it is therefore possible to create a society that places a greater emphasis on the latter at the expense of the former. 

Thus, Singer declares that a Darwinian left would: 

“Promote structures that foster cooperation rather than competition” (p61). 

However, Singer is short on practical suggestions as to how a culture of altruism is to be fostered.[6]

Changing the values of a culture is not easy. This is especially so for a liberal democratic (as opposed to a despotic, totalitarian) government, let alone for a solitary Australian moral philosopher – and Singer’s condemnation of “the nightmares of Stalinist Russia” suggests that he would not countenance the sort of totalitarian interference with human freedom to which the Left has so often resorted in the past, with little ultimate success, and continues to resort to in the present, even in the West. 

But, more fundamentally, Singer is wrong to see competition as necessarily in conflict with cooperation. 

On the contrary, perhaps the most remarkable acts of cooperation, altruism and self-sacrifice are those often witnessed in wartime (e.g. kamikaze pilotssuicide bombers and soldiers who throw themselves on grenades). Yet war represents perhaps the most extreme form of competition known to man. 

In short, soldiers risk and sacrifice their lives, not only to save the lives of others, but also to take the lives of others. 

Likewise, trade is a form of cooperation, yet is as fundamental to capitalism as is competition. Indeed, I suspect most economists would argue that exchange is even more fundamental to capitalism than is competition. 

Thus, far from disparaging cooperation, neoliberal economists see voluntary exchange as central to prosperity. 

Ironically, popular science writer Matt Ridley also, like Singer, invokes humans’ innate capacity for cooperation to justify political conclusions, in his book The Origins of Virtue.

But, for Ridley, our capacity for cooperation provides a rationale, not for socialism, but rather for free markets – because humans, as natural traders, produce efficient systems of exchange which government intervention almost always only distorts. 

However, whereas economic trade is motivated by self-interested calculation, Singer seems to envisage a form of reciprocity mediated by emotions such as compassion, gratitude and guilt.
 
Yet, in the very paper that introduced the concept of reciprocal altruism to evolutionary biology, sociobiologist Robert Trivers argued that these emotions themselves evolved through the rational calculation of natural selection (Trivers 1971). 

Therefore, while open to manipulation, especially in evolutionarily novel environments, they are necessarily limited in scope. 

Group Differences 

Singer’s envisaged ‘Darwinian Left’ would, he declares, unlike the contemporary left, abandon: 

“[The assumption] that all inequalities are due to discrimination, prejudice, oppression or social conditioning. Some will be, but this cannot be assumed in every case” (p61). 

Instead, Singer admits that at least some disparities in achievement may reflect innate differences between individuals and groups in abilities, temperament and preferences. 

This is probably Singer’s most controversial suggestion, at least for modern leftists, since it contravenes the contemporary dogma of political correctness.

Singer is, however, undoubtedly right.  

Moreover, his recognition that some differences in achievement as between groups reflect, not discrimination, oppression or even the lingering effect of past discrimination or oppression, but rather innate differences between groups in psychological traits, including intelligence, is by no means incompatible with socialism, or leftism, as socialism and leftism were originally conceived. 

Thus, it is worth pointing out that, while contemporary so-called ‘cultural Marxists’ may decry the notion of innate differences in ability and temperament as between different races, sexes, individuals and social classes as anathema, the same was not true of Marx himself.

On the contrary, in famously advocating “from each according to his ability, to each according to his need”, Marx implicitly recognized that people differed in “ability” – differences which, given the equalization of social conditions envisaged under communism, he presumably conceived of as innate in origin.[7]

As Hans Eysenck observes:

“Stalin banned mental testing in 1935 on the grounds that it was ‘bourgeois’—at the same time as Hitler banned it as ‘Jewish’. But Stalin’s anti-genetic stance, and his support for the environmentalist charlatan Lysenko, did not derive from any Marxist or Leninist doctrine… One need only recall The Communist Manifesto: ‘From each according to his ability, to each according to his need’. This clearly expresses the belief that different people will have different abilities, even in the communist heaven where all cultural, educational and other inequalities have been eradicated” (Intelligence: The Battle for the Mind: p85).

Thus, Steven Pinker, in The Blank Slate, points to the theoretical possibility of what he calls a “Hereditarian Left”, arguing for a Rawlsian redistribution of resources to the, if you like, innately ‘cognitively disadvantaged’.[8] 

With regard to group differences, Singer avoids the incendiary topic of race differences in intelligence, a question evidently too contentious to touch. 

Instead, he illustrates the possibility that not “all inequalities are due to discrimination, prejudice, oppression or social conditioning” with the marginally less incendiary case of sex differences.  

Here, it is sex differences, not in intelligence, but rather in temperament, preferences and personality that are probably more important, and likely explain occupational segregation and the so-called gender pay gap.

Thus, Singer writes: 

“If achieving high status increases access to women, then we can expect men to have a stronger drive for status than women” (p18). 

This alone, he implies, may explain both the universality of male rule and the so-called gender pay gap.

However, Singer neglects to mention another biological factor that is also probably important in explaining the gender pay gap – namely, women’s attachment to infant offspring. This factor, also innate and biological in origin, also likely impedes career advancement among women. 

Thus, it bears emphasizing that never-married women with no children actually earn more, on average, than do unmarried men without children of the same age in both Britain and America.[9]

For a more detailed treatment of the biological factors underlying the gender pay gap, see Biology at Work: Rethinking Sexual Equality by professor of law Kingsley Browne, which I have reviewed here and here.[10] See also my review of Warren Farrell’s Why Men Earn More, which can be found here, here and here.

Dysgenic Fertility Patterns? 

It is often claimed by conservatives that the welfare system only encourages the unemployed to have more children so as to receive more benefits and thereby promotes dysgenic fertility patterns. In response, Singer retorts:

“Even if there were a genetic component to something as nebulous as unemployment, to say that these genes are ‘deleterious’ would involve value judgements that go way beyond what the science alone can tell us” (p15).

Singer is, of course, right that an extra-scientific value judgement is required in order to label certain character traits, and the genes that contribute to them, as deleterious or undesirable. 

Indeed, if single mothers on welfare do indeed raise more surviving children than do those who are not reliant on state benefits, then this indicates that they have higher reproductive success, and hence, in the strict biological sense, greater fitness than their more financially independent, but less fecund, reproductive competitors. 

Therefore, far from being ‘deleterious’ in the biological sense, genes contributing to such behaviour are actually under positive selection, at least under current environmental conditions.  

However, even if such genes are not ‘deleterious’ in the strict biological sense, this does not necessarily mean that they are desirable in the moral sense, or in the sense of contributing to successful civilizations and societal advancement. To suggest otherwise would, of course, involve a version of the very appeal to nature fallacy or naturalistic fallacy that Singer is elsewhere emphatic in rejecting. 

Thus, although regarding certain character traits, and the genes that contribute to them, as undesirable does indeed involve an extra-scientific “value judgement”, this is not to say that the “value judgement” in question is necessarily mistaken or unwarranted. On the contrary, it means only that such a value judgement is, by its nature, a matter of morality, not of science. 

Thus, although science may be silent on the issue, virtually everyone would agree that some traits (e.g. generosity, health, happiness, conscientiousness) are more desirable than others (e.g. selfishness, laziness, depression, illness). Likewise, it is self-evident that the long-term unemployed are a net burden on society, and that a successful society cannot be formed of people unable or unwilling to work. 

As we have seen, Singer also questions whether there can be “a genetic component to something as nebulous as unemployment”. 

However, in the strict biological sense, unemployment probably is indeed partly heritable. So, incidentally, are road traffic accidents and our political opinions – because each reflects personality traits that are themselves heritable (e.g. risk-takers and people with poor physical coordination and slow reactions probably have more traffic accidents; and perhaps more compassionate people are more likely to favour leftist politics). 

Thus, while it may be unhelpful and misleading to talk of unemployment as itself heritable, nevertheless traits of the sort that likely contribute to unemployment (e.g. intelligence, conscientiousness, mental and physical illness) are indeed heritable.

Actually, however, the question of heritability, in the strict biological sense, is irrelevant. 

Thus, even if the reason that children from deprived backgrounds have worse life outcomes is entirely mediated by environmental factors (e.g. economic or cultural deprivation, or the bad parenting practices of low-SES parents), the case for restricting the reproductive rights of those people who are statistically prone to raise dysfunctional offspring remains intact. 

After all, children usually get both their genes and their parenting from the same set of parents – and this could be changed only by a massive, costly, and decidedly illiberal, policy of forcibly removing offspring from their parents.[11]

Therefore, so long as an association between parentage and social outcomes is established, the question of whether this association is biologically or environmentally mediated is simply beside the point, and the case for restricting the reproductive rights of certain groups remains intact.  

Of course, it is doubtful that welfare-dependent women do indeed financially benefit from giving birth to additional offspring. 

It is true that they may receive more money in state benefits if they have more dependent offspring to support and provide for. However, this may well be more than offset by the additional cost of supporting and providing for the dependent offspring in question, leaving the mother with less to spend on herself. 

However, even if the additional monies paid to mothers with dependent children are not sufficient to provide a positive financial incentive to bearing additional children, they at least reduce the financial disincentives otherwise associated with rearing additional offspring.  

Therefore, given that, from an evolutionary perspective, women probably have an innate desire to bear additional offspring, it follows that a rational fitness-maximizer would respond to the changed incentives represented by the welfare system by increasing her reproductive rate.[12]

A New Socialist Eugenics

If we accept Singer’s contention that an understanding of human nature can help show us how to achieve, but not choose, our ultimate political objectives, then eugenics could be used to help us achieve the goal of producing better people and hence, ultimately, better societies. 

Indeed, given that Singer seemingly concedes that human nature is presently incompatible with communist utopia, perhaps then the only way to revive the socialist dream of equality is to eugenically re-engineer human nature itself so as to make it more compatible. 

Thus, it is perhaps no accident that, before World War Two, eugenics was a cause typically associated, not with conservatives, nor even, as today, with fascism, but rather with the political left

Thus, early twentieth century socialist-eugenicists like H.G. Wells, Sidney Webb, Margaret Sanger and George Bernard Shaw may then have tentatively grasped what eludes contemporary leftists, Singer very much included – namely that re-engineering society necessarily requires as a prerequisite re-engineering Man himself.[13]

_________________________

Endnotes

[1] Indeed, the view that the poor and ill ought to be left to perish so as to further the evolutionary process seems to have been a marginal one even in its ostensible late nineteenth century heyday (see Bannister, Social Darwinism: Science and Myth in Anglo-American Social Thought). The idea always seems, therefore, to have been largely, if not wholly, a straw man.

[2] In this, the evolutionary psychologists are surely right. Thus, no one accuses biomedical researchers of somehow ‘justifying disease’ when they investigate how infectious diseases, in an effort to maximize their own reproductive success, spread from host to host. Likewise, nobody suggests that dying of a treatable illness is desirable, even though this may have been the ‘natural’ outcome before such ‘unnatural’ interventions as vaccination and antibiotics were introduced.

[3] The conventional notion that we can usefully conceptualize the political spectrum on a single left-right axis is obviously preposterous. For one thing, there is, at the very least, a quite separate liberal-authoritarian dimension. However, even restricting our definition of the left-right axis to purely economic matters, it remains multi-factorial. For example, Hayek, in The Road to Serfdom, classifies fascism as a left-wing ideology, because it involved big government and a planned economy. However, most leftists would reject this definition, since the planned economy in question was designed, not to reduce economic inequalities, but rather, in the case of Nazi Germany at least, to fund and sustain an expanded military force, a war economy, external military conquest and grandiose vanity public works architectural projects. The term ‘right-wing‘ is even more problematic, including everyone from fascists, to libertarians, to religious fundamentalists. Yet a Christian fundamentalist who wants to outlaw pornography and abortion has little in common with either a libertarian who wants to decriminalize prostitution and child pornography, or with a eugenicist who wants to make abortions, for certain classes of person, compulsory. Yet all three are classed together as ‘right-wing’ even though they share no more in common with one another than any does with a raving unreconstructed Marxist.

[4] Thus, the British Conservative Party traditionally styled themselves one-nation conservatives, who looked to the interests of the nation as a whole, rather than what they criticized as the divisive ‘sectionalism’ of the trade union and labour movements, which favoured certain economic classes, and workers in certain industries, over others, just as contemporary leftists privilege the interests of certain ethnic, religious and culturally-defined groups (e.g. blacks, Muslims, feminists) over others (i.e. white males).

[5] Of course, some ‘unnatural’ interventions have positive health benefits. Obvious examples are modern medical treatments such as penicillin, chemotherapy and vaccination. However, these are the exceptions. They have been carefully selected and developed by scientists to have this positive effect, have gone through rigorous testing to ensure that their effects are indeed beneficial, and are generally beneficial only to people with certain diagnosed conditions. In contrast, recreational drug use almost invariably has a negative effect on health.

[6] It is certainly possible for more altruistic cultures to exist. For example, the famous (and hugely wasteful) potlatch feasts of some Native American cultures exemplify a form of competitive altruism, analogous to conspicuous consumption, and may be explicable as a form of status display in accordance with Zahavi’s handicap principle. However, recognizing that such cultures exist does not easily translate into working out how to create or foster such cultures, let alone transform existing cultures in this direction.

[7]  Indeed, by modern politically-correct standards, Marx was a rampant racist, not to mention an anti-Semite

[8] The term Rawlsian is a reference to political theorist John Rawls’ version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the original position, where they would be unaware, not only of their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, of their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’. 

[9] As Warren Farrell documents in his excellent Why Men Earn More (which I have reviewed here, here and here), in the USA, women who have never married and have no children actually earn more than men who have never married and have no children and have done since at least the 1950s (Why Men Earn More: pxxi). More precisely, according to Farrell, never-married men without children on average earn only about 85% of their childless never-married female counterparts (Ibid: pxxiii). The situation is similar in the UK. Thus, economist JR Shackleton reports:

“Women in the middle age groups who remain single earn more than middle-aged single males” (Should We Mind the Gap? p30).

The reasons unmarried, childless women earn more than unmarried childless men are multifarious and include:

  1. Married women can afford to work less because they appropriate a portion of their husband’s income in addition to their own;
  2. Married men and men with children are thus obliged to earn even more so as to financially support, not only themselves, but also their wife, plus any offspring;
  3. Women prefer to marry richer men and hence poorer men are more likely to remain single;
  4. Childcare duties undertaken by women interfere with their earning capacity.

[10] Incidentally, Browne has also published a more succinct summary of the biological factors underlying the pay-gap in the same ‘Darwinism Today’ series as Singer’s ‘A Darwinian Left’, namely Divided Labors: An Evolutionary View of Women at Work. However, much though I admire Browne’s work, this represents a rather superficial popularization of his research on the topic, and I would recommend instead Browne’s longer Biology at Work: Rethinking Sexual Equality (reviewed here) for a more comprehensive treatment of the same, and related, topics. 

[11] A precedent for just such a programme, enacted in the name of socialism, albeit adopted consensually, was the communal child-rearing practice of the Israeli Kibbutzim, since largely abandoned. Another suggestion along rather different lines comes from Adolf Hitler, who, believing that nature trumped nurture, is quoted as proposing: 

“The State must also teach that it is the manifestation of a really noble nature and that it is a humanitarian act worthy of all admiration if an innocent sufferer from hereditary disease refrains from having a child of his own but bestows his love and affection on some unknown child whose state of health is a guarantee that it will become a robust member of a powerful community” (quoted in: Parfrey 1987: p162). 

[12] Actually, it is not entirely clear that women do have a natural desire to bear offspring. Other species probably have no such natural desire: since they are almost certainly not aware of the connection between sex and childbirth, such a desire would serve no adaptive purpose and hence would never evolve. All an organism requires is a desire for sex, combined perhaps with a tendency to care for offspring after they are born. (Indeed, in principle, a female does not even require a desire for sex, only a willingness to submit to the desire of a male for sex.) As Tooby and Cosmides emphasize: 

“Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.” 

There is no requirement for a desire for offspring as such. Nevertheless, anecdotal evidence of so-called broodiness, and the fact that most women do indeed desire children, despite the costs associated with raising children, suggests that, in human females, there is indeed some innate desire for offspring. Curiously, however, the topic of broodiness is not one that has attracted much attention among evolutionists.

[13] However, there is a problem with any such case for a ‘Brave New Socialist Eugenics’. Before the eugenic programme is complete, the individuals controlling eugenic programmes (be they governments or corporations) would still possess a more traditional human nature, and may therefore have less than altruistic motivations themselves. This seems to suggest then that, as philosopher John Gray concludes in Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here):  

“[If] human nature [is] scientifically remodelled… it will be done haphazardly, as an upshot of the struggles in the murky world where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

References  

Parfrey (1987) Eugenics: The Orphaned Science. In Parfrey (Ed.) Apocalypse Culture (New York: Amoc Press). 

Trivers, R. (1971) The evolution of reciprocal altruism. Quarterly Review of Biology 46(1): 35-57. 

Vining, D. (1986) Social versus reproductive success: The central theoretical problem of human sociobiology. Behavioral and Brain Sciences 9(1): 167-187.

The Decline of the Klan and of White (and Protestant) Identity in America

Wyn Craig Wade, The Fiery Cross: The Ku Klux Klan in America (New York: Simon and Schuster, 1987). 

Given the infamy of the organization, it is surprising that there are so few books that cover the entire history of the Ku Klux Klan in America. 

Most seem to deal with only one period (usually, but not always, either the Reconstruction-era Klan or the Second Klan that reached its apotheosis during the twenties), one locality, or indeed only a single time and place.

On reflection, however, this is not really surprising. 

For, though we habitually refer to the Ku Klux Klan, or the Klan (emphasis on ‘the’), as if it were a single organization that has been in continuous existence since its first formation in the Reconstruction-era, there have in fact been many different groups calling themselves ‘the Ku Klux Klan’, or some slight variant upon this name (e.g. ‘Knights of the Ku Klux Klan’, ‘United Klans of America’), that have emerged and disappeared over the century and a half since the name was first coined in the aftermath of the American Civil War.

Most of these groups had small memberships, recruited and were active in only a single locality and soon disappeared altogether. Yet even those incarnations of the Klan name that had at least some claim to a national, or at least a pan-Southern, membership invariably lacked effective centralized control over local klaverns.

Thus, Wade observes: 

“After the Klan had spread outwards from Tennessee, there wasn’t the slightest chance of central control over it – a problem that would characterize the Klan throughout its long career” (p58). 

It is perhaps for this reason that most historians authoring books about the Klan have focussed on Klan activity in only a single time-frame or geographic locality.

Indeed, it is notable that, besides Wade’s ‘The Fiery Cross’, the only other work of which I am aware that even purports to cover the entirety of the Klan’s history (apart from the recently published White Robes and Burning Crosses, which I have not yet read) is David Chalmers’ Hooded Americanism: The History of the Ku Klux Klan.

Yet even this latter work (‘Hooded Americanism’), though it purports in its blurb to be “The only work that treats Ku Kluxism for the entire period of it’s [sic] existence”, actually devotes only a single, short, cursory chapter to the Reconstruction-era Klan, when the group was first founded, arguably at its strongest, and certainly at its most violent.

Moreover, ‘Hooded Americanism’ is composed of separate chapters recounting the history of the Klan in different states in each time period, such that the book lacks an overall narrative structure and is difficult to read. 

In contrast, for those with an interest in the topic, Wade’s ‘The Fiery Cross’ is both readable and informative, and somehow manages to weave the story of the various Klan groups in different parts of the country into a single overall narrative. 

A College Fraternity Turned Terrorist? 

If, today, the stereotypical Klansman is an illiterate redneck, it might come as some surprise that the group’s name actually bears an impressively classical etymology. It derives from the ancient Greek kuklos, meaning ‘circle’. To this was added ‘Klan’, both for alliterative purposes, and in reference to the ostensible Scottish ancestry of the group’s founders.[1]

This classical etymology reflected the social standing and educational background of its founders, who, far from being illiterate rednecks, were, Wade reports, “well educated for their day” (p32). 

Thus, of the six founder members, two would go on to become lawyers, another would become editor of a local newspaper, and yet another a state legislator (p32). 

Neither, seemingly, was the group formed with any terroristic, or even any discernible political, aspirations in mind. Instead, one of these six founder members, the, in retrospect, perhaps ironically-named James Crow, claimed their intention was initially: 

“Purely social and for our amusement” (p34). 

Since, as a good white Southerner and Confederate veteran, Crow likely approved the politics with which the Klan later became associated, he had no obvious incentive to downplay a political motive. Certainly, Wade takes him at his word. 

Thus, if the various Klan titles – Grand Goblin, Imperial Wizard, etc. – sound more like what one might expect in, say, a college fraternity than a serious political or terrorist group, then this perhaps reflects the fact that the organization was indeed conceived with just such adolescent tomfoolery in mind. 

Indeed, although it is not mentioned by Wade, it has been suggested that a then-defunct nineteenth-century fraternity, Kuklos Adelphon, may even have provided a partial model for the group. Thus, Wade writes: 

“It has been said that, if Pulaski had had an Elks Club, the Klan would never have been born” (p33). 

White Sheets and Black Victims 

However, from early on, the group’s practical jokes increasingly focussed on the newly-emancipated, and already much resented, black population of Giles County.

Yet, even here, intentions were initially jocular, if mean-spirited. Thus, the white sheets famously worn by Klansmen were, Wade informs us, originally conceived in imitation of ghosts, the wearers ostensibly posing as: 

“The ghosts of the Confederate dead, who had risen from their graves to wreak vengeance on [the blacks]” (p35). 

This accorded with the then prevalent stereotype of black people as being highly superstitious. 

However, it is likely that few black victims were taken in. Instead, the very real fear that the Klan came to inspire in its predominantly black victims reflected instead the also very real acts of terror and cruelty with which the group became increasingly associated. 

The sheets also functioned, of course, as a crude disguise.  

However, it was only when the Klan name was revived in the early twentieth century, and through the imagination of its reviver, William Joseph Simmons, that this crude disguise was transformed into a mysterious ceremonial regalia, the sale of which was jealously guarded, and an important source of revenue for the Klan leadership. 

Indeed, in the Reconstruction-era Klan, the sheets, though a crude disguise, would not even qualify as a uniform, as there was no standardization whatsoever. Instead:  

“Sheets, pillowcases, handkerchiefs, blankets, sacks… paper masks, blackened faces, and undershirts and drawers were all employed” (p60).  

Thus, Wade reports the irony whereby one: 

“Black female victim of the Klan was able to recognise one of her assailants because he wore a dress she herself had sewed for his wife” (p60). 

Chivalry – or Reproductive Competition? 

Representing perhaps the original white knights, Klansmen claimed to be acting in order to protect the ostensible virtue and honour of white women. 

However, at least in Wade’s telling, the rapes of white women by black males, upon which white Southern propaganda so pruriently dwelt (as prominently featured, for example, in the movie, Birth of a Nation, and the book upon which the movie was based, The Clansman: A Historical Romance of the Ku Klux Klan) were actually very rare. 

Indeed, he even quotes a former Confederate General, and alleged Klansman, seemingly admitting as much when, on being asked whether such assaults were common, he acknowledged: 

“Oh no sir, but one case of rape by a negro upon a white woman was enough to alarm the whole people of the state” (p20). 

Certainly, the Emmett Till case demonstrates that even quite innocuous acts could indeed invite grossly disproportionate responses in the Southern culture of honour, at least where the perceived malfeasors were black. Thus, Wade claims: 

“Sometimes a black smile or the tipping of a hat were sufficient grounds for prosecution for rape. As one southern judge put it, ‘I see a chicken cock drop his wings and take after a hen; my experience and observation assure me that his purpose is sexual intercourse, no other evidence is needed’” (p20). 

Likewise, such infamous cases as the Scottsboro boys and Groveland four illustrate that false allegations were not unknown in the American South. Indeed, false rape allegations remain common to this day.

However, I remain skeptical of Wade’s claim that black-on-white rapes were quite as rare as he makes out. 

After all, American blacks have had high rates of violent crime ever since records began, and, as contemporary racists are fond of pointing out, today, black-on-white rape is actually quite common, at least as compared to other victim-offender dyads. 

Thus, in Paved with Good Intentions: The Failure of Race Relations in Contemporary America, published in 1992, Jared Taylor reports: 

“In a 1974 study in Denver, 40 percent of all rapes were of whites by blacks, and not one case of white-on-black rape was found. In general, through the 1970s, black-on-white rape was at least ten times more common than white-on-black rape… In 1988 there were 9,406 cases of black-on-white rape and fewer than ten cases of white-on-black rape. Another researcher concludes that in 1989, blacks were three or four times more likely to commit rape than whites and that black men raped white women thirty times as often as white men raped black women” (Paved with Good Intentions: p93). 

Indeed, the authors of one recent textbook on criminology even claim that: 

Some researchers have suggested, because of the frequency with which African Americans select white victims (about 55 percent of the time), it [rape] could be considered an interracial crime” (Criminology: A Global Perspective: p544).[2] 

At any rate, Southern chivalry was rather selectively accorded, and certainly did not extend to black women. 

Indeed, Wade claims that Klansmen themselves, employing a blatant double-standard and rank hypocrisy, actually themselves regularly raped black women during their raids: 

“The desire for group intercourse was sometimes sufficient reason for a den to go out on a raid…. Sometimes during a political raid, Klansmen would rape the female members of the household as a matter of course” (p76). 

As someone versed in sociobiological theory and evolutionary psychology, I am tempted to see these double-standards in sociobiological terms as a form of reproductive competition, designed to maximize the reproductive success of the white males involved, and indeed of the white race in general.

Thus, for white men, it was open season on black women, but white women were strictly off-limits to black men: 

“In Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation. White men soon learned that women placed on a pedestal acted like statues in bed, and they came to prefer the female slave whom they found open and uninhibited… The more white males turned to female slaves, the more they exalted their own women, who increasingly became a mere ornament and symbol of the Southern way of life” (p20). 

Klan Success? 

The Klan came to stand for the reestablishment of white supremacy and the denial of voting rights to blacks. 

In the short-term, at least, these aims were to be achieved, with the establishment of segregation and effective disenfranchisement of blacks throughout much of the South. Wade, however, denies the Klan any part in this victory: 

“The Ku-Klux Klan… didn’t weaken Radical Reconstruction nearly as much as they nurtured it. So long as an organized secret conspiracy swore oaths and used cloak and dagger methods in the South, Congress was willing to legislate against it… Not until the Klan was beaten and the former confederacy turned to more open methods of preserving the Southern way of life did Reconstruction and its Northern support decline” (p109-110). 

Thus, it was, Wade reports, not the Klan, but rather other groups, today largely forgotten, such as Louisiana’s White League and South Carolina’s Red Shirts, that were responsible for successfully scaring blacks away from the polls and ensuring the return of white supremacy in the South. Moreover, he reports that they were able to do so only because the federal laws enacted to tackle the Klan had ceased to be enforced, precisely because the Klan itself had ceased to represent a serious threat. 

On this telling, then, the First Klan was, politically, a failure. In this respect, it was to set the model for later Klans, which would fight a losing rearguard action against Catholic immigration and the civil rights movement. 

Resurrection 

If the First Klan was a failure, why then was it remembered, celebrated and ultimately revived, while other groups, such as the White League, Red Shirts and Knights of the White Camelia, which employed similar terrorist tactics in pursuit of the same political objectives, are today largely forgotten? 

Wade does not address this, but one suspects the outlandishness of the group’s name and ceremonial titles contributed, as did the fact that the Klan seems to have been the only such group active throughout the entirety of the former Confederacy

The reborn Klan, founded in the early twentieth century, was the brainchild of William Joseph Simmons, a self-styled professional ‘fraternalist’, alumnus of countless other fraternal organizations, Methodist preacher, strict prohibitionist and rumoured alcoholic. 

It is to him that credit must go for inventing most of the ritualism (aka ‘Klancraft’) and terminology (including the very word ‘Klancraft’) that came to be associated with the Klan in the twentieth century. 

‘Birth of a Nation’ and the Rebirth of the Klan 

Two further factors contributed to the growth and success of the reborn Klan. First was the spectacularly successful 1915 release of the movie, The Birth of a Nation.

Deplored for its message yet grudgingly admired for its technical and artistic achievement, this film occupies a curious place in film history, roughly comparable to that of Leni Riefenstahl’s Nazi propaganda film, Triumph of the Will. (Sergei Eisenstein’s Communist and Stalinist propaganda films curiously, but predictably, receive a free pass.) 

In this movie, pioneering filmmaker DW Griffith is credited with inventing much of the grammar of modern moviemaking. If, today, it seems distinctly unimpressive, if not borderline unwatchable, this is, not only because of the obvious technological limitations of the time period, but also precisely because it invented many of the moviemaking methods that cinema-goers, and television viewers, have long since learnt to take for granted (e.g. cross-cutting). 

Yet, if its technical and artistic innovations have won the grudging respect of film historians, its message is, of course, wholly anathema to modern western sensibilities. 

Thus, donning the same pair of rose-tinted spectacles as the author of Gone with the Wind in its depiction of the antebellum American South, ‘Birth of a Nation’ went even further, portraying blacks during the Reconstruction period as rampant rapists salivating after the flesh of white women, and Klansmen as heroic white knights who saved white womanhood, and indeed the South itself, from the ravages both of Reconstruction and of Southern blacks. 

Yet, though it achieved unprecedented box-office success, even being credited as the first modern blockbuster, the movie was controversial even for its time. 

It even became the first movie to be screened in the White House, when, as a favour to Thomas Dixon, the author of the novel upon which the movie was based, the film received an advance, pre-release screening for the benefit of the then-President, Woodrow Wilson, a college acquaintance of Dixon – though what the President thought of it is a matter of dispute.[3]

Indeed, such was the controversy that the movie was to provoke that the nascent NAACP, itself formed only a few years earlier, even launched a campaign to have the film banned outright (p127-8). 

This, of course, gives the lie to the notion that the political left was, until recent times, wholly in favour of freedom of speech and artistic expression. 

Actually, even then, the Left’s commitment to freedom of expression was, it seems, highly selective, just as it is today. Thus, it was one thing to defend the rights of raving communists, quite another to apply the same principle to racists. 

The Murders of Mary Phagan and Leo Frank 

Another factor in the successful resurrection of the Klan was two murders that galvanized popular opinion in the South, and indeed the nation. 

First was the rape and murder of Mary Phagan, a thirteen-year-old factory girl in Atlanta, Georgia. Second was the lynching of Leo Frank, her boss and ostensible murderer, who was convicted of her murder and sentenced to death, only for this sentence to be commuted to life imprisonment, whereupon he was lynched by outraged locals. 

His lynching was carried out by a group styling themselves ‘The Knights of Mary Phagan’, many of whom would go on to become founder members of the newly reformed Klan. 

It was actually this group, not the Klan itself, which would establish a famous Klan ritual, namely the ascent of Stone Mountain to burn a cross, a ritual Simmons would repeat to inaugurate his nascent Klan a few months later.[4]

Yet, in the history of alleged miscarriages of justice in the American South, the lynching of Leo Frank stands very much apart. 

For one thing, most victims of such alleged miscarriages of justice were, of course, black. Yet Leo Frank was a white man. 

Moreover, most of his apologists insist that the real perpetrator was, in fact, a black man. They are therefore in the unusual position of claiming racism caused white Southerners to falsely convict a white man when they should have pinned the blame on a black man instead.

It is true, of course, that Frank was also Jewish. However, there was little history of anti-Semitism in the South. Indeed, I suspect there was more prejudice against him as a wealthy Northerner who had come south for business purposes, and hence as, in Southern eyes, a ‘Yankee carpetbagger’.

Moreover, although his lynching was certainly unjustified, and his conviction possibly unsafe, it is still not altogether clear that Frank was indeed innocent of the murder of which he stood accused.[5]

Wade himself admits that there was some doubt as to his innocence at the time. However, he refers to a deathbed statement by an elderly witness some seventy years later in 1982 as finally proving his innocence: 

“Not until 1982 would Frank’s complete innocence come to light as a result of a witness’s deathbed statement” (p143). 

However, a claim made, not in court, but rather to the press for a headline (albeit also in a signed affidavit), by an elderly, dying man, regarding things he had supposedly witnessed some seventy years earlier when he was himself little more than a child, is obviously open to question.

At any rate, it is interesting to note that Frank’s lynching played an important role, not only in the founding of the Second Klan, but also in the genesis of another political pressure group whose influence on American social, cultural and political life has far outstripped that of the Klan and which, unlike the Second Klan, survives to this day – namely the Anti-Defamation League of B’nai B’rith, or ADL. 

The parallels abound. Just as the Second Klan was a fraternal organization for white protestants, so B’nai B’rith, the organization which birthed the ADL, was a fraternal order for Jews, and Frank himself, surely not coincidentally, was president of the Atlanta chapter of the group. 

The organizational efforts of B’nai B’rith to protect Frank, a local chapter president, from punishment can therefore be viewed as analogous to the way in which the Klan itself sought to protect its own members from successful prosecution through its own corrupt links in law enforcement and government and on juries. 

Moreover, just as the Klan was formed to defend and promote the interests of white Christian protestants, so the ADL was formed to protect the interests of Jews.

However, the ADL was to prove far more successful in this endeavour than the Klan had ever been.[6]

Klan Enemies 

Jews were not, however, the primary objects of Klan enmity during the twenties – and neither, perhaps surprisingly, were blacks. 

This was, after all, the period that later historians have termed ‘the nadir of American race relations’, when, throughout the South, blacks were largely disenfranchised, and segregation firmly entrenched. 

Yet, from a white racialist perspective, the era is misnamed.[7] Far from a nadir, for white racialists the period represented something like a utopia, lost Eden or Golden Age.[8] 

White supremacy was firmly entrenched and not, it seemed, under any serious threat. The so-called civil rights movement had barely begun.

Of course, then as now, race riots did periodically puncture the apparent peace – at Wilmington in 1898, Springfield in 1908, Tulsa in 1921, Rosewood in 1923, and throughout much of America in 1919. 

However, unlike contemporary American race riots, these typically took the form of whites attacking blacks rather than vice versa, and, even when the latter did occur, white solidarity was such that the whites invariably gave at least as good as they got.[9]

Thus, in early-twentieth century America, unlike during Reconstruction, there was no need for a Klan to suppress ‘uppity’ blacks. On the contrary, blacks were already adequately suppressed.  

Thus, if the Second Klan was to have an enemy worthy of its enmity, and a cause sufficient to justify its resurrection, and, more important, sufficient to persuade prospective inductees to hand over their membership dues, it would have to look elsewhere. 

To some extent the enemy selected varied on a regional basis, depending on the local concerns of the population. The Klan thus sought, like Hitler’s later NSDAP, to be ‘all things to all men’, and, for some time before it hit upon a winning strategy, the Klan flitted from one issue to another, never really finding its feet. 

However, to the extent the Second Klan, at the national level, was organized in opposition to a single threat or adversary, it was to be found neither in Jews nor blacks, but rather in Catholics. 

Anti-Catholicism 

To modern readers, the anti-Catholicism of the Second Klan seems bizarre. Modern Americans may be racist and homophobic in ever decreasing numbers, but they at least understand racism and homophobia. However, anti-Catholicism of this type, especially in so relatively recent a time period, seems wholly incomprehensible.

Indeed, the anti-Catholicism of the Second Klan is now something of an embarrassment even to otherwise unreconstructed racists and indeed to contemporary Klansmen, and is something they very much disavow and try to play down. 

Thus, anti-Catholicism, at least of this kind, is now wholly obsolete in America, and indeed throughout the English-speaking world outside of Northern Ireland – and perhaps Ibrox Football stadium for ninety minutes on alternate Saturdays for the duration of the Scottish football season. 

It seems something more suited to cruel and barbaric times, such as England in the seventeenth century, or Northern Ireland in the 1970s… or, indeed, Northern Ireland today. But in twentieth century America? Surely not. 

How then can we make sense of this phenomenon? 

Partly, the Klan’s anti-Catholicism reflected the greater religiosity of the age. In particular, the rise of the Second Klan was, at least in Wade’s telling, intimately linked with the rise of Christian fundamentalism in opposition to reforming practices (the so-called Social Gospel) in the early twentieth century.

Indeed, under its first Imperial Wizard, William Joseph Simmons, a Methodist preacher, the new Klan was initially more of a religious organization than it was a political one, and Simmons himself was later to lament the Klan’s move into politics under his successor.[10]

There was, however, also a nativist dimension to the Klan’s rabid anti-Catholicism, since, although Catholics had been present among the first settlers of North America and numbered even among the founding fathers, Catholicism was still associated with recent immigrants to the USA, especially Italians, Irish and Poles, who had yet to fully assimilate into the American mainstream. 

Catholics were also seen as inherently disloyal, as the nature of their religious affiliation (supposedly) meant that they owed ultimate loyalty, not to America, but rather to the Pope in Rome.  

This idea seems to have been a cultural inheritance from the British Isles.[11] In England, Catholics had long been viewed as inherently disloyal, and as desirous to overthrow the monarchy and restore Britain to Catholicism, as, in an earlier age, many had indeed sought to do. 

This view is, of course, directly analogous to the claim of many contemporary Islamophobes and counter-Jihadists today that the ultimate consequence of Muslim immigration into Europe will be the imposition of Shariah law across Europe.

However, even in the twenties, during the Second Klan’s brief apotheosis, their anti-Catholicism already seemed, in Wade’s words, “strangely anachronistic”, to the point of being “almost astounding” (p179).

Thus, as anti-Catholicism waned as a serious organizing force in American social and political (or even religious) life, it soon became clear that the Klan had nailed its colours to the mast of a sinking ship. As anti-Catholic sentiment declined among the American population at large, so the Klan attempted to distance itself from its earlier anti-Catholicism.[12]

First, anti-Catholicism was simply deemphasized by the Klan in favour of new enemies like communism, trade unionism and the burgeoning civil rights movement. 

Eventually, in the Sixties, the United Klans of America, the then dominant Klan faction in America, announced, during “an all-out crusade for new members”, that: 

“Catholics were now welcome to join the Klan – the Communist conspiracy more than made up for the Klan’s former anti-Catholic fears of Americans loyal to a foreign power” (p328). 

The decline of anti-Catholicism provides, then, an optimistic case-study of the remarkable speed with which (some) prejudices can be overcome.[13]

It also points to an ironic side-effect of the gradual move towards greater tolerance and inclusivity in American society – namely, even groups ostensibly opposed to this process have nevertheless been affected by it. 

In short, even the Klan has become more tolerant and inclusive. 

Losing Land and Territory

For many nationalists, racial and ethnic conflict is ultimately a matter of competition for territory and land.

It is therefore of interest that the decline of the Klan, and of white protestant identity in the USA, was itself foreshadowed by two land sales, one in the early twenties, when Klan membership was at a peak, and a second just over a decade later, when the decline was already well underway.

First, in the early-twenties, the Klan’s boldly envisaged Klan University had gone bankrupt. The land was sold and a synagogue was constructed on the site. 

Then, under financial pressure in the 1930s as the Depression set in, the Klan was forced to sell even its main headquarters in Atlanta. 

If selling a Klan university only to see a synagogue constructed on the same site was an embarrassment, then the eventual purchaser of the Klan headquarters was to be an even greater Klan enemy – the Catholic Church. 

Thus, the erstwhile site of the Klan’s grandly-titled Imperial Palace became a Catholic cathedral

Perhaps surprisingly, and presumably in an effort at rapprochement and reconciliation, the new cathedral’s hierarchy reached out to the Klan by inviting the then Imperial Wizard, Hiram Evans, who had outmanoeuvred Simmons for control of the then-lucrative cash-cow during the Klan’s twenties heyday, to the new cathedral’s inaugural service. 

Perhaps even more surprisingly, Evans actually accepted the invitation. Afterwards, even more surprisingly still, he was quoted as observing: 

“It was the most ornate ceremony and one of most beautiful services I ever saw” (p265). 

More beautiful even than a cross-burning!

Evans was forced to resign immediately afterwards. However, in deemphasizing anti-Catholicism, he correctly gauged the public mood, and the Klan was later, if belatedly, to follow his lead. 

The Turn to Terror 

The Klan is seemingly preadapted to terror. However benign the intentions of its successive founders, each Klan descended into violence. 

If the First Klan was formed as a sort of college fraternity, the Second Klan seems to have been conceived primarily as a money-making venture, and hence, in principle, no more inherently violent than the Freemasons or the Elks. 

Yet the turn to terror was perhaps, in retrospect, inevitable. After all, this new Klan had been modelled on what had been, or at least become, a terrorist group (namely, the First Klan), employed masks, and, from the lynching of Leo Frank, had associated itself with vigilantism from the very outset. 

Interestingly, although precise data is not readily available, one gets the distinct impression that, during this era of Klan activity, most of the victims of its violence were not blacks, nor even Catholics, but rather the very white protestant Christians whom the Klan ostensibly existed to protect, or, more specifically, those among this community who had somehow offended against the values of the community, or simply offended Klansmen themselves. 

Of course, lynchings of blacks continued, at least in the South. But these were rarely conducted under the auspices of the Klan, since these were a longstanding tradition that long predated the Klan’s re-emergence, and the perpetrators of such acts rarely felt the need to wear masks to conceal their identities, let alone don the elaborate apparel, and pay the requisite membership dues, of the upstart Klan.[14]

But Klan violence per se did not always deter new members. On the contrary, some seem to have been attracted by it. Thus, Klan recruiters (‘Kleagles’) at first maintained that newspaper exposés amounted to free publicity and only helped them in their recruitment drive. 

Instead, Wade claims, more than violence, it was the perceived hypocrisy of Klan leaders which ultimately led to the group’s demise (p254).  

Thus, the Klan purported to champion prohibition, temperance and Christian values, yet had been founded by Simmons, a rumoured alcoholic, while its (hugely successful) marketing and recruitment campaign was headed by Edward Young Clarke and Mary Elizabeth Tyler of the Southern Publicity Association, who were openly engaged in an extra-marital affair with one another. 

However, the most damaging scandal to hit the Klan, which, as we have seen, purported to champion Prohibition and the protection of the sanctity of white womanhood, combined violence, drunkenness and hypocrisy, and occurred when DC ‘Steve’ Stephenson, a hugely successful Indiana Grand Dragon, was convicted of the rape, kidnap and murder of Madge Oberholtzer, herself a white protestant woman, during a drunken binge. 

In fact, by the time of the assault, Stephenson had already split from the national Klan to form his own rival, exclusively Northern, Klan group. However, his former prominence in the organization meant that, though they might disclaim him, the Klan could never wholly disassociate themselves from him.  

It seems to have been this scandal more than any other which finally discredited the Klan in the minds of most Americans. Thus, Wade concludes: 

“The Klan in the twenties began and ended with the death of an innocent young girl. The Mary Phagan-Leo Frank case had been the spark that ignited the Klan. And the Oberholtzer-Stephenson case had put out the fire” (p247). 

Decline 

Thenceforth, the Klan’s decline was as rapid and remarkable as its rise. Thus, Wade reports: 

“In 1924 the Ku Klux Klan had boasted more than four million members. By 1930, that number had withered to about forty-five thousand… No other American movement has ever risen so high and fallen so low in such a short period” (p253). 

Indeed, even its famous 1925 march on Washington “proved to be its most spectacular last gasp”, attracting, Wade reports, “only half of the sixty thousand expected” (p249): 

“The National gathering of thirty thousand was less than what [DC Stephenson] could have mustered in Indiana alone during the Klan’s heyday” (p250). 

Not only did numbers decline; so too did the membership profile. 

Thus, initially, the new group had attracted members from across the socioeconomic spectrum of white protestant America, or at least among all those who could afford the membership dues. Indeed, analyses of surviving membership rolls suggest that the Klan in this era was, at first, a predominantly middle-class group representing what was then the heart of Middle America

However, probably as a consequence of the revelations of violence, the respectable classes increasingly deserted the group:

“Klan defections began with the prominent, the educated and the well-to-do, and proceeded down through the middle-class” (p252). 

Thus, the stereotype of the archetypal Klansman as an uneducated, semi-literate, tattooed, beer-swilling redneck gradually took hold. 

Indeed, from 1926 or so, the Klan even sought to reclaim this image as a positive attribute, portraying themselves as, in their own words, “a movement of plain people” (p252). 

But this marketing strategy, in Wade’s telling, badly backfired, since even less well-off, but ever aspirant, Americans hardly wanted to associate themselves with a group that admitted to being uneducated hicks (Ibid.). 

As well as narrowing in its socioeconomic profile, Klan membership also retreated geographically. 

Thus, in its brief heyday, the Second Klan, unlike its Reconstruction-era predecessor, had had a truly national membership. 

Indeed, the state with the largest membership was said to be Indiana, where DC ‘Steve’ Stephenson, in the few years before his dramatic downfall, built up a one-man political machine that briefly came to dominate politics in the Hoosier State. 

However, in the aftermath of the fall of Stephenson and his Indiana Klan, the Klan was to haemorrhage members in not just Indiana, but throughout the North. The result was that: 

“By 1930, the Klan’s little strength was concentrated in the South. Over the next half-century the Klan would gradually lose its Northern members, regressing more and more closely towards its Reconstruction ancestor until, by the 1960s, it would stand as a near-perfect replica” (p252). 

Thenceforth, the Klan was to remain, once again, a largely Southern phenomenon, with what little numerical strength it retained overwhelmingly concentrated in the states of the former Confederacy. 

Death and Taxes – The Only Certainties in Life 

The Second Klan was finally destroyed, however, not by declining membership, violent atrocities, bad publicity and inept brand-management, nor even by government prosecution, though all these factors did indeed play a part.  

Rather, the final nail in the Klan’s coffin was hammered in by the taxman. 

In 1944, the Internal Revenue Service demanded restitution in respect of unpaid taxes due on the profits earnt from subscription dues during the Klan’s brief but lucrative 1920s membership boom (p275). 

The Klan, which had been haemorrhaging members even before the 1930s Depression, and, unlike the economy as a whole, had yet to recover, was already in a dire financial situation. Therefore, it could never hope to pay the monies demanded by the government, and instead was forced to declare bankruptcy (p275). 

With that, the Second Klan was no more. 

Ultimately, then, the government destroyed the Klan the same way it had Al Capone – failure to pay their taxes! 

The Klan and the Nazis – A Match Made in Hell? 

In between recounting the Klan’s decline, Wade also discusses its supposed courtship of, or by, the pro-Nazi German-American Bund

Actually, however, a careful reading of Wade’s account suggests that he exaggerates the extent of any such association. 

Thus, it is notable, if bizarre, that, in Wade’s own telling, the Bund’s leader, German-born Fritz Julius Kuhn, in seeking the “merging of the Bund with some native American organization who would shield it from charges of being a ‘foreign’ agency”, had first set his sights on that most native of “native American organizations” – namely, Native Americans (p269-70). 

When this quixotic venture inevitably ended in failure, if only due to “profound indifference on the Indians’ part”, only then did the rebuffed Kuhn turn his spurned attentions to the Klan (p270). 

Yet the Klan seemed to have been almost as resistant to Kuhn’s advances as the Native Americans had been. Thus, Wade quotes Kuhn as admitting, somewhat ambiguously:

“The Southern Klans did not want to be known in it… So the negotiations were between representatives of the Klans in New Jersey and Michigan, but it was understood that the Southerners were in” (p270). 

Yet, by this time, in Wade’s own telling, the Klan was extremely weak in Northern states such as New Jersey and Michigan, and what little numerical strength it retained was concentrated in the Southern states of the former Confederacy. 

This suggests that it was only the already marginalized northern Klan groups who, bereft of other support, were willing to entertain the notion of an alliance with the Bund. 

If the Southern Klan leadership was indeed aware of, and implicitly approved, the link, it was nevertheless clear that they wanted to keep any such association indirect and at an arm’s length, hence maintaining plausible deniability

This is perhaps the only way we can make sense of Kuhn’s acknowledgement, on the one hand, that “the Southern Klans did not want to be known in it”, while, on the other, that “it was understood that the Southerners were in” (p270). 

Thus, when negative publicity resulted from the joint Klan-Bund rally in New Jersey, the national (i.e. Southern) Klan leadership was quick to distance itself from and disavow any notion of an alliance, promptly relieving the New Jersey Grand Dragon of his office.

On reflection, however, this is little surprise.

For one thing, German-Americans, especially those willing to flagrantly flaunt their ‘dual loyalty’ by joining a group like the German-American Bund, were themselves exactly the type of hyphenated-Americans that the 100% Americans of the Klan professed to despise.

Indeed, though they may have been white and (mostly) protestant, German-Americans’ own integration into the American mainstream was, especially after the anti-German sentiment aroused during the First World War, still very much incomplete. 

Today, of course, we might think of Nazis and the Klan as natural allies, both being, after all, that most reviled species of humanity – namely, white racists.

However, besides racialism, the Klan and the Nazis actually had surprisingly little in common. 

After all, the Klan was a Protestant fundamentalist group opposed to Darwinism and the teaching of evolutionary theory in schools.

Hitler, in contrast, was an ardent social Darwinist, who was reported by his confidants as harbouring a profound antipathy to the Christian faith, albeit one he kept out of his public pronouncements for reasons of political expediency, and some of whose followers even championed a return to Germanic paganism.[15]

Indeed, even their shared racialism was directed primarily towards different targets.

In Germany, blacks, though indeed persecuted by the Nazis, were few in number, and hence not a major target of Nazi propaganda or animosity; nor were Catholics among the groups targeted for persecution, Hitler himself having been raised as a Catholic in his native Austria.[16]

Yet, if Catholics were not among the groups targeted for persecution by the Nazis, members of secret societies like the Klan very much were. 

Thus, among the less politically-fashionable targets for persecution by the Nazis were both the Freemasons and indeed the closest thing Germany had to a Ku Klux Klan. 

Thus, in 1923, a Klan-like group, “the German Order of the Fiery Cross”, had been founded in imitation of the Klan by an expatriate German on his return to the Fatherland from America (p266). 

Yet, ironically, it was Hitler himself who ultimately banned and suppressed this German Klan imitator (p267). 

The Third Klan/s 

The so-called Third Klan was really not one Klan, but many different Klans, each not only independent of one another, but also often in fierce competition with one another for members and influence. 

They filled the vacuum left by the defunct Second Klan and competed to match its size, power and influence – though none were ever to succeed. 

From this point on, indeed, it is no longer really proper to speak of ‘the’ Klan at all, the various successor groups having few if any institutional connections with one another. 

Moreover, the different Klan groups varied more than ever in their ethos and activity. Thus, Wade reports: 

“Some Klans were quietly ineffective, some were violent and some were borderline psychotic” (p302). 

With no one group maintaining a registered trademark over the Klan ‘brand’, inevitably the atrocities committed by one group ended up discrediting even other groups with no connection to them. The Klan ‘brand’ was irretrievably damaged, even among those who might otherwise be attracted to its ideology and ethos.[17] 

Indeed, the plethora of different groups was such that even Klansmen themselves were confused, one Dragon complaining: 

“The old countersigns and passwords won’t work because all Klansmen are strangers to each other” (p302). 

Increasingly, opposition to the burgeoning Civil Rights Movement, rather than to Catholicism, now seems to have become the Klan’s chief preoccupation and the primary basis upon which Klaverns, and Kleagles, sought to attract recruits. 

However, respectable opposition to desegregation throughout the South was largely monopolized by the Citizens’ Councils.

Indeed, in Wade’s telling, “preventing a build-up of the Ku Klux Klan” was, quite as much as opposing desegregation, one of the principal objectives for which the Citizens Councils had been formed, since “violence was bad for business, and most of the council leaders were businessmen” (p299). 

If this is true, then perhaps the Citizens Councils were more successful in achieving their objectives than they are usually credited as having been. Segregation, of course, was gone and did not come back – but, then again, neither did the Klan. 

Yet, in practice, Wade reports, the main impact of the Citizens Councils on the Klan was: 

“Not so much eliminating the Klan as leaving it with nothing but the violence-prone dregs of Southern white society” (p302). 

Thus, the Klan’s image, and the characteristic socioeconomic status of its membership profile, declined still further. 

The electoral campaigns of the notorious segregationist and governor of Alabama George Wallace also had a similar effect. Thus, Wade reports: 

“Wallace’s campaigns… swallowed a lot of disaffected Klansmen. In fact, Wallace’s campaigns offered them the first really viable alternative to the Klan” (p364). 

Political Cameos and Reinventions 

Here in Wade’s narrative, the myriad of disparate Klan groups inevitably fade into the background, playing a largely reactive, and often violent but nevertheless largely ineffective, and often outright counterproductive, role in opposing desegregation. 

Instead, the starring role is taken, in Wade’s own words, by: 

“Two men who were masters of the electronic media: an inspired black minister, Martin Luther King, and a pragmatic white politician, JFK, who would work in an uneasy but highly productive tandem” (p310). 

Actually, in my view, it would be more accurate to say that the starring role was taken by two figures who are today vastly overrated on account of their respective early deaths by assassination, and consequent elevation to martyr status. 

In fact, however, while Wade’s portrait of King is predictably hagiographic, that of Kennedy is actually refreshingly revisionist. 

Far from the liberal martyr of contemporary left-liberal imagining, Kennedy was, in Wade’s telling, only a “pragmatic white politician”, and moreover only a rather late convert to the African-American civil rights movement

Indeed, before he first took office, Wade reports, Kennedy had actually endorsed the Dunning School of historiography regarding the Reconstruction era, was critical of Eisenhower’s having sent federal troops into Arkansas to enforce desegregation, and only reluctantly, when his hand was forced, himself sent the National Guard into Alabama (p317-22). 

Meanwhile, another political figure making a significant cameo appearance in Wade’s narrative, ostensibly on the opposite side of the debate over desegregation, is the notorious segregationist governor of Alabama, George Wallace

Yet Wade’s take on Wallace is, in many respects, as revisionist as his take on Kennedy. Thus, far from a raving racist and staunch segregationist, Wade argues: 

“In retrospect… no one used and manipulated the Klansmen more than Wallace. He gave them very few rewards for their efforts on his behalf: often his approval was enough. And in spite of his fiery cant and cries of ‘Never!’ that so thrilled Klansmen, Wallace was a former judge who well understood the law – especially how far he could bend it” (p322). 

Thus, Wade reports, while it is well-known that Wallace famously blocked the entrance to the University of Alabama preventing black students from entering, what is less well-known is that: 

“When the marshals asked for the black students to be admitted in the afternoon, Wallace quietly stepped aside. Instead of being recognized, at best, as a practical politician or, at worst, a pompous coward, Wallace was instead hailed by Klansmen as a dauntless hero” (p322). 

Thus, if Kennedy was, in Wade’s telling, “a pragmatic white politician”, then Wallace emerges as an outright political chameleon and shameless opportunist. 

As further evidence for this interpretation, what Wade does not get around to mentioning is that, in his first run for the governorship of Alabama in 1958, Wallace had actually spoken against the Klan and been backed by the NAACP, only vowing after his defeat, as he was eloquently quoted as observing, ‘never to be outniggered again’, and hence reinventing himself as an (ostensible) arch-segregationist. 

Neither does Wade mention that, in his last run for governor in 1982, reinventing himself once again as a born-again Christian, Wallace actually managed to win over 90% of the black vote.

Yet even Wallace’s capacity for political reinvention is outdone by that of one of his supporters and speech-writers, former Klan leader Asa ‘Ace’ Carter, a man so notorious for his racism that even Wallace denied employing him, but who was supposedly responsible for penning the words to Wallace’s infamous “segregation now, segregation tomorrow, segregation forever” speech.

Expelled from a Citizens’ Council for extremism, Carter had then founded and briefly reigned as tin pot führer of one of the most violent Klan outfits – “the Original Ku Klux Klan of the Confederacy, which resembled a cell of Nazi storm troopers” (p303). 

This group was responsible for one of the worst Klan atrocities of the period, namely the literal castration of a black man, whom they: 

“Castrated… with razor blades; and then tortured… by pouring kerosene and turpentine over his wounds” (p303). 

This gruesome act was, according to a Klan informant, performed for no better reason than as a “test of one of the members’ mettle before being elected ‘captain of the lair’” (p303). 

The group was also, it seems, too violent even for its own good. Thus, it subsequently broke up when, in a dispute over financing and the misappropriation of funds, Carter shot two fellow members, yet, for whatever reason, never stood trial (Ibid.). 

Yet what Wade does not get around to mentioning is that Asa ‘Ace’ Carter was also, like Wallace, to later successfully reinvent himself, and achieve fame once again, this time as Forrest Carter, an ostensibly half-Native American author who penned such hugely successful novels as The Rebel Outlaw: Josey Wales (subsequently made into the successful motion picture, The Outlaw Josey Wales, directed by and starring Clint Eastwood) and The Education of Little Tree, an ostensible autobiography of growing up on an Indian reservation, and a book so sickeningly sentimental that it was even recommended and championed by none other than Oprah Winfrey! 

“The David Duke Show” 

By the 1970s, open support for white supremacy and segregation was in decline, even among white Southerners. This, together with Klansmen’s involvement in such atrocities as the 16th Street Baptist Church bombing, might have made it seem that the Klan brand was irretrievably damaged and in terminal decline, never again to play a prominent role in American social or political life. 

Yet, perhaps surprisingly, the Klan brand did manage one last hurrah in the 1970s, this time through the singular talents of one David Duke.

Duke was to turn the Klan’s infamy to his own advantage. Thus, his schtick was to use the provocative imagery of the Klan (white sheets, burning crosses) to attract media attention, but then, having attracted that attention, to come across as much more eloquent, reasonable, intelligent and clean-cut than anyone ever expected a Klansman to be – which, in truth, isn’t difficult. 

The result was a media circus that one disgruntled Klansman aptly dismissed as “The David Duke Show” (p373). 

It was the same trick that George Lincoln Rockwell had used a generation before, though, whereas Rockwell used Nazi imagery (e.g. swastikas, Nazi salutes) to attract media attention, Duke instead used the imagery of the Klan (e.g. white sheets, burning crosses).

If Duke was a successor to Rockwell, then Duke’s own contemporary equivalent, fulfilling a similar niche for the contemporary American media as the handsome, eloquent, go-to face of white nationalism, is surely Richard Spencer. Indeed, if rumours are to be believed, Spencer even has a similar penchant to Duke for seducing the wives and girlfriends of his colleagues and supporters. 

Such behaviour, along with his lack of organizational ability, was among the reasons that Duke alienated much of his erstwhile support, haemorrhaging members almost as fast as he attracted them. 

Many such defectors would go on to form rival groups, including Tom Metzger, a TV repairman, who split from Duke to form a more openly militant group calling itself White Aryan Resistance (known by the memorable backronym ‘WAR’), and who achieved some degree of media infamy by starring in multiple television documentaries and talk-shows, before being bankrupted by a legal verdict holding him liable for a murder in which he seems to have had literally no involvement.

However, for Wade, the most important defector was not Metzger but rather Bill Wilkinson, perhaps because, unlike Metzger, who, on splitting from Duke, abandoned the Klan name, Wilkinson was to set up a rival Klan group, successfully poaching members from Duke. 

However, lacking Duke’s eloquence and good looks, Wilkinson had instead to devise another strategy in order to attract media attention and members. The strategy he hit upon was that of “taking a public stance of unbridled violence” (p375). 

This, together with the fact that he was nevertheless able to evade prosecution, led to the allegation that he was a state agent and his Klan an FBI-sponsored honey trap, an allegation only reinforced by the recent revelation that he is now a multimillionaire in the multiracial utopia of Belize.

Besides openly advocating violence, Wilkinson also hit upon another means of attracting members. Thus, Wade reports, he “perfected a technique that other Klan leaders belittled as ‘ambulance chasing’” (p384): 

“Wilkinson… traversed the nation seeking racial ‘hot spots’… where he can come into a community, collect a large amount of initiation fees, sell a few robes, sell some guns… collect his money and be on his way to another ‘hot spot’” (p384). 

This is, of course, ironically, the exact same tactic employed by contemporary black race-baiters like Al Sharpton and the Black Lives Matter movement.

Owing partly to the violent activities of rival Klan groups from whom he could never hope to wholly disassociate himself, Duke himself eventually came to see the Klan baggage as a liability. 

One by one, he jettisoned these elements, styling himself National Director rather than Imperial Wizard, wearing a suit rather than a white sheet and eventually giving up even the Klan name itself. Finally, in what was widely perceived as an act of betrayal, Duke was recorded offering to sell his membership rolls to Wilkinson, his erstwhile rival and enemy (p389-90). 

In place of the Klan, Duke sought to set up what he hoped would be a more mainstream and respectable group, namely the National Association for the Advancement of White People or NAAWP, one of the many short-lived organizations to adopt this rather unimaginative name.[18]

Yet on abandoning the provocative Klan imagery that had first brought him to the attention of the media, Duke suddenly found media attention much harder to come by. Wade concludes:

“Duke had little chance at making a go of any Klan-like organization without the sheets and ‘illuminated crosses’. Without the mumbo-jumbo the lure of the Klan was considerably limited. Five years later the National Association for the Advancement of White People hadn’t got off the ground” (p390). 

Duke was eventually to re-achieve some degree of notoriety as a perennial candidate for elective office, initially with some success, even briefly holding a seat in the Louisiana state legislature and winning a majority of the white vote in his 1991 run for the governorship of Louisiana.

However, despite abandoning the Klan, Duke was never to escape its shadow. Thus, even forty years after abandoning the Klan name, Duke was still to find his name forever prefixed with the title ‘former Klansman’ or ‘former Grand Wizard’, an image he was never able to jettison. 

Today, still railing against “the Jews” to anyone still bothering to listen, his former good looks having long since faded, he cuts a lonely, rather pathetic figure, marginal even among the already marginal alt-right, and in his most recent electoral campaign, an unsuccessful run for a Senate seat, he managed to pick up only a miserly three percent of the vote. 

Un-American Americanism 

Where once Klansmen could unironically claim to stand for 100% Americanism, now, were not the very word ‘un-American‘ so tainted by McCarthyism as to sound almost un-American in itself, the Klan could almost be described as a quintessentially un-American organization. 

Indeed, interestingly, Wade reports that there was pressure for the House Un-American Activities Committee to investigate the Klan even before the committee was first formed. Thus, Wade laments: 

“The creation of the Dies Committee had been urged and supported by liberals and Nazi haters who wanted it used as a congressional forum against fascism. But in the hands of chairman Martin Dies of Texas, an arch-segregationist and his reactionary colleagues… the committee instead had become an anachronistic pack of witch hunters who harassed labor leaders… and discovered ‘communists’ in every imaginable shape and place” (p272).

Thus, Wade’s chief objection to the House Un-American Activities Committee seems to be, not that they became witch hunters, but that they chose to hunt, to his mind, the wrong coven of witches. Instead of going after the commies, they should have targeted the racists instead.

Yet what Wade does not mention is that perhaps the most prominent of the “liberals and Nazi haters” who advocated for the formation of the HUAC in order to persecute fascists and Klansmen was congressman Samuel Dickstein. As joint-chairman of the ‘Special Committee on Un-American Activities’, the precursor to the HUAC, from 1934 to 1937, Dickstein did indeed use the committee to target fascists, albeit mostly imaginary ones. Yet Dickstein was himself a paid Soviet agent, hence proving that McCarthyist concerns regarding communist infiltration and subversion at the highest levels of American public life were no delusion.

Ultimately, however, Wade was to have his wish. Thus, the Klan did indeed fall victim to the same illiberal and sometimes illegal FBI COINTELPRO programme of harassment as more fashionable victims on the left, such as Martin Luther King, the Nation of Islam, and the Black Panther Party (p361-3).

Indeed, according to Wade, it was actually the Klan who were the first victims of this campaign of FBI harassment, with more fashionable victims of the left being targeted only later. Thus, Wade writes:

After developing Cointelpro for the Klan, the FBI also used it against the Black Panthers, civil rights leaders, and antiwar demonstrators” (p363).[19]

Licence to Kill?

The Klan formerly enjoyed a reputation something like that of the Mafia, namely as a violent, dangerous group whom a person crossed at their peril, since, again like the Mafia, they had a proven track record of committing violent acts and getting away with it, largely through their corrupt links with local law enforcement in the South, and the unwillingness of all-white Southern juries to hand down convictions.[20]

Today, however, this reputation is long lost.

Indeed, if today a suspect in a racist murder were outed as a Klansman, this would likely unfairly prejudice a jury of any ethnic composition, anywhere in the country, against him, arguably to the point of denying him any chance of a fair trial. 

Thus, when aging Klansmen, such as Edgar Ray Killen, Thomas Blanton and Bobby Frank Cherry, were belatedly put on trial and convicted in the 2000s for killings committed in the early 1960s, some forty years previously, I rather suspect that they received no fairer a trial then than they did, or would have had, when put on trial before all-white juries in the 1960s American South. The only difference was that now the prejudice was against them rather than in their favour. 

Thus, today, we have gone full circle. Quite when the turning point was reached is a matter of conjecture.

Arguably, the last incident of Klansmen unfairly getting away with murder was the so-called Greensboro massacre in 1979, when Klansmen and other white nationalist activists shot up an anti-Klan rally organized by radical left Maoist labour agitators in North Carolina. 

Here, however, if the all-white jury was indeed prejudiced against the victims of this attack, it was not because they were blacks (all but one of the five people killed were actually white), but rather that they were ‘reds’ (i.e. communists).[21]

Today, then, the problem is not with all-white juries in the South refusing to convict Klansmen, but rather with majority-black juries in urban areas across America refusing to convict black defendants, especially on police evidence, no matter how strong the case against them, for example in the OJ case (see also Paved with Good Intentions: p43-4; p71-3). 

Klans Today 

Wade’s ‘The Fiery Cross’ was first published in 1987. It is therefore not, strictly speaking, a history of the Klan for the entirety of its existence right up to the present day, as Klan groups have continued to exist beyond this date, and indeed continue to exist in modern America even today. 

However, Wade’s book nevertheless seems complete, because such groups have long since ceased to have any real significance in American political, social and cultural life save as media bogeymen and folk devils.

In its brief 1920s heyday, the Second Klan could claim to play a key role in politics, even at the national level. 

Wade even claims, dubiously as it happens, that Warren G Harding was inducted into the organization in a special and secret White House ceremony while in office as President (p165).

Certainly, they helped defeat the candidacy of Al Smith, on account of his Catholicism, in 1924 and again in 1928 (p197-99). 

Some half-century later, during the 1980 presidential election campaign, the Klan again made a brief cameo, when each candidate sought to associate the Klan with their opponent, and thereby discredit him. Thus, Reagan was accused of insensitivity for praising “states’ rights”, to which Reagan retorted by accusing his opponent, inaccurately as it happens, of opening his campaign in the city that “gave birth to and is the parent body of the Ku Klux Klan”. 

This led Grand Dragon Bill Wilkinson to declare triumphantly: 

“We’re not an issue in this Presidential race because we’re insignificant” (p388). 

Yet what Wilkinson failed to grasp, or at least refused to publicly admit, was that the Klan’s role was now wholly negative. Neither candidate had any actual Klan links; each sought to link the Klan only with their opponent.

Whereas in the 1920s, candidates for elective office had actively and openly courted Klan votes, by the time of the 1980 Presidential election to have done so would have been electoral suicide. 

The Klan’s role, then, was as bogeymen and folk devils – roughly analogous to that played by Willie Horton in the 1988 presidential campaign; the role NAMBLA plays in the debate over gay rights; or, indeed, the role communists played during the First and Second Red Scares.[22]

Indeed, although in modern America lynching has fallen into disfavour, one suspects that, if it were ever to re-emerge as a popular American pastime and application of participatory democracy to the judicial process, then, among the first contemporary folk devils to be hoisted from a tree, alongside paedophiles and other classes of sex offender, would surely be Klansmen and other unreconstructed white racists. 

Likewise, today, if a group of Klansmen attempt to march in any major city in America then a police presence is required, not to protect innocent blacks, Jews and Catholics from rampaging Klansmen, but rather to protect the Klansmen themselves from angry assailants of all ethnicities, but mostly white. 

Indeed, the latter, styling themselves Antifa (an abbreviation of anti-fascist), despite their positively fascist opposition to freedom of speech, expression and assembly, have even taken, like Klansmen of old, to wearing masks to disguise their identities.

Perhaps anti-masking laws, first enacted to defeat the First Klan, and later resurrected to tackle later Klan revivals, must be revived once again, but this time employed, without prejudice, against the contemporary terror, and totalitarianism, of the militant left. 

Endnotes

[1] The only trace of possible illiteracy in the name is found in the misspelling of ‘clan’ as ‘klan’, presumably, again, for alliterative purposes, or perhaps reflecting a legitimate spelling in the nineteenth century when the group was founded.

[2] The popular alt-right meme that there are literally no white-on-black rapes is indeed untrue, and reflects the misreading of a table in a government report that actually involved only a small sample. In fact, the government does not currently release data on the prevalence of interracial rape. However, there is no doubt that black-on-white rape is much more common than white-on-black rape. Similarly, in the US prison system, where male-male rape is endemic, such assaults disproportionately involve non-white assaults on white inmates, as discussed by a Human Rights Watch report.

[3] The then-president Woodrow Wilson (who, in addition to being a politician, was also a noted historian of the Reconstruction period, of Southern background and sympathies, whose five-volume A History of the American People is actually quoted in several of the movie’s title cards) was later quoted as describing the movie, in some accounts the first moving picture that he had ever seen, as: 

“History [writ] with lightning. My only regret is that it is all so terribly true” (p126). 

However, during the controversy following the film’s release, Wilson himself later issued a denial that he had ever uttered any such words, insisting that he had only agreed to the viewing as a “courtesy extended to an old acquaintance” and that:

“The President was entirely unaware of the character of the play before it was presented and has at no time expressed his approbation of it” (p137).

This claim is, however, doubtful given the notoriety of the novel and play upon which the film had been based, and of its author, Thomas Dixon.

[4] Like so many other aspects of what is today considered Klan ritual, there is no evidence that cross-burning, or cross-lighting as devout Christian Klansmen prefer to call it, was ever practised by the original Reconstruction-era Klan. However, unlike other aspects of Klan ritualism, it had been invented, not by Simmons, but by novelist Thomas Dixon (by way of Walter Scott’s The Lady of the Lake), in imitation of an ostensible Scottish tradition, for his book, The Clansman: A Historical Romance of the Ku Klux Klan, upon which novel the movie Birth of a Nation was based. The new Klan was eventually granted an easement in perpetuity over Stone Mountain, allowing it to repeat this ritual.

[5] A conviction may be regarded as unsafe, and even as a wrongful conviction, even if we still believe the defendant might be guilty of the crime with which s/he is charged. After all, the burden is on the prosecution to prove that the defendant is guilty beyond reasonable doubt. If there remains reasonable doubt, then the defendant should not have been convicted. Steve Oney, who researched the case intensively for his book, And the Dead Shall Rise, concedes that “the case [against Frank] is not as feeble as most people say it is”, but nevertheless concludes that Frank was probably innocent, “but there is enough doubt to leave the door ajar” (Berger, Leo Frank Case Stirs Debate 100 Years After Jewish Lynch Victim’s Conviction, Forward, August 30, 2013).

[6] The ADL’s role in Wade’s narrative does not end here, since the ADL would later play a key role in fighting later incarnations of the Klan.

[7] Indeed, even from a modern racial egalitarian perspective, the era is arguably misnamed. After all, from a racial egalitarian perspective, the plantation era, when slavery was still practised, was surely worse, as surely was the period of bloody conflict between Native Americans and European colonists.

[8] Even among open racists, support for slavery is rare. Therefore, few American racists openly pine for a return to the plantation era. Segregation is, then, the next best thing, short of the actual expulsion of blacks back to Africa. Thus, it is common to hear white American racialists hold up early twentieth century America as a lost Eden. For example, many blame the supposed decline of the US public education system on desegregation.

[9] It is thus a myth that oppressed peoples invariably revolt against their oppressors. In reality, truly oppressed peoples, like blacks in the South in this period, tend to maintain a low profile precisely so as to avoid incurring the animosity of their oppressors. It is only when they sense weakness in their oppressors, or ostensible oppressors, that insurrections tend to occur. This then explains the paradox that black militancy in America seems to be inversely proportional to the actual extent of black oppression. Thus, the preeminent black leader in America at the height of the Jim Crow era was Booker T Washington, by modern standards a conservative, if not an outright Uncle Tom. Yet, today, when blacks are the beneficiaries, not the victims of discrimination, in the form of what is euphemistically called affirmative action, and it is whites who are ‘walking on eggshells’ and in fear of losing their jobs if they say something offensive to certain protected groups, American blacks are seemingly more militant and belligerent than ever, as the recent BLM riots have shown only too well. 

[10] This disavowal may have been disingenuous and reflected the fact that, by this time, Simmons had lost control of the then-lucrative cash-cow.

[11] Thus, in Ireland, the Protestant minority opposed ‘Home Rule’ for Ireland (a form of devolution, or self-government, that fell short of full independence) on the grounds that it would supposedly amount, in effect, to ‘Rome Rule’, due to the Catholic majority in Ireland.

[12] Interestingly, unlike the Klan, another initially anti-Catholic fraternal order, the Junior Order of United American Mechanics, successfully jettisoned both its earlier anti-Catholicism, and a similar association with violence, to reinvent itself as a respectable, non-sectarian beneficent group. However, the Klan was ultimately unable to achieve the same feat. 

[13] Of course, other forms of intergroup prejudice have been altogether more intransigent and long-lasting. Indeed, even anti-Catholicism itself had a long history. Pierre van den Berghe, in his excellent The Ethnic Phenomenon (which I have reviewed here and here), argues that assimilation is possible only in specific circumstances, namely when the group to be assimilated is: 

“Similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (The Ethnic Phenomenon: p219). 

Thus, those hoping that other forms of intergroup prejudice (e.g. anti-black sentiment in the USA, or indeed the continuing animosity between Catholics and Protestants in Northern Ireland) can be similarly overcome in coming years are well-advised not to hold their breath.

[14] In the many often graphic images of lynchings of black victims accessible via the internet, I have yet to find one in which the lynch-mobs are dressed in the ceremonial regalia of the Klan. On the contrary, far from wearing masks, the perpetrators often proudly face the camera, evidently feeling no fear of retribution or legal repercussions for their vigilantism.

[15] The question of the religious beliefs, if any, of Hitler is one of some controversy. Certainly, many leading figures in the National Socialist regime, including Martin Bormann and Alfred Rosenberg, were hostile to Christianity. Likewise, Hitler is reported as making anti-Christian statements in private, in both Hitler’s Table Talk, and by such confidants as Speer in his memoirs. Hitler talked of postponing his Kirchenkampf, or settling of accounts with the churches, until after the War, not wishing to fight enemies on multiple fronts.

[16] To clarify, it has been claimed that the Catholic Church faced persecution in National Socialist Germany. However, this persecution did not extend to individual Catholics, save those, including some priests, who opposed the regime and its policies, in which case the persecution reflected their political activism rather than their religion as such. Although Hitler was indeed hostile to Christianity, Catholicism very much included, Nazi conflict with the Church seems to have reflected primarily the fact that the Nazis, as a totalitarian regime, sought to control all aspects of society and culture in Germany, including those over which the Church had formerly claimed hegemony (e.g. education).

[17] In a later era, this was among the reasons given by David Duke in his autobiography for his abandonment of the Klan brand, since his own largely non-violent Klan faction was, he complained, invariably confused with, and tarred with the same brush as, other violent Klan factions through guilt by association.

[18] Duke later had a better idea for a name for his organization – namely, the National Organization For European American Rights, which he intended to be known by the memorable acronym, NO-FEAR. Unfortunately for him, however, the clothing company who had already registered this name as a trademark thought better of it and forced him to change the group’s name to the rather less memorable European-American Unity and Rights Organization (or EURO).

[19] Certainly, the Klan was henceforth a major target of the FBI. Indeed, the FBI were even accused, in a sting operation apparently funded by the ADL, of provoking one Klan bombing in which a woman, Kathy Ainsworth, herself one of the bombers and an active, militant Klanswoman, was killed (p363). The FBI was also implicated in another Klan killing, namely that of civil rights campaigner Viola Liuzzo, since an FBI agent was present with the killers in the car from which the fatal shots were fired (p347-54). Indeed, Wade reports that “about 6 percent of all Klansmen in the late 1960s worked for the FBI” (p362).

[20] Thus, former Klan leader David Duke, in his autobiographical My Awakening, reports that, when he and other arrestees were outed as Klansmen in a Louisiana prison, the black prisoners, far from attacking them, were initially cowed by the revelation: 

“At first, it seemed my media reputation intimidated them. The Klan had a reputation, although undeserved, like that of the mafia. Some of the Black inmates obviously thought that if they did anything to harm me, a “Godfather” type of character, they might soon end up with their feet in cement at the bottom of the Mississippi.”

[21] All but one of those killed, Wade reports, were leaders of the Maoist group responsible for the anti-Klan rally (p381). Wade uses this to show that the violence was premeditated, having been carefully planned and coordinated by the Klansmen and neo-Nazis. However, the fact that they were leading figures in this Maoist group would also likely mean that they were hardly innocent victims, at least in the eyes of conservative white jurors in North Carolina. In fact, the victims were indeed highly unsympathetic, not merely on account of their politics, but also on account of the fact that they had seemingly deliberately provoked the Klan attack, openly challenging the Klan to attend their provocatively titled ‘Death to the Klan’ rally (p379), and, though ultimately heavily outgunned, they themselves seem to have first initiated the violence by attacking the cars carrying Klansmen with placards (p381).

[22] This was the same role that the Klan was to play once again during the recent Trump presidential campaigns, as journalists trawled the South in search of grizzled, self-appointed Grand Dragons willing, presumably in return for a few drinks, to offer their unsolicited endorsement of the Trump candidature and thereby, in the journalists’ own minds, and that of some of their readers, discredit him through guilt-by-association.

‘Alas Poor Darwin’: How Stephen Jay Gould Became an Evolutionary Psychologist and Steven Rose a Scientific Racist

Steven Rose and Hillary Rose (eds.), Alas Poor Darwin: Arguments against Evolutionary Psychology, London: Jonathan Cape, 2000.

‘Alas Poor Darwin: Arguments against Evolutionary Psychology’ is an edited book composed of multiple essays by different authors, from different academic fields, brought together ostensibly for the purpose of critiquing the emerging science of evolutionary psychology. This multiple authorship makes it difficult to provide an overall review, since the authors’ approaches to the topic differ markedly.  

Indeed, the editors admit as much, conceding that the contributors “do not speak with a single voice” (p9). This seems to be a tacit admission that they frequently contradict one another. 

Thus, for example, feminist biologist Anne Fausto-Sterling attacks evolutionary psychologists such as Donald Symons as sexist for arguing that the female orgasm is a mere by-product of the male orgasm, and not an adaptation in itself, complaining that, according to Symons, women “did not even evolve their own orgasms” (p176). 

Yet, on the other hand, scientific charlatan Stephen Jay Gould criticizes evolutionary psychologists for the precise opposite offence, namely for (supposedly) viewing all human traits and behaviours as necessarily adaptations and ignoring the possibility of by-products (p103-4).

Meanwhile, some chapters are essentially irrelevant to the project of evolutionary psychology

For example, one, that of full-time ‘Dawkins-stalker’ (and part-time philosopher) Mary Midgley, critiques the quite separate approach of memetics.

Likewise, one singularly uninsightful chapter by ‘disability activist’ Tom Shakespeare and a colleague seems to say nothing with which the average evolutionary psychologist would likely disagree. Indeed, they seem to say little of substance at all. 

Only at the end of their chapter do they make the obligatory reference to just-so stories, and, more bizarrely, to the “single-gene determinism of the biological reductionists” (p203).

Yet, as anyone who has ever read any evolutionary psychology is surely aware, evolutionary psychologists, like other evolutionary biologists, emphasize to the point of repetitiveness that, while they may talk of ‘genes for’ certain characteristics as a form of scientific shorthand, nothing in their theories implies a one-to-one concordance between single genes and behaviours. 

Indeed, the irrelevance of some chapters to their supposed subject-matter (i.e. evolutionary psychology) makes one wonder whether some of the contributors to the volume have ever actually read any evolutionary psychology, or even any popularizations of the field – or whether their entire limited knowledge of the field was gained by reading critiques of evolutionary psychology by other contributors to the volume. 

Annette Karmiloff-Smith’s chapter, entitled ‘Why babies’ brains are not Swiss army knives’, is a critique of what she refers to as nativism, namely the belief that certain brain structures (or modules) are innately hardwired into the brain at birth.

This chapter, perhaps alone in the entire volume, may have value as a critique of some strands of evolutionary psychology.

Any analogy is imperfect; otherwise it would not be an analogy but rather an identity. However, given that even a modern micro-computer has been criticized as an inadequate model for the human brain, comparing the human brain to a Swiss army knife is obviously an analogy that should not be taken too far.

However, the nativist, massive modularity thesis that Karmiloff-Smith associates with evolutionary psychology, while indeed typical of what we might call the narrow ‘Tooby and Cosmides brand’ of evolutionary psychology, is rejected by many evolutionary psychologists (e.g. the authors of Human Evolutionary Psychology) and is not, in my view, integral to evolutionary psychology as a discipline or approach.

Instead, evolutionary psychology posits that behaviour has been shaped by natural selection to maximise the reproductive success of organisms in ancestral environments. It therefore allows us to bypass the proximate level of causation in the brain by recognising that, howsoever the brain is structured and produces behaviour in interaction with its environment, given that this brain evolved through a process of natural selection, it must be such as to produce behaviour which maximizes the reproductive success of its bearer, at least under ancestral conditions. (This is sometimes called the phenotypic gambit.) 

Stephen Jay Gould’s Deathbed Conversion?

Undoubtedly the best known, and arguably the most prestigious, contributor to the Roses’ volume is the famed palaeontologist and popular science writer Stephen Jay Gould. Indeed, such is his renown that Gould evidently did not feel it necessary to contribute an original chapter for this volume, instead simply recycling, and retitling, what appears to be a book review, previously published in The New York Review of Books (Gould 1997). 

This is a critical review of Darwin’s Dangerous Idea: Evolution and the Meanings of Life, a book by philosopher Daniel Dennett that is itself critical of Gould – making Gould’s review a form of academic self-defence. Neither the book nor the review deals primarily with the topic of evolutionary psychology, but rather with more general issues in evolutionary biology. 

Yet the most remarkable revelation of Gould’s chapter – especially given that it appears in a book ostensibly critiquing evolutionary psychology – is that the best-known and most widely-cited erstwhile opponent of evolutionary psychology is apparently no longer any such thing. 

On the contrary, he now claims in this essay: 

“‘Evolutionary psychology’… could be quite useful, if proponents would change their propensity for cultism and ultra-Darwinian fealty for a healthy dose of modesty” (p98). 

Indeed, even more remarkably, Gould even acknowledges: 

“The most promising theory of evolutionary psychology [is] the recognition that differing Darwinian requirements for males and females imply distinct adaptive behaviors centred on male advantage in spreading sperm as widely as possible… and female strategy for extracting time and attention from males… [which] probably does underlie some different, and broadly general, emotional propensities of human males and females” (p102). 

In other words, it seems that Gould now accepts the position of evolutionary psychologists in that most controversial of areas – innate sex differences.

In this context, I am reminded of John Tooby and Leda Cosmides’s observation that critics of evolutionary psychology, in the course of their attacks on evolutionary psychology, often make concessions that, if made in any context other than that of an attack on evolutionary psychology, would cause them to themselves be labelled (and attacked) as evolutionary psychologists (Tooby and Cosmides 2000). 

Nevertheless, Gould’s backtracking is a welcome development, notwithstanding his usual arrogant tone.[1]

Given that he passed away only a couple of years after the current volume was published, one might almost, with only slight hyperbole, characterise his backtracking as a deathbed conversion. 

Ultra-Darwinism? Hyper-Adaptationism?

On the other hand, Gould’s criticisms of evolutionary psychology have not evolved at all but merely retread familiar gripes which evolutionary psychologists (and indeed so-called sociobiologists before them) dealt with decades ago. 

For example, he accuses evolutionary psychologists of viewing every human trait as adaptive and ignoring the possibility of by-products (p103-4). 

However, this claim is easily rebutted by simply reading the primary literature in the field. 

Thus, for example, Martin Daly and Margo Wilson view the high rate of abuse perpetrated by stepparents, not as itself adaptive, but as a by-product of the adaptive tendency for stepparents to care less for their stepchildren than they would for their biological children (see The Truth about Cinderella: which I have reviewed here).  

Similarly, Donald Symons argued that the female orgasm is not itself adaptive, but rather is merely a by-product of the male orgasm, just as male nipples are a non-adaptive by-product of female nipples (see The Evolution of Human Sexuality: which I have reviewed here).  

Meanwhile, Randy Thornhill and Craig Palmer are divided as to whether human rape is adaptive or merely a by-product of men’s greater desire for commitment-free promiscuous sex (A Natural History of Rape: which I have reviewed here). 

However, unlike Gould himself, evolutionary psychologists generally prefer the term ‘by-product’ to Gould’s unhelpful coinage ‘spandrel’. The former term is readily intelligible to any educated person fluent in English. Gould’s preferred term is needless obfuscation. 

As emphasized by Richard Dawkins, the invention of jargon to baffle non-specialists (e.g. referring to animal rape as “forced copulation” as the Roses advocate: p2) is the preserve of fields suffering from physics-envy, according to ‘Dawkins’ First Law of the Conservation of Difficulty’, whereby “obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity”. 

Untestable? Unfalsifiable?

Gould’s other main criticism of evolutionary psychology is his claim that sociobiological theories are inherently untestable and unfalsifiable – i.e. what Gould calls Just So Stories.

However, one only has to flick through copies of journals like Evolution and Human Behavior, Human Nature, Evolutionary Psychology, Evolutionary Psychological Science, and many other journals that regularly publish research in evolutionary psychology, to see evolutionary psychological theories being tested, and indeed often falsified, every month. 

As evidence for the supposed unfalsifiability of sociobiological theories, Gould cites, not such primary research literature, but rather a work of popular science, namely Robert Wright’s The Moral Animal.

Thus, he quotes Robert Wright as asserting in this book that our “sweet tooth” (i.e. taste for sugar), although maladaptive in the contemporary West because it leads to obesity, diabetes and heart disease, was nevertheless adaptive in ancestral environments (i.e. the EEA) where, as Wright put it, “fruit existed but candy didn’t” (The Moral Animal: p67). 

Yet, Gould protests indignantly, in support of this claim, Wright cites “no paleontological data about ancestral feeding” (p100). 

However, Wright is a popular science writer, not an academic researcher, and his book, The Moral Animal, for all its many virtues, is a work of popular science. As such, Wright, unlike someone writing a scientific paper, cannot be expected to cite a source for every claim he makes. 

Moreover, is Gould, a palaeontologist, really so ignorant of human history that he seriously believes we really need “paleontological data” in order to demonstrate that fruit is not a recent invention but that candy is? Is this really the best example he can come up with? 

From ‘Straw Men’ to Fabricated Quotations 

Rather than arguing against the actual theories of evolutionary psychologists, contributors to ‘Alas Poor Darwin’ instead resort to the easier option of misrepresenting these theories, so as to make the task of arguing against them less arduous. This is, of course, the familiar rhetorical tactic of constructing a straw man.

In the case of co-editor Hilary Rose, this crosses the line from rhetorical deceit to outright defamation of character when, on p116, she falsely attributes to sociobiologist David Barash an offensive quotation committing the naturalistic fallacy by purporting to justify rape by reference to its adaptive function.

Yet Barash simply does not say the words she attributes to him on the page she cites (or any other page) in Whisperings Within, the book from which the quotation purports to be drawn. (I know, because I own a copy of said book.) 

Rather, after a discussion of the adaptive function of rape in ducks, Barash merely tentatively ventures that, although vastly more complex, human rape may serve an analogous evolutionary function (Whisperings Within: p55). 

Is Steven Rose a Scientific Racist? 

As for Steven Rose, the book’s other editor, unlike Gould, he does not repent his sins and convert to evolutionary psychology. However, in maintaining his evangelical crusade against evolutionary psychology, sociobiology and all related heresies, Rose inadvertently undergoes a conversion that is, in many ways, even more dramatic and far-reaching in its consequences. 

To understand why, we must examine Rose’s position in more depth. 

Steven Rose, it goes almost without saying, is not a creationist. On the contrary, he is, in addition to his popular science writing and leftist political activism, a working neuroscientist who very much accepts Darwin’s theory of evolution. 

Rose is therefore obliged to reconcile his opposition to evolutionary psychology with the recognition that the brain is, like the body, a product of evolution. 

Ironically, this leads him to employ evolutionary arguments against evolutionary psychology. 

For example, Rose mounts an evolutionary defence of the largely discredited theory of group selection, whereby it is contended that traits sometimes evolve, not because they increase the fitness of the individual possessing them, but rather because they aid the survival of the group of which s/he is a member, even at a cost to the fitness of the individual themselves (p257-9). 

Indeed, Rose goes even further, asserting: 

“Selection can occur at even higher levels – that of the species for example” (p258). 

Similarly, in the book’s introduction, co-authored with his wife Hilary, the Roses dismiss the importance of the evolutionary psychological concept of the ‘environment of evolutionary adaptedness’ (or ‘EEA’).[2] 

This term refers to the idea that we evolved to maximise our reproductive success, not in the sort of contemporary Western societies in which we now so often find ourselves, but rather in the sorts of environments in which our ancestors spent most of our evolutionary history, namely as Stone Age hunter-gatherers. 

On this view, much behaviour in modern Western societies is recognized as maladaptive, reflecting a mismatch between the environment to which we are adapted and that in which we find ourselves, simply because we have not had sufficient time to evolve psychological mechanisms for dealing with such ‘evolutionary novelties’ as contraception, paternity tests and chocolate bars. 

However, the Roses argue that evolution can occur much faster than this. Thus, they point to: 

“The huge changes produced by artificial selection by humans among domesticated animals – cattle, dogs and… pigeons – in only a few generations. Indeed, unaided natural selection in Darwin’s own Islands, the Galapagos, studied over several decades by the Grants is enough to produce significant changes in the birds’ beaks and feeding habits in response to climate change” (p1-2). 

Finally, Rose rejects the ‘modular’ model of the human mind championed by some evolutionary psychologists, whereby the brain is conceptualized as being composed of many separate ‘domain-specific modules’, each specialized for a particular class of adaptive problem faced by ancestral humans.  

As evidence against this thesis, Rose points to the absence of a direct one-to-one relationship between the modules postulated by evolutionary psychologists and actual regions of the brain as identified by neuroscientists (p260-2). 

“Whether such modules are more than theoretical entities is unclear, at least to most neuroscientists. Indeed evolutionary psychologists such as Pinker go to some lengths to make it clear that the ‘mental modules’ they invent do not, or at least do not necessarily, map onto specific brain structures” (p260). 

Thus, Rose protests: 

“Evolutionary psychology theorists, who… are not themselves neuroscientists, or even, by and large, biologists, show as great a disdain for relating their theoretical concepts to material brains as did the now discredited behaviorists they so despise” (p261). 

Yet there is an irony here – namely, in employing evolutionary arguments against evolutionary psychology (i.e. emphasizing the importance of group selection and of recently evolved adaptations), Rose, unlike many of his co-contributors, actually implicitly accepts the idea of an evolutionary approach to understanding human behaviour and psychology. 

In other words, if Rose is indeed right about these matters (group selection, recently evolved adaptations and domain general psychological mechanisms), this would suggest, not the abandonment of an evolutionary approach in psychology, but rather the need to develop a new evolutionary psychology that gives appropriate weight to such factors as group selection, recently evolved adaptations and domain general psychological mechanisms

Actually, however, as we will see, this ‘new’ evolutionary psychology may not be all that new and Rose may find he has unlikely bedfellows in this endeavour. 

Thus, group selection – which tends to imply that conflict between groups such as races and ethnic groups is inevitable – has already been defended by race theorists such as Philippe Rushton and Kevin MacDonald.

For example, Rushton, author of Race, Evolution and Behavior (which I have reviewed here), a notorious racial theorist known for arguing that black people are genetically predisposed to crime, promiscuity and low IQ, has also authored papers with titles like ‘Genetic similarity, human altruism and group-selection’ (Rushton 1989) and ‘Genetic similarity theory, ethnocentrism, and group selection’ (Rushton 1998), which defend and draw on the concept of group selection to explain such behaviours as racism and ethnocentrism.

Similarly, Kevin MacDonald, a former professor of psychology widely accused of anti-Semitism, has also championed the theory of group selection, and has even developed a theory of cultural group selection to explain the survival and prospering of the Jewish people in diaspora in his book, A People That Shall Dwell Alone: Judaism as a Group Evolutionary Strategy (which I have reviewed here and here) and its more infamous, and theoretically flawed, sequel, The Culture of Critique (which I have reviewed here). 

Similarly, the claim that sufficient time has elapsed for significant evolutionary change to have occurred since the Stone Age (our species’ primary putative environment of evolutionary adaptedness) necessarily also entails recognition that sufficient time has also elapsed for different human populations, including different races, to have significantly diverged in, not just their physiology, but also their psychology, behaviour and cognitive ability.[3]

Finally, rejection of a modular conception of the human mind is consistent with an emphasis on what is perhaps the ultimate domain-general factor in human cognition, namely the general factor of intelligence, as championed by psychometricians, behavioural geneticists, intelligence researchers and race theorists such as Arthur Jensen, Richard Lynn, Chris Brand, Philippe Rushton and the authors of The Bell Curve (which I have reviewed here, here and here), who believe that individuals and groups differ in intellectual ability, that some individuals and groups are more intelligent across the board, and that these differences are partly genetic in origin.

Thus, Kevin MacDonald specifically criticizes mainstream evolutionary psychology for its failure to give due weight to the importance of domain-general mechanisms, in particular general intelligence (MacDonald 1991). 

Indeed, Rose himself elsewhere acknowledges that: 

“The insistence of evolutionary psychology theorists on modularity puts a strain on their otherwise heaven-made alliance with behaviour geneticists” (p261).[4]

Thus, in rejecting the tenets of mainstream evolutionary psychology, Rose inadvertently advocates, not so much a new form of evolutionary psychology, as rather an old form of scientific racism.

Of course, Steven Rose is not a racist. On the contrary, he has built a minor, if undistinguished, literary career smearing those he characterises as such.[5]

However, descending to Rose’s own level of argumentation (e.g. employing guilt by association and argumenta ad hominem), he is easily characterised as such. After all, his arguments against the concept of the EEA, and in favour of group-selectionism directly echo those employed by the very scientific racists (e.g. Rushton) whom Rose has built a minor literary career out of attacking. 

Thus, by rejecting many claims of mainstream evolutionary psychologists – about the environment of evolutionary adaptedness, about group-selectionism and about modularity – Rose ironically plays into the hands of the very ‘scientific racists’ whom he purportedly opposes.

Thus, if his friend and comrade Stephen Jay Gould, in his own recycled contribution to ‘Alas Poor Darwin’, underwent a surprising but welcome deathbed conversion to evolutionary psychology, then Steven Rose’s transformation proves even more dramatic but rather less welcome. He might, moreover, find his new bedfellows less good company than he expected. 

Endnotes

[1] Throughout his essay, Gould never admits that he was wrong with respect to sociobiology, the then-emerging approach that came to dominate research in animal behaviour but was rashly rejected by Gould and other leftist activists. Rather, he seems to imply, even if he does not directly state, that it was his constructive criticism of sociobiology which led to advances in the field, and indeed to the development of evolutionary psychology from human sociobiology. Yet, as anyone who followed the controversies over sociobiology and evolutionary psychology, and read Gould’s writings on these topics, will be aware, this is far from the case.

[2] Actually, the term environment of evolutionary adaptedness was coined, not by evolutionary psychologists, but rather by psychoanalyst and attachment theorist, John Bowlby.

[3] This is a topic addressed in such controversial recent books as Cochran and Harpending’s The 10,000 Year Explosion: How Civilization Accelerated Human Evolution and Nicholas Wade’s A Troublesome Inheritance: Genes, Race and Human History. It is also a central theme of Vincent Sarich and Frank Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here). Papers discussing the significance of recent and divergent evolution in different populations for the underlying assumptions of evolutionary psychology include Winegard et al (2017) and Frost (2011). Evolutionary psychologists in the 1990s and 2000s, especially those affiliated with Tooby and Cosmides at UCSB, were perhaps guilty of associating the environment of evolutionary adaptedness too narrowly with Pleistocene hunter-gatherers on the African savanna. Thus, Tooby and Cosmides have written that “our modern skulls house a stone age mind”. However, while embracing this catchy if misleading soundbite, in the same article Tooby and Cosmides also write, more accurately:

“The environment of evolutionary adaptedness, or EEA, is not a place or time. It is the statistical composite of selection pressures that caused the design of an adaptation. Thus the EEA for one adaptation may be different from that for another” (Cosmides and Tooby 1997).

Thus, the EEA is not a single time and place that a researcher could visit with the aid of a map, a compass, a research grant and a time machine. Rather, it is a range of environments, and the relevant range of environments may differ in respect of different adaptations.

[4] This reference to the “otherwise heaven-made alliance” between evolutionary psychologists and behavioural geneticists, incidentally, contradicts Rose’s own acknowledgement, made just a few pages earlier, that:

“Evolutionary psychologists are often at pains to distinguish themselves from behaviour geneticists and there is some hostility between the two” (p248). 

As we have seen, consistency is not Steven Rose’s strong point. See Kanazawa 2004 for the alternative view that general intelligence is itself, paradoxically, a domain-specific module.

[5] I feel the need to emphasise that Rose is not a racist, not least for fear that he might sue me for defamation if I suggest otherwise. And if you think the idea of a professor suing some random, obscure blogger for a blog post is preposterous, then just remember – this is a man who once threatened legal action against the publishers of a comic book – yes, a comic book – and forced the publishers to append an apology to some 10,000 copies of the said comic book, for supposedly misrepresenting his views in a speech bubble, complaining “The author had literally [sic] put into my mouth a completely fatuous statement” (Brown 1999) – an ironic complaint given the fabricated quotation, of a genuinely defamatory nature, attributed to David Barash by Rose’s own wife Hilary in the current volume (see above), for which Rose himself, as co-editor, is vicariously responsible. Rose is an open opponent of free speech. Indeed, Rose even stands accused by German geneticist and intelligence researcher Volkmar Weiss of actively instigating the infamously repressive communist regime in East Germany (Weiss 1991). This is, moreover, an allegation that Rose has, to my knowledge, never denied or brought legal action in respect of, despite his known penchant for threatening legal action against the publishers of comic books.

References 

Brown (1999) Origins of the specious, Guardian, November 30.
Frost (2011) Human nature or human natures? Futures 43(8): 740-74.
Gould (1997) Darwinian Fundamentalism, New York Review of Books, June 12.
Kanazawa, (2004) General Intelligence as a Domain-Specific Module, Psychological Review 111(2):512-523. 
MacDonald (1991) A perspective on Darwinian psychology: The importance of domain-general mechanisms, plasticity, and individual differences, Ethology and Sociobiology 12(6): 449-480.
Rushton (1989) Genetic similarity, human altruism and group-selection, Behavioral and Brain Sciences 12(3): 503-59.
Rushton (1998). Genetic similarity theory, ethnocentrism, and group selection. In I. Eibl-Eibesfeldt & F. K. Salter (Eds.), Indoctrinability, Ideology and Warfare: Evolutionary Perspectives (pp369-388). Oxford: Berghahn Books.
Tooby & Cosmides (1997) Evolutionary Psychology: A Primer, published at the Center for Evolutionary Psychology website, UCSB.
Tooby & Cosmides (2000) Unpublished Letter to the Editor of New Republic, published at the Center for Evolutionary Psychology website, UCSB.
Weiss (1991) It could be Neo-Lysenkoism, if there was ever a break in continuity! Mankind Quarterly 31: 231-253.
Winegard et al (2017) Human Biological and Psychological Diversity. Evolutionary Psychological Science 3: 159–180.

Edward O Wilson’s ‘Sociobiology: The New Synthesis’: A Book Much Read About, But Rarely Actually Read

Edward O Wilson, Sociobiology: The New Synthesis (Cambridge: Belknap Press of Harvard University Press, 1975).

Sociobiology – The Field That Dare Not Speak its Name? 

From its first publication in 1975, the reception accorded Edward O Wilson’s ‘Sociobiology: The New Synthesis’ has been divided. 

On the one hand, among biologists, especially specialists in the fields of ethology, zoology and animal behaviour, the reception was almost universally laudatory. Indeed, my 25th Anniversary Edition even proudly proclaims on the cover that it was voted by officers and fellows of the Animal Behavior Society as the most important book on animal behaviour ever published, supplanting even Darwin’s own seminal The Expression of the Emotions in Man and Animals. 

However, on the other side of the university campus, in social science departments, the reaction was very different. 

Indeed, the hostility that the book provoked was such that ‘sociobiology’ became almost a dirty word in the social sciences, and ultimately throughout the academy, to such an extent that ultimately the term fell into disuse (save as a term of abuse) and was replaced by largely synonymous euphemisms like behavioral ecology and evolutionary psychology.[1]

Sociobiology thus became, in academia, ‘the field that dare not speak its name’. 

Similarly, within the social sciences, even those researchers whose work carried on the sociobiological approach in all but name almost always played down the extent of their debt to Wilson himself. 

Thus, books on evolutionary psychology typically begin with disclaimers acknowledging that the sociobiology of Wilson was, of course, crude and simplistic, and that their own approach is, of course, infinitely more sophisticated. 

Indeed, reading some recent works on evolutionary psychology, one could be forgiven for thinking that evolutionary approaches to understanding human behaviour began around 1989 with the work of Tooby and Cosmides

Defining the Field 

What then does the word ‘sociobiology’ mean? 

Today, as I have mentioned, the term has largely fallen into disuse, save among certain social scientists who seem to employ it as a rather indiscriminate term of abuse for any theory of human behaviour that they perceive as placing too great a weight on hereditary or biological factors, including many areas of research only tangentially connected with sociobiology as Wilson originally conceived of it (e.g. behavioral genetics).[2]

The term ‘sociobiology’ was not Wilson’s own coinage. It had occasionally been used by biologists before, albeit rarely. However, Wilson was responsible for popularizing it – and perhaps, in the long term, for unpopularizing it too, since, as we have seen, the term has largely fallen into disuse.[3] 

Wilson himself defined ‘sociobiology’ as: 

“The systematic study of the biological basis of all social behavior” (p4; p595). 

However, as the term was understood by other biologists, and indeed applied by Wilson himself, sociobiology came to be construed more narrowly. Thus, it was associated in particular with the question of why behaviours evolved and the evolutionary function they serve in promoting the reproductive success of the organism (i.e. just one of Tinbergen’s Four Questions). 

The hormonal, neuroscientific, or genetic causes of behaviours are just as much a part of “the biological basis of behavior” as are the ultimate evolutionary functions of behaviour. However, these lie outside the scope of sociobiology as the term was usually understood. 

Indeed, Wilson himself admitted as much, writing in ‘Sociobiology: The New Synthesis’ itself of how: 

“Behavioral biology… is now emerging as two distinct disciplines centered on neurophysiology and… sociobiology” (p6). 

Yet, in another sense, Wilson’s definition of the field was also too narrow. 

Thus, behavioural ecologists have come to study all forms of behaviour, not just social behaviour.  

For example, optimal foraging theory is a major subfield within behavioural ecology (the successor field to sociobiology), but concerns feeding behaviour, which may be an entirely solitary, non-social activity. 

Indeed, even some aspects of an organism’s physiology (as distinct from behaviour) have come to be seen as within the purview of sociobiology (e.g. the evolution of the peacock’s tail). 

A Book Much Read About, But Rarely Actually Read 

‘Sociobiology: The New Synthesis’ was a massive tome, numbering almost 700 pages. 

As Wilson proudly proclaims in his glossary, it was: 

“Written with the broadest possible audience in mind and most of it can be read with full understanding by any intelligent person whether or not he or she has had any formal training in science” (p577). 

Unfortunately, however, the sheer size of the work alone was probably enough to deter most such readers long before they reached p577 where these words appear. 

Indeed, I suspect the very size of the book was a factor in explaining the almost universally hostile reception that the book received among social scientists. 

In short, the book was so large that the vast majority of social scientists had neither the time nor the inclination to actually read it for themselves, especially since a cursory flick through its pages showed that the vast majority of those pages seemed to be concerned with the behaviour of species other than humans, and hence, as they saw it, of little relevance to their own work. 

Instead, therefore, their entire knowledge of sociobiology was filtered through to them via the critiques of the approach authored by other social scientists, themselves mostly hostile to sociobiology, who presented a straw man caricature of what sociobiology actually represented. 

Indeed, the caricature of sociobiology presented by these authors is so distorted that, reading some of these critiques, one often gets the impression that included among those social scientists not bothering to read the book for themselves were most of the social scientists nevertheless taking it upon themselves to write critiques of it. 

Meanwhile, the fact that the field was so obviously misguided (as indeed it often was in the caricatured form presented in the critiques) gave most social scientists yet another reason not to bother wading through its 700 or so pages for themselves. 

As a result, among sociologists, psychologists, anthropologists, public intellectuals, and other such ‘professional damned fools’, as well as the wider semi-educated reading public, ‘Sociobiology: The New Synthesis’ became a book much read about – but rarely actually read (at least in full). 

As a consequence, as with other books falling into this category (e.g. the Bible and The Bell Curve) many myths have emerged regarding its contents which are quite contradicted on actually taking the time to read it for oneself. 

The Many Myths of Sociobiology 

Perhaps the foremost myth is that sociobiology was primarily a theory of human behaviour. In fact, as is revealed by even a cursory flick through the pages of Wilson’s book, sociobiology was, first and foremost, a theoretical approach to understanding animal behaviour. 

Indeed, Wilson’s decision to attempt to apply sociobiological theory to humans as well was, it seems, almost something of an afterthought, and necessitated by his desire to provide a comprehensive overview of the behaviour of all social animals, humans included. 
 
This is connected to the second myth – namely, that sociobiology was Wilson’s own theory. In fact, rather than a single theory, sociobiology is better viewed as a particular approach to a field of study, the field in question being animal behaviour. 
 
Moreover, far from being Wilson’s own theory, the major advances in the understanding of animal behaviour that gave rise to what came to be referred to as ‘sociobiology’ were made in the main by biologists other than Wilson himself.  
 
Thus, it was William Hamilton who first formulated inclusive fitness theory (which came to be known as the theory of kin selection); John Maynard Smith who first introduced economic models and game theory into behavioural biology; George C Williams who was responsible for displacing crude group-selectionism in favour of a new focus on the gene itself as the principal unit of selection; while Robert Trivers was responsible for such theories as reciprocal altruism, parent-offspring conflict and differential parental investment theory. 
 
Instead, Wilson’s key role was to bring the various strands of the emerging field together, give it a name and, in the process, take far more than his fair share of the resulting flak. 
 
Thus, far from being the maverick theory of a single individual, what came to be known as ‘sociobiology’ was, if not based on biological theory already accepted at the time of publication, then at least based on theory that came to be recognised as mainstream within a few years thereafter. 
 
Controversy attached almost exclusively to the application of these same principles to explain human behaviour. 

Applying Sociobiology to Humans 

In respect of Wilson’s application of sociobiological theory to humans, misconceptions again abound. 

For example, it is often asserted that Wilson only extended his theory to apply to human behaviour in his infamous final chapter, entitled ‘Man: From Sociobiology to Sociology’. 

Actually, however, Wilson had discussed the possible application of sociobiological theory to humans several times in earlier chapters. 
 
Often, this was at the end of a chapter. For example, his chapter on “Roles and Castes” closes with a discussion of “Roles in Human Societies” (p312-3). Similarly, the final subsection of his chapter on “Aggression” is titled “Human Aggression” (p254-5). 
 
Other times, however, humans get a mention mid-chapter, as in Chapter Fifteen, titled ‘Sex and Society’, where Wilson discusses the association between adultery, cuckoldry and violent retribution in human societies, and presciently observes that “the implications for the study of humans” of Trivers’ theory of differential parental investment “are potentially great” (p327). 
 
Another misconception is that, while he may not have founded the approach that came to be known as sociobiology, it was Wilson who courted controversy, and bore most of the flak, because he was the first biologist brave, foolish, ambitious, farsighted or naïve enough to attempt to apply sociobiological theory to humans. 
 
Actually, however, this is untrue. For example, a large part of Robert Trivers’ seminal paper on reciprocal altruism published in 1971 dealt with reciprocal altruism in humans and with what are presumably specifically human moral emotions, such as guilt, gratitude, friendship and moralistic anger (Trivers 1971). 
 
However, Trivers’ work was published in the Journal of Theoretical Biology and therefore presumably never came to the attention of the leftist social scientists largely responsible for the furore over sociobiology. Being of the opinion that biological theory was wholly irrelevant to human behaviour, and hence to their own field, they were unlikely to be regular readers of the journal in question. 

Yet this is perhaps unfortunate since Trivers, unlike Wilson, had impeccable left-wing credentials, which may have deflected some of the overtly politicized criticism (and pitchers of water) that later came Wilson’s way. 

Reductionism vs Holism

Among the most familiar charges levelled against Wilson by his opponents within the social sciences, and by contemporary opponents of sociobiology and evolutionary psychology – alongside the familiar and time-worn charges of ‘biological determinism’ and ‘genetic determinism’ – is that sociobiology is inherently reductionist, something which is, they imply, very much a bad thing. 
 
It is therefore something of a surprise to find, in the opening pages of ‘Sociobiology: The New Synthesis’, Wilson defending “holism”, as represented, in Wilson’s view, by the field of sociobiology itself, as against what he terms “the triumphant reductionism of molecular biology” (p7). 
 
This passage is particularly surprising for anyone who has read Wilson’s more recent work Consilience: The Unity of Knowledge, where he launches a trenchant, unapologetic and, in my view, wholly convincing defence of “reductionism” as representing, not only “the cutting edge of science… breaking down nature into its constituent components” but moreover “the primary and essential activity of science” and hence at the very heart of the scientific method (Consilience: p59). 

Thus, in a quotable aphorism, Wilson concludes: 

“The love of complexity without reductionism makes art; the love of complexity with reductionism makes science” (Consilience: p59). 

Of course, whether ‘reductionism’ is a good or bad thing, as well as the extent to which sociobiology can be considered ‘reductionist’, ultimately depends on precisely how we define ‘reductionism’. Moreover, ‘reductionism’, however defined, is surely a matter of degree. 

Thus, philosopher Daniel Dennett, in his book Darwin’s Dangerous Idea, distinguishes what he calls “greedy reductionism”, which attempts to oversimplify the world (e.g. Skinnerian behaviourism, which seeks to explain all behaviours in terms of conditioning), from “good reductionism”, which attempts to understand it in all its complexity (i.e. good science).

On the other hand, ‘holistic’ is a word most often employed in defence of wholly unscientific approaches, such as so-called holistic medicine, and, for me, the word itself is almost always something of a red flag. 

Thus, the opponents of sociobiology, in using the term ‘reductionist’ as a criticism, are rejecting the whole notion of a scientific approach to understanding human behaviour. In its place, they offer only a vague, wishy-washy, untestable and frankly anti-scientific obscurantism, whereby any attempt to explain behaviour in terms of causes and effects is dismissed as reductionism and determinism

Yet explaining behaviour, whether the behaviour of organisms, atoms, molecules or chemical substances, in terms of causes and effects is the very essence, if not the very definition, of science. 

In other words, determinism (i.e. the belief that events are determined by causes) is not so much a finding of science as its basic underlying assumption.[4]

Yet Wilson’s own championing of “holism” in ‘Sociobiology: The New Synthesis’ can be made sense of in its historical context. 

In other words, just as Wilson’s defence of reductionism in ‘Consilience’ was a response to the so-called sociobiology debates of the 1970s and 80s, in which the charge of ‘reductionism’ was wielded indiscriminately by the opponents of sociobiology, so Wilson’s defence of holism in ‘Sociobiology: The New Synthesis’ itself must be understood in the context, not of the controversy that this work itself provoked (which Wilson was, at the time, unable to foresee), but rather of a controversy that preceded its publication. 

In particular, certain molecular biologists at Harvard, and perhaps elsewhere, led by the brilliant but abrasive James Watson, had come to the opinion that molecular biology was to be the only biology, and that traditional biology, fieldwork and experiments were positively passé. 

This controversy is rather less familiar to anyone outside of Harvard University’s biology department than the