Aurochs, Annuals, Africa and the Americas: A Review of Jared Diamond’s ‘Guns, Germs and Steel’

Jared Diamond, Guns, Germs, and Steel: The Fates of Human Societies (London: Vintage, 1998)[note]

‘Guns, Germs and Steel: The Fates of Human Societies’, authored by physiologist, ornithologist, anthropologist, evolutionary biologist, ecologist, bestselling popular science writer and all-round scientific polymath Jared Diamond, is an enormously ambitious work.

In it, Diamond seeks to answer what is perhaps both the greatest and the most controversial question in the entire field of human history – namely, why civilization, technological advancement and modernity emerged in the parts of the world that they did and not in other regions.

In doing so, he seeks to explain the rise of civilization, the conquest of continents and differential rates of development around the world throughout history right up to the present day – in short, more or less the entire course of human history and indeed much of prehistory as well.

Perhaps inevitably, Diamond fails in the hugely ambitious task he has set himself.

Yet, if Diamond ultimately fails in this project, the intellectual journey upon which he takes his readers is nevertheless a hugely enlightening and entertaining one, in which he introduces many novel ideas that are surely a part of the answer to the historical question he has posed.

Moreover, it is a hugely thought-provoking book and perhaps its chief value is in having once again opened up to public discussion and scholarly debate this most important, yet also challenging and taboo, of historical questions.

Diamond’s Theory

In addition to being a hugely ambitious work, ‘Guns, Germs and Steel’ is also a very long book.

This is perhaps inevitable given the scale of his ambition. After all, one is unlikely to be able to explain the rise of civilization throughout the entire world and the entirety of human history in just a few paragraphs.

However, despite its scale, ‘Guns, Germs and Steel’ is still, in my view, an unnecessarily overlong book, containing much repetition, as well as much material that is tangential or, at best, peripheral to its main theme and thesis.

Distilling its basic theory is therefore easier said than done.

Neither is the book’s title of much help in this direction.

Guns, germs and steel are indeed a part of the story of how some groups came to expand and ultimately dominate the globe—but they are only a relatively late element of this story, and certainly not the ultimate factors responsible.[1]

Instead, they represent just some of the means by which certain populations came to conquer, colonize, and displace other populations, although other technologies also played a role.

However, to attribute the conquest of continents to technologies such as guns and steel only raises the further question as to why it was certain peoples inhabiting certain regions who first developed and made use of these technologies and not other peoples in other regions.

Likewise, to attribute the depopulation of Native Americans and Australian Aboriginals to the germs carried by European colonizers may indeed be true, but it only raises further questions: why was it Europeans who invaded America and Australia, and not Native Americans and Australian Aboriginals who invaded Europe; and why did Europeans carry more virulent infectious diseases than Native Americans and Australian Aboriginals, such that it was the latter who were decimated by European diseases, rather than the European settlers being wiped out by indigenous germs?[2]

In short, these are proximate causes that explain how Europeans came to conquer their colonies, but not the ultimate reason why they were able to do so.

Yet the greater part of Diamond’s text is indeed devoted to answering this more fundamental question.

Diamond’s theory can be summarized thus:

The more advanced technological development of certain regions is traced ultimately to their domestication of plants and animals, or adoption of domestic species that were domesticated elsewhere.

Whether a population domesticated any plants or animals, or was able to adopt domestic species domesticated elsewhere, how many such species it was able to domesticate or adopt, and how early, depended on three factors:

  1. How many species, if any, suitable for domestication were available in the area they inhabited;
  2. Whether they were in contact with other regions where species had been domesticated, or which had adopted domestic species domesticated elsewhere; and
  3. Whether climatic factors permitted the adoption of these domesticates in their own locale.

The adoption of domestic plants permitted higher population densities, which increased both:

  1. The potential for technological innovation, and
  2. The number and virulence of infectious diseases with which a population was afflicted.

Technological innovation was greater in more densely populated regions simply because, the more people there are, the greater the chances that some of them will come up with useful technological innovations, while greater population density also facilitates the spread and diffusion of these technologies.

Meanwhile, infectious diseases came to be more virulent and deadly in more densely populated regions because, where people are in closer contact with one another, and with one another’s waste materials, pathogens spread from one person to another more easily, and can hence afford to evolve to become more deadly.

On the other hand, in less densely populated regions, infectious diseases pass between different people much less easily. Therefore, there is selection pressure against a pathogen evolving to become deadly to its host, or at least to kill its host too quickly, because, if the pathogen kills its host before it has managed to spread to any new hosts (as is more likely in sparsely populated regions), its genes usually perish along with the host.
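This intuition can be made more precise using a standard result from epidemiology (my formalization, not Diamond’s): a pathogen can spread only if its basic reproduction number, the expected number of new infections produced by a single infected host, exceeds one:

\[ R_0 = \beta \, c \, D > 1 \]

where $c$ is the rate of contact between hosts, $\beta$ the probability of transmission per contact, and $D$ the duration of infectiousness. A more virulent strain kills its host sooner, shrinking $D$; but in a dense population $c$ is large, so $R_0$ can remain above one even when $D$ is short. In a sparse population, by contrast, $c$ is small, and only strains that keep their hosts alive, and infectious, for longer can sustain $R_0 > 1$, which is precisely the selection pressure toward reduced virulence described above.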

In addition, Diamond argues, the domestication of animals itself also led to more infectious diseases, since many infectious diseases that afflict us today first spread to humans via contact with domestic animals.

However, if the rise of civilization, and conquest of continents, is indeed ultimately attributable to the availability of potentially domesticable species, and of already domesticated species from other regions that can be readily adopted in one’s own region, then this only raises several further questions, namely:  

  1. Why are some species evidently domesticable and others apparently not?
  2. Why were domesticable species present in some regions but not in others?
  3. What factors prevented the transfer of these domesticates to some other regions?

Here, as we will see, Diamond provides quite persuasive theoretical reasons why there was:

  1. A lack of domesticable plants in Africa; and
  2. A lack of domesticable animals in the Americas and Australasia.

However, at the same time, he fails to adequately explain why there was, and indeed that there was (supposedly):

  1. A lack of domesticable plants in America; and
  2. A lack of domesticable animals in Africa.

Domesticated Plants in Eurasia vs Sub-Saharan Africa

Thus, with respect to the fact that tropical Africans domesticated few plants, Diamond explains that annual plants, namely those which complete their entire lifecycles within a single year, are ideal for exploitation and domestication by humans and many have come to represent important parts of our staple diets. This, Diamond explains, is because:

“Within their mere one year of life, annual plants inevitably remain small herbs. Many of them instead put their energy into producing big seeds, which remain dormant during the dry season and are then ready to sprout when the rains come. Annual plants therefore waste little energy on making inedible wood or fibrous stems, like the body of trees and bushes. But many of the big seeds… are edible by humans. They constitute 6 of the modern world’s 12 major crops” (p136).

However, in the Tropics, which include most of sub-Saharan Africa, seasonal variation in climate is minimal, and temperatures are hence relatively stable all year round.

Therefore, annual plants are rare in sub-Saharan Africa and other tropical regions, since an organism is unlikely to evolve to calibrate its lifecycle in accordance with predictable annual (i.e. seasonal) changes in climate if annual changes in climate are minimal.

Meanwhile, those parts of sub-Saharan Africa where the climate was suitable for the cultivation of these crops, and which today enjoy high farm yields, namely Southern Africa, much of which enjoys a subtropical climate similar to that prevailing in the areas of Eurasia where agriculture first developed, were nevertheless unable to adopt crops domesticated in these latter regions prior to modern times. This was simply because they were not in sufficient contact with the Middle Eastern and North African civilizations, being separated from them by the Sahara and the Tropics, environments to which annual plants domesticated in the Middle East and Mediterranean region are wholly unsuited, and which such plants could therefore never penetrate prior to modern times.

Plant Domestication in the Americas

Unfortunately, however, while this explanation – namely the relative lack of annual plants in the Tropics – works quite well to explain the relative lack of plants domesticated in sub-Saharan Africa, it works much less well in explaining the rise of civilization in the Americas.

Thus, much of North America enjoys a subtropical or temperate climate similar to that prevailing in those regions of Eurasia where agriculture first developed and subsequently flourished. In these regions, given the seasonal variation in climate, annual plants are presumably common.

Yet, in these parts of North America, few important crops seem to have been domesticated, and advanced civilization was largely, if not wholly, absent.

Instead, the greatest civilizations of pre-Columbian America were centred squarely in the Tropics.

Thus, of what are generally regarded as the three greatest pre-Columbian civilizations of the Americas (and arguably the only pre-Columbian American cultures to qualify as true ‘civilizations’), the territories of two, namely the Mayan and Aztec, were entirely restricted to the Tropics, while the third, the Inca, though its vast empire expanded beyond the Tropics, also had its origins, capital and heartland within this climatic zone.

Animal Domestication in Eurasia vs the Americas

What then of domesticated animals, the other factor emphasized by Diamond?

Whereas, in respect of domesticated plants, Diamond has, as we have seen, an explanation that works well in explaining the relative absence of early agriculture in sub-Saharan Africa but rather fails to adequately explain the rise and spread (and the absence in some regions) of civilization in the Americas, with respect to domesticated animals, his explanation works rather better for the Americas (and indeed for Australasia) than it does for Africa.

Thus, Diamond persuasively explains that the number of animals of the sort suitable for domestication was reduced in the Americas (and Australasia) by the sudden and late arrival of humans on these landmasses.

Thus, whereas animal species of the Old World had long been subject to human predation, and hence evolved counter-adaptations, such as avoidance and fear of humans, animal species in the Americas were entirely unprepared for the sudden influx of humans with their already developed and formidable hunting skills.

“Most big mammals of Africa and Eurasia survived into modern times, because they had coevolved with protohumans for hundreds of thousands or millions of years. They thereby enjoyed ample time to evolve a fear of humans, as our ancestors’ initially poor hunting skills slowly improved” (p43).

In contrast, on the sudden arrival of humans in the Americas and Australasia, the indigenous fauna were suddenly confronted with anatomically modern, and comparatively technologically advanced, human hunters, with their already formidable hunting skills honed over thousands of years of evolution, cultural and biological, in Africa and Eurasia.

As evidence, he cites the extinctions that also occurred on isolated islands that had formerly been uninhabited by humans upon the arrival of the first human colonists, such as that of the famous “dodo of Mauritius”:

“On every one of the well-studied oceanic islands colonized in the prehistoric era, human colonization led to an extinction spasm whose victims included the moas of New Zealand, the giant lemurs of Madagascar, and the big flightless geese of Hawaii” (p43).[3]

Thus, he not unreasonably concludes, the same process of mass extinctions surely occurred, albeit on a much wider scale, among the indigenous fauna of the Americas and Australasia when humans first arrived en masse during prehistory.

This then explains the disappearance in America of so many large animals of the sort that might have been potentially domesticable at around the same time the first humans arrived there.[4]

As a general rule, predation rarely leads to the complete extinction of a species because, as the prey species decreases in number due to predation, predators either switch to an alternative source of food as a substitute for the prey that has become increasingly scarce, or themselves begin to decline in numbers for want of prey on whom to feed, either of which allows the prey species to recover in numbers.
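This dynamic is captured in the classic Lotka–Volterra predator–prey equations, a textbook sketch rather than anything Diamond himself invokes:

\[ \frac{dN}{dt} = rN - aNP, \qquad \frac{dP}{dt} = baNP - mP \]

where $N$ is prey density, $P$ predator density, $r$ the prey’s intrinsic growth rate, $a$ the rate of predation, $b$ the efficiency with which consumed prey are converted into new predators, and $m$ the predator death rate. As $N$ falls, the predators’ growth term $baNP$ drops below their mortality term $mP$, so $P$ declines in turn, relaxing predation pressure and allowing $N$ to recover; the two populations cycle rather than crashing to zero.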

However, among humans, hunting is often motivated as much by status competition as by caloric needs (Hawkes 1991).

This results in particular prestige being associated with claiming the carcass of an especially rare prey.

This means that, even when a prey species is on the verge of extinction, and continuing to hunt this species makes no sense in terms of optimal foraging theory, humans may continue to hunt down the last surviving members of a species.

Thus, humans have the unique and dubious distinction of having driven many species to extinction through predation.

Animal Domestication in Sub-Saharan Africa

Yet, if this sudden and late influx of formidable human hunters explains the relative lack of domesticable animals in the Americas and Australia, this explanation certainly cannot apply to Africa, which, far from experiencing a late influx of humans, is the region where anatomically modern humans first evolved.

Therefore, indigenous prey species in Africa will have gradually evolved counteradaptations to human predation, not least fear and avoidance of humans, at the same time that humans ourselves were gradually evolving to become such formidable hunters.

This is in stark contrast to the situation, not only in Australasia or the Americas, as emphasized by Diamond, but also, though Diamond does not mention this, even in Eurasia itself.

Thus, just as the indigenous fauna of Australasia and the Americas were wholly unprepared for the sudden influx of anatomically modern humans who quite suddenly arrived in their midst, so, in a much earlier period, the indigenous fauna of Eurasia were perhaps faced with much the same predicament, and mortal danger, when confronted with the first anatomically modern humans to venture beyond the African continent, whose already formidable hunting skills had been honed over many millennia of evolution in Africa.[5]

Indeed, the indigenous fauna of Eurasia may even have faced this mortal danger repeatedly, having been confronted with successive waves of hominids (Homo erectus, Homo heidelbergensis) that had successively migrated out of Africa, each of which was likely a formidable hunter, and each successive wave perhaps more formidable than that which preceded it.

It is therefore perhaps unsurprising that Africa is famous for its exotic large wild animals, which is why it is a popular destination for safari expeditions.

Thus, according to Diamond’s own reckoning, Africa is today home to almost as many species of large terrestrial mammal as is Eurasia, with 51 such species being indigenous to Africa, as compared to 72 that are found in Eurasia (p162). This, of course, means that, relative to its much smaller overall land mass, Africa actually has a much greater concentration of different large terrestrial mammalian species than does Eurasia.[6]

Why then were no indigenous species of animal, apart from Guinea fowl and donkeys, successfully domesticated in sub-Saharan Africa?

Diamond himself acknowledges the paradox, conceding:

“The lack of domestic mammals indigenous to sub-Saharan Africa is especially astonishing, since a main reason why tourists visit Africa today is to see its abundant and diverse wild animals” (p161).

Thus, he acknowledges:

“The percentage of [large terrestrial herbivorous or omnivorous mammals] actually domesticated [of those available in each region] is highest in Eurasia (18 percent) and is especially low in sub-Saharan Africa (no species domesticated out of 51 candidates!)” (p163).[7]

However, he explains away this paradox by insisting that, although there were indeed a large number of seemingly domesticable mammals in sub-Saharan Africa, it just so happens that, purely by chance, none of these species was in fact amenable to domestication.

Yet, rather than presenting any general, systematic reason why so few African animals were domesticable, Diamond simply argues that this was bad luck: for various quite different reasons, each species happened to possess one or more traits that absolutely precluded its successful domestication.

Given the large number of large terrestrial herbivores in Africa, this is unlikely purely on statistical grounds, as a rough calculation suggests.
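To make the point concrete, suppose, simplistically, that each candidate species independently had some fixed probability of proving domesticable, and take Diamond’s own Eurasian figure of roughly 18 percent as a baseline estimate of that probability. The chance that not one of Africa’s 51 candidates was domesticable would then be:

\[ P(\text{0 of 51}) = (1 - 0.18)^{51} \approx 4 \times 10^{-5} \]

or about one chance in 25,000. The independence assumption is of course crude, since related species share traits, but it does illustrate just how improbable the ‘bad luck’ explanation is on its face.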

Yet Diamond proceeds on a purely ad hoc, piecemeal basis, discussing why several of the more obvious candidates were in fact unsuitable for domestication.

His arguments, moreover, are not always entirely persuasive.

Zebras

A case in point is the zebra, a herbivorous odd-toed ungulate indigenous to much of East and Southern Africa.

Zebras, Diamond concedes, seem superficially eminently suitable for domestication.

Thus, zebras feed on grasses that we cannot consume. This means they do not compete with humans for food, but rather convert a food we cannot consume (namely, grass) into foods we can (namely, zebra meat and milk).

Moreover, zebras are closely related to horses and donkeys, whose wild ancestors have, of course, been successfully domesticated by humans. They also resemble horses and donkeys both morphologically and behaviourally.[8]

This suggests that, since donkeys and horses were, of course, successfully domesticated, surely zebras could have been domesticated in just the same way.

Indeed, with only a little imagination, one can easily envisage a domesticated zebra, not only being farmed for its milk and meat, but also being used as a draft and pack animal, and being ridden, both for transport and perhaps into battle.

However, despite superficial appearances, Diamond nevertheless insists that zebras are in fact wholly undomesticable, something he attributes primarily to what he terms their “nasty disposition” (p171-2).

Yet this argument strikes me as immediately suspect.

After all, African wild asses, the ancestors of domestic donkeys, are also known to be quite aggressive, at least with one another, while the wild ancestor of the domestic horse is now extinct, conveniently precluding a direct behavioural comparison.

Moreover, the fact that zebras, while never domesticated, have been successfully tamed, as Diamond himself acknowledges, seems to rule out Diamond’s claim that their “nasty disposition” alone prevents exploitation by humans.

Aurochs

Another comparison is even more devastating to Diamond’s argument – namely, that of a wild animal that, unlike the zebra, early humans did successfully domesticate, and which, like the wild ancestor of the horse, is now extinct, but whose highly aggressive disposition can be readily inferred, and whose physical formidability is surely not in doubt: the aurochs, the wild ancestor of domestic cattle.

Domestic bulls remain physically formidable and aggressive animals. This is among the reasons they are favoured in blood sports such as bull-baiting, bull riding and bullfighting.[9]

Yet the wild ancestor of cattle, the aurochs, was much larger and more formidable even than the domestic bull.

Moreover, it was surely far more aggressive as well, since the reduction of aggression, so as to make animals more easily manageable by humans, is an early, universal and important consequence of domestication.

Indeed, the domestication of the formidable aurochs perhaps even forces us to reconsider whether hippos and rhinos, the prospects for whose domestication Diamond, seemingly not unreasonably, dismisses in little more than a sentence, might also have been potentially domesticable.[10]

Moreover, aurochs, and indeed modern cattle, also evince yet another trait that, at least according to Diamond, supposedly precludes a species’ domestication – namely, that adult males cannot safely be kept in close proximity to one another in the same enclosure during the breeding season.

Thus, Diamond argues that “one of the main factors” that precluded the domestication of African antelope is the fact that:

“Males of those herds space themselves into territories and fight fiercely with each other when breeding” (p174).

But the same is also true of bulls. Thus, dairy farmers well know that it is not generally advised to keep more than one bull in a single enclosed field at any time, and, as with antelope, especially not during the breeding season.[11]

Yet, if this is true of modern domestic bulls, then it was undoubtedly even more true of the first wild aurochs to be tamed, prior to their full domestication, since, as we have already seen, the reduction of aggression is among the principal aims, and effects, of the domestication process.

Yet, according to Diamond, if males of a given species “space themselves into territories and fight fiercely with each other when breeding”, then this absolutely precludes any possibility of their successful domestication.

We are fortunate that our ancient Eurasian forebears, those who successfully domesticated the formidable wild auroch, never took the trouble to read Jared Diamond’s celebrated nonfiction bestseller, for, if they had, they would no doubt have abandoned the project as futile at the outset.

Modern Domestication

As final, definitive evidence that the failure of the indigenous peoples of Africa, Australasia and the Americas to domesticate their indigenous fauna and flora did not betoken any deficiency on their part as compared to Eurasians, but rather reflected the inherent unsuitability for domestication of the various species available, Diamond points to the inability of white colonists, and even of modern scientists, to domesticate any of the indigenous species of Africa, Australasia and the Americas that the natives had likewise failed to domesticate.

His argument seems to be that, if the white colonists in Africa also failed to domesticate zebras, then it cannot be racial factors that prevented indigenous black Africans from doing so; and, if even modern scientists, with all the modern technologies and scientific knowledge available to them, have proven unable to domesticate, say, zebras, then what hope did ancient Africans have? Clearly, zebras must simply be intrinsically undomesticable.

However, the problem with this argument is that the process of domestication is necessarily a gradual one, involving selective breeding over many generations. Therefore, even with the aid of modern scientific knowledge, by its very nature, it can occur only over many generations.[12]

Yet most of sub-Saharan Africa was colonized by Europeans only from the late nineteenth century. White western settlers therefore arrived in Africa only a few generations ago, and in Australia and the Americas only a few generations before that.[13]

Moreover, in most of sub-Saharan Africa, they were few in number, and mostly left during the process of decolonization only a few generations later, or soon thereafter.

Therefore, they had little time in which to domesticate any indigenous fauna or flora.

Perhaps more importantly, they also often had little incentive.

After all, why begin the slow, difficult and uncertain process of domesticating indigenous fauna and flora when they already had their own domesticates, first domesticated in Eurasia, which they could often readily transplant to their new homes?

For example, wheat, rice and barley were all first domesticated in Eurasia, but, transplanted to the Americas, they are now among the most important staple crops of North America.[14]

Shape, Axis and Orientation of Continents

What then are the factors that prevented ancient peoples from simply adopting the domesticates that had already been domesticated in other regions?

One important factor identified by Diamond is isolation. A people isolated from other civilizations or peoples by geographic barriers obviously cannot adopt the domesticates of the latter, and nor can they copy, reverse engineer and adopt their technologies, for the simple reason that they never come into contact with these technologies.

Thus, of all the world’s continents, Australia was undoubtedly the most isolated, being separated from Eurasia and the Americas by vast oceans.[15]

Yet, besides oceans, deserts, tundra and mountains, another less obvious factor identified by Diamond as precluding the successful transfer of domesticates in ancient times is the shape, axis and orientation of the various continents.

Thus, Eurasia, which Diamond identifies as a single cultural zone, and which, for his purposes, includes North Africa (p161), is, he observes, orientated primarily on an east-west axis, from Japan and Korea in the Far East, to Western Europe and the Maghreb thousands of miles away in the west.

Since climate varies primarily with latitude (i.e. distance from the equator, and from the North and South Poles), and not with longitude, this means that, despite its vast size, many distant regions of Eurasia nevertheless enjoy very similar climates, making the transfer of domesticates adapted to these climates between these different regions quite feasible.

Thus, many domesticates that were first domesticated in one part of the vast Eurasian landmass nevertheless came to be adopted in many other parts of Eurasia, far from the region of initial domestication, even in ancient times.

For example, barley, first domesticated in the Fertile Crescent, nevertheless came to be adopted as far away as Europe and East Asia in prehistoric times.

In contrast, Diamond argues that both Africa and the Americas are oriented primarily on a north-south axis.

Thus, North and South America, considered as a single continent, form a tall, thin landmass: very narrow in places, especially at the Isthmus of Panama, which, at its narrowest point, is less than fifty miles across, yet stretching, on a north-south axis, from the Arctic tundra of Northern Canada to Cape Horn at the southern tip of Chile, several thousands of miles away.

These different regions obviously have very different climates, making the transfer of domesticates across the continent in a northerly or southerly direction very difficult for plants and animals adapted to a specific climate.

The Axis and Orientation of Africa

Again, however, this explanation does not work quite as well for Africa as it does for the Americas.

Thus, once we exclude North Africa, which, as we have seen, Diamond classifies as a part of the Eurasian cultural zone, being culturally, biologically, racially and climatically continuous with the Middle East and Mediterranean region (p161), sub-Saharan Africa is not an especially tall, narrow continent. On the contrary, it is, at its maximum extent, as wide as it is tall.

Thus, the total distance from the Somalian coast in East Africa to the Senegalese coast in West Africa, the widest expanse of the continent, is about 4,500 miles, which is very similar to the distance from the southern edge of the Sahara Desert to the most southerly tip of South Africa.

This is also much wider than the greatest east-west expanse of either North or South America.

Thus, astrophysicist-turned-historian Michael Hart, assessing Diamond’s theory, observes:

“SubSaharan Africa, where a vast stretch of savannah (the Sudan, situated between the Sahara and the tropical rainforest) stretches 3500 miles in an east-west direction, from the highlands of Ethiopia to Senegal… [T]ransmission of technology and domesticates could — and repeatedly did — take place along the Sudan, and also across Ethiopia” (Understanding Human History: p176).

In short, Africa obviously does not enjoy the same vast East-West expanse as Eurasia, but, by the same token, it benefits from a vastly greater east-west expanse than does either North or South America.

Yet, in most respects, the pre-Columbian civilizations of the Americas seem to have been much more advanced than any indigenous sub-Saharan African culture.

The Axis and Orientation of the Americas

Indeed, if this explanation doesn’t work well for Africa, on closer inspection, it doesn’t work that well for America either.

While America is indeed a tall, thin landmass, two of the three greatest pre-Columbian civilizations of the Americas, namely the Aztec and Mayan, were both concentrated in Central America, where the continent is at its narrowest.

Being located in this part of the Americas, they were especially disadvantaged according to Diamond’s theory, as they were likely unable, for climatic reasons, to adopt any domesticates domesticated anywhere else in the American landmass.

However, despite this disadvantage, they nevertheless built the most impressive civilizations of the pre-Columbian Americas.

Conversely, the Americas are at their widest in North America, much of which also enjoys a temperate and subtropical climate ideal for agriculture and where advanced agriculture today thrives. Yet it was precisely in these regions that advanced civilization was largely if not entirely absent prior to the arrival of Europeans.

Relative Degrees of Cultural Isolation

Indeed, the achievements of the Mesoamerican civilizations, especially the Maya, were not only far more impressive than what was achieved elsewhere in the Americas, but also much more impressive than anything achieved in sub-Saharan Africa.

However, the civilizations of the Americas were also disadvantaged as compared to those of subSaharan Africa in yet another respect – namely whereas the civilizations of Mesoamerica were entirely cut off from cultural exchange with the civilizations of Eurasia for thousands of years, this was never true to anything like the same degree in sub-Saharan Africa.

On the contrary, trade and cultural exchange between sub-Saharan Africa and the peoples and civilizations of North Africa and the Middle East was extensive and longstanding, especially down the Nile Valley, across the Red Sea into the Horn of Africa, and down the Swahili coast of East Africa, and thence indirectly into the remainder of sub-Saharan Africa.[16]

In contrast, contact, let alone cultural diffusion, between Eurasian civilization and the emerging civilizations of Mesoamerica can be ruled out almost entirely.

The great civilizations of Mesoamerica emerged entirely independently of those in Eurasia.

As Hart reports:

“[S]ubSaharan Africa was not completely cut off from Eurasia, and some important aspects of Eurasian technology and culture did reach [subSaharan Africa]. Techniques of pottery-making, bronze working, and ironworking reached [subSaharan Africa] from the Middle East, as did the use of domesticated camels [whereas] domestic sheep and goats were introduced into [subSaharan Africa] from the Middle East by 4 kya. In contrast, prior to 1492, no Neolithic flora, fauna, or technology ever spread from the Old World to the Western Hemisphere” (Understanding Human History: p176).

Thus, anthropologist and physiologist John R Baker, who, in his magnum opus Race (reviewed here), even credits the remarkable Mayan civilization, alongside other impressive achievements (e.g. in astronomy), with being the first people to have independently ‘invented the concept of zero’, laments incredulously:

“How, on the environmental hypothesis, can one explain the fact that the Negrids inhabiting the tropical rain-forest of central Africa made not even a start in mathematics, while the Maya of the Guatemalan tropical rain-forest, equally cut off from all contacts with civilized people, made astounding progress in this subject, and at one time were actually ahead of the whole of the rest of the world in one important branch of it?” (Race: p527-8)[17]

Similarly, Hart concludes, incredulously:

“By 1000 AD, Mesoamerica was far more advanced than [subSaharan Africa] was, or ever had been. For example, Mesoamericans had originated writing on their own, had constructed many large stone structures, and had built large cities (rivaling any existing in Europe, and far larger than any in [subSaharan Africa]). Furthermore, the Mayan achievements in mathematics and astronomy dwarf any intellectual achievements in [subSaharan Africa]” (Understanding Human History: p177).

Elephants in the Room?

Why then does Diamond fail in his endeavour?

Partly this reflects the scale of the task he has set himself. As discussed above, Diamond aspires to do nothing less than explain the rise and spread of human civilizations across the entirety of the globe, throughout the entirety of human history and much of prehistory. It is therefore hardly a surprise that he ultimately fails in this gargantuan task.

Yet it hardly helps that Diamond restricts the range of factors that he is willing to consider.

Thus, he dismisses outright the idea that innate racial differences might play a role in explaining the different rates of technological and societal development among different races (see Understanding Human History and IQ And Global Inequality).

Admittedly, he does briefly allude to this possibility in his Prologue, but only so as to dismiss it summarily:

“Sound evidence for the existence of human differences in intelligence that parallel human differences in technology is lacking” (p19).

Yet, in his very next paragraph, he acknowledges the existence of an “enormous” literature in psychometrics, intelligence research and behaviour genetics that shows just that (p19).

However, he dismisses this literature, not only on scientific grounds, but also on moral grounds. Thus, he writes:

“The objection to such racist explanations is not just that they are loathsome, but also that they are wrong” (p19).

Yet, in saying that his objection is “not just” that these sorts of explanations are “loathsome”, he implicitly concedes that the supposed loathsomeness of such explanations is indeed part of his objection. In other words, Diamond has allowed his moral convictions to influence his scientific judgement, committing what Bernard Davis, in just this context, referred to as the moralistic fallacy.

Yet quite why such theories are supposedly so “loathsome” Diamond does not take the trouble to explain. He presumably takes it as given, or as self-evident, and assumes that his readership shares his moral revulsion, as most of them no doubt do.[18]

Yet we would do well to remember that, if ideas are indeed loathsome, this has no bearing on whether they are also true.

For example, many Christians considered the heliocentric astronomical model introduced by Copernicus and Galileo similarly objectionable; many still consider Darwin’s theory of natural selection objectionable. Yet this does not lead us to reject these theories.

The fact that many people die horrible painful deaths through no fault of their own may also be “loathsome”, but this does nothing to prevent it also being true.

Moreover, we must ask why anyone would consider theories of racial differences in intelligence so objectionable in the first place.

After all, almost everyone accepts that different individuals differ in intelligence. Few of us have any difficulty accepting that, for example, Albert Einstein is probably more intelligent than we will ever be. Why are group differences any more difficult to accept?

Maturity is coming to accept that you cannot be the best at everything, and indeed are unlikely to be the very best at anything.

Indeed, most of us already accept the existence of group differences in ability, certainly of sex differences, and even of racial differences, in other spheres. For example, most white Americans, I suspect, have little difficulty accepting that blacks are, on average, better at basketball, Kenyans better at marathons, and Asians better at math.

Accepting the existence of race differences in intelligence seems, in principle, little different.

Indeed, for most people, being intelligent isn’t all that important. Most men, I suspect, would rather be considered brave, strong and athletic than a brainy nerd, and most women, in my experience, would rather be considered pretty or beautiful than as what was once derisively termed a ‘bluestocking’.

As to the other part of Diamond’s objection to race realist theories, namely, not that they are “loathsome”, but also that they are “wrong”, we might question whether someone who has such an oddly visceral emotional reaction to a scientific theory as to refer to it as “loathsome” is really the person best suited to accurately assess its objective merits.

Yet, although he acknowledges the existence of an “enormous” literature in psychometrics, intelligence research and behaviour genetics on the question of race differences in intelligence and their alleged societal correlates, Diamond does not engage with this literature at all, but rather curtly dismisses this entire body of research in just a single paragraph (p19).

Given Diamond’s own cursory dismissal of this research tradition, a review of Diamond’s book is therefore not the place to discuss this body of scientific research.

However, for those interested, I have previously discussed this body of research here, here, here, here and, in the most depth, here.

With respect to the possible consequences of these differences for different levels of development and technological progress in different parts of the world, I discuss this matter here, here and here.

Conclusion

In conclusion, with regard to the topic of differential rates of development in different parts of the globe both today and throughout history, we still await a full explanation. This is a vast and important topic upon which much research, discussion and debate is surely yet to be conducted.

But one thing is surely certain—any complete, and completely convincing, explanation will surely have to consider, not only the geographic factors so monolithically focussed upon by Diamond, but also the full range of possible contributing factors, howsoever politically incorrect the latter might be.


[Note] Readers may be interested that I am now cross-posting this and future posts at https://contemporaryheretic.substack.com for those who prefer that format. [NB: Not THEcontemporaryheretic.substack.com, which address was already taken by someone else.] This specific post is accessible at: https://contemporaryheretic.substack.com/p/aurochs-annuals-africa-and-the-americas

[1] A more obvious, and perhaps more accurate, title might have been ‘Yali’s Question’, a reference to the question supposedly posed by a New Guinean native of Diamond’s acquaintance – namely, why the newly arrived European colonizers had so much more ‘cargo’ (i.e. imported technologies and other useful manufactured products) than did the indigenous people – which Diamond claims provoked him to investigate the ultimate causes of differential development in different regions of the globe and among different peoples.

[2] Whereas the diseases introduced by European colonizers brought death and destruction in their wake throughout the Americas, often travelling ahead of their original European hosts, and hence decimating indigenous populations long before Europeans even arrived in many parts of America, indigenous American diseases seem to have had much less of an impact on their European colonizers themselves. To my knowledge, the only major infectious disease thought to have been introduced into Europe from the Americas is syphilis, though even this is in doubt, as the origin of this once devastating disease is still much disputed.

[3] As for indigenous birds and mammals of the Galapagos Islands and Antarctic, which humans discovered and inhabited only in recent times, these species, Diamond reports, were saved from extinction only by “protective measures” imposed by early pioneering conservationists, and otherwise remain “incurably tame” (i.e. hopelessly unafraid of humans, and hence vulnerable to human predation) to this day (p43).

[4] Actually, although he makes very clear that this is the hypothesis that he favours, Diamond remains strictly agnostic regarding the causes of the mass extinctions that engulfed the Americas and Australasia around the time of the arrival of the first humans. Thus, he notes that an alternative theory is that “America’s big mammals instead became extinct because of climate changes at the end of the last Ice Age”, but comments sardonically:

“The Americas’ big animals had already survived the ends of 22 previous Ice Ages. Why did most of them pick the 23rd to expire in concert, in the presence of all those supposedly harmless humans? Why did they disappear in all habitats, not only in habitats that contracted but also in ones that greatly expanded at the end of the last Ice Age?” (p47).

Yet, despite this persuasive argument, Diamond nevertheless charitably concedes “the debate remains unresolved” (Ibid.).
Likewise, he reports, it has been argued that the indigenous fauna of Australia and New Guinea that died out around the time of the first arrival of humans in that continent may instead have “succumbed instead to a change in climate, such as a severe drought on the already chronically dry Australian continent” (p43).
Again, however, Diamond is skeptical, observing:

“I can’t fathom why Australia’s giants should have survived innumerable droughts in their tens of millions of years of Australian history, and then have chosen to drop dead almost simultaneously (at least on a time scale of millions of years) precisely and just coincidentally when the first humans arrived” (p43).

Those who doubt the human role in prehistoric mass extinctions typically attribute these theories to human arrogance and anthropocentrism. It is true, they observe, that humans today, with our advanced technologies (e.g. guns), are indeed formidable predators capable of wreaking unparalleled environmental damage. However, ancient hunter-gatherers were no doubt much less formidable.
This is indeed true. However, as compared, not to modern technologically advanced humans, but rather to other species of predator, our ancient ancestors may already have been formidable hunters, long before we evolved modern technologies such as guns.
Indeed, our greatest innovation was likely the capacity for cultural and technological innovation itself.
Thus, whereas other species must usually biologically evolve a new hunting technique, or superior weaponry (e.g. sharper teeth, longer claws), which takes many generations of gradual natural selection, humans are unique in our capacity to invent a new hunting method, or a new weapon (spear, bow and arrow). This new invention may be quite sudden, and can spread through an entire population in less than a generation.
Prey species lack this same capacity for rapid innovation. They are therefore always playing catch-up. Therefore, in the ongoing evolutionary arms race between predator and prey, humans are at an enormous advantage as compared to any other species.
Even ancient man was therefore no doubt a formidable apex predator.

[5] An alternative possibility, which might explain why the indigenous fauna of Europe were not hunted to extinction on the first arrival of humans in the way the indigenous fauna of Australasia and the Americas were when humans later arrived in those regions, is that the first humans to venture out of Africa were perhaps not yet such formidable hunters. Thus, it is known that the diet of hunter-gatherer groups in tropical sub-Saharan Africa depends more on plant foods than on meat, with the former providing most of the caloric requirements of the group. However, as one moves from the tropics into temperate climes, meat comes to provide an increasing proportion of the hunter-gatherer diet, because plant foods are less widely available, especially during the cold winter months, necessitating an increasing reliance on carnivory. This reaches an extreme in the Arctic and sub-Arctic, where plant foods are almost entirely unavailable for most of the year, and foragers such as Eskimos ate a largely carnivorous diet.
Alternatively, perhaps Eurasian prey species were not so vulnerable to the sudden influx of formidable human hunters because, unlike species in the Americas, they had previously been exposed to earlier waves of prehuman hominids who had spread out of Africa, but who were somewhat less formidable hunters, at least on first arrival, allowing the indigenous fauna to gradually develop counteradaptations to hominid predation as successive waves of hominids colonized the region.

[6] Of course, it is possible that the relatively greater number of large terrestrial herbivores in Africa as compared to Europe is partly attributable to certain species in Europe having been driven to extinction in historical times by human predation and habitat loss. For example, tarpans, the last surviving subspecies of wild horse, are thought to have gone extinct in the late nineteenth century, while wolves (not, of course, a herbivore) were driven to extinction in the British Isles some time earlier. However, for the theoretical reasons discussed above (namely, Africa is where anatomically modern humans first evolved, such that prey species will have evolved counter-adaptations to human predation as humans themselves gradually evolved to become formidable hunters), it is likely that Africa had a relatively large number of large terrestrial mammals, as compared to Europe and other continents, even in ancient times, namely the timescale of interest for the purposes of evaluating Diamond’s theory.

[7] Actually, the latest evidence, not available to Diamond at the time he authored his book, has modified this conclusion somewhat. Thus, whereas Diamond reports that not a single large terrestrial herbivorous or omnivorous mammal was domesticated out of the fifty or so available in sub-Saharan Africa, the latest genetic evidence suggests that African wild asses (i.e. donkeys) were first domesticated, not in North Africa as formerly thought, but rather in East Africa, albeit possibly in the Horn of Africa, which is culturally and racially closely linked to the Middle East. In addition, it ought to be noted that guineafowl were also first domesticated in sub-Saharan Africa, but, being a bird species, obviously do not qualify as a large terrestrial herbivorous or omnivorous mammal.

[8] Actually, as discussed in the previous endnote, though it was formerly thought that they had first been domesticated in North Africa, the latest DNA evidence suggests that donkeys themselves were first domesticated in East Africa. This would mean that, contrary to what Diamond claims, one large terrestrial herbivorous mammal was domesticated in sub-Saharan Africa, namely the donkey, and hence, along with the guineafowl, that at least two species of animal were first domesticated in sub-Saharan Africa.

[9] Though today concern is, understandably, primarily focussed on the suffering experienced by the bull (understandably since, unlike the human participants, the bull is unable to express consent to participating in the sporting spectacle), it ought to be noted that both bull riding and bullfighting are also dangerous sports for the human participant. Indeed, bull riding, an American rodeo sport, seems to be an exceptionally dangerous sport, almost unbelievably so. Relative to its short duration (a bull ride is considered successful if the rider manages to stay on the bucking bull for just eight seconds, but, today, only a minority of elite riders manage to stay on even this long), bull riding is, I suspect, the most dangerous sport this side of Russian roulette.
Bull baiting, a once popular, now banned, blood sport of British origin that involved pitting a pack of dogs (specially bred ‘bulldogs’) against a bull, was also more dangerous for the dogs than for the bull, in the sense that more dogs died in the process than did bulls, even though the death of the bull, and its consumption as meat, was, along with entertainment and spectacle, among the ostensible purposes of the practice, an odd folk belief holding that meat from bulls that had been ‘baited’ was more tender and succulent.
I recount these facts to emphasize that, even after domestication, the bull remains a formidable and potentially deadly adversary, both for humans and packs of fierce dogs.

[10] Elephants, Diamond argues, were not worth domesticating, not so much on account of their size, but rather because of their slow developmental rate:

“What would-be… elephant rancher would wait 15 years for his herd to reach adult size? Modern Asians who want work elephants find it much cheaper to capture them in the wild and tame them” (p169).

[11] Cattle farmers today generally advise that it is possible, albeit ill-advised unless absolutely necessary due to, say, limited available land, to keep two bulls in a single field, but only under certain conditions (e.g. not during the mating season), and, even then, they must be carefully managed. However, since the reduction of aggression is one of the principal aims and effects of domestication, and wild male aurochs were therefore almost certainly far more aggressive than modern bulls, this may not have been possible for the first tame aurochs, prior to full domestication.

[12] Scientific knowledge has certainly sped up the process of domestication. The ancient humans responsible for beginning the process of domesticating the first wild species probably had little idea what they were doing, and inadvertently selected for certain traits rather than doing so deliberately as a consequence of an understanding of heredity. In contrast, a famous Russian experiment allowed for the partial (self-)domestication of foxes in just a few decades.
Most recently, scientists have even developed various forms of genetic engineering which allow them to directly edit the genome of a species, remove or deactivate genes, insert genes from different species and rearrange genetic sequences. However, these techniques are, even today, very much in their infancy. Certainly, it is not yet possible to domesticate a wild species through genetic engineering alone, and nor can such techniques, as yet, even speed up the process to any significant degree. Successfully domesticating a wild species still requires many generations of selective breeding.

[13] Of course, human generations are generally longer than the generation time for most domesticated and wild species. Therefore, more generations will have passed among the species in question than among the humans who failed to domesticate them. However, this still leaves only a relatively short period of time, and number of generations, given that domestication can take literally thousands of years.

[14] Admittedly, the transplant of plants and animals first domesticated in one region to another region was not always possible, often because climatic or other environmental factors precluded this. Indeed, this is a major theme of Diamond’s book. Thus, plants first domesticated in the Fertile Crescent were often unsuited to tropical Africa, but could sometimes be adopted in Southern Africa, where the climate is more similar to that prevailing in much of Eurasia.
Also, since I have focussed here on the failure of Africans to domesticate zebras, it is worth noting the difficulty of transplanting their fellow equine, the domestic horse, to sub-Saharan Africa, where they were afflicted by trypanosomiasis (the animal form of sleeping sickness), spread by the tsetse fly. However, while this may indeed explain the failure of sub-Saharan Africans to adopt horses, horses were nevertheless introduced and widely and successfully employed in colonial Africa, especially in Southern Africa, which, for climatic reasons, was the only part of sub-Saharan Africa settled by large numbers of whites.
Interestingly, the ill-suitedness of horses to sub-Saharan Africa due to the prevalence of trypanosomiasis has been posited as the reason Africa never developed the wheel, since, in the absence of a suitable draft animal, wheels are supposedly of little value. For example, Diamond himself makes a similar argument in respect of the failure of pre-Columbian Mesoamerican civilizations to make full use of the wheel, lamenting how, for the geographic reasons discussed above:

“The wheels invented in Mesoamerica as parts of toys never met the llamas domesticated in the Andes, to generate wheeled transport for the New World” (p367).

The problem with this argument, however, is that wheels are useful even in the absence of a draft animal. First, they can be used for non-transport purposes – namely, the spinning wheel, the potter’s wheel, even water wheels. Indeed, in Eurasia, the potter’s wheel was actually invented and used before the use of wheels for transport purposes.
Moreover, even for transport, wheels are useful even in the absence of a draft animal. Thus, humans ourselves can be employed as a draft animal, as with wheelbarrows and pulled rickshaws. Ironically, Diamond himself acknowledges as much elsewhere, writing of how:

“[Wheels] had become the basis of most Eurasian land transport—not only for animal-drawn vehicles but also for human-powered wheelbarrows, which enabled one or more people, still using just human muscle power, to transport much greater weights than they could have otherwise” (p359).

Thus, he acknowledges the paradox whereby, in Mesoamerica, the use of wheels was confined to what appear to be toys, and the technology eventually, he reports, disappeared altogether, even though, he concedes, even without a draft animal, “they could presumably have been useful in human-powered wheelbarrows” (p370).

[15] Although in this piece I have focussed on the situation in Eurasia, Africa and the Americas, it ought to be noted that Australia had many other manifest geographic disadvantages as compared to other continents, as Diamond himself rightly emphasizes. Thus, quite apart from its isolation from other continents, the climate and terrain of much of Australia, namely the Australian Outback, is such that it can support only a very low population density, and then only in very trying conditions and at bare subsistence levels. Meanwhile, those few regions of the continent where conditions were more hospitable, and which are today quite densely populated, were isolated, not only from other continents, but also from one another, by largely uninhabitable intermediate areas of the interior.
Even more isolated than Australia were some Pacific islands. However, unlike Australia, these were generally settled by humans relatively late in human history, and hence often benefited from the technologies, and the domesticates, that the settlers brought with them, not least the advanced seafaring knowledge that enabled them to reach and settle these remote Pacific Islands in the first place.

[16] Interestingly, author Tim Marshall, in his book Prisoners of Geography, identifies one factor that supposedly impeded the movement of peoples, and hence of technologies, within Africa, namely a lack of navigable rivers. Whereas in much of Eurasia transport by river was, prior to modern times, usually easier and quicker than transport by land, in Africa this was not generally possible because, although the continent is replete with rivers, many of them are punctuated by waterfalls that make transport by river very dangerous, if not impossible.

[17] Actually, it is now generally believed that the first to invent the concept of zero were neither the Mayans nor the Indians, nor indeed Islamic civilization, which is also sometimes credited with this achievement. In fact, both the Indians and the Muslims seem to have inherited this innovation from the ancient Babylonians, although it was the Indians who took full advantage of it by developing mathematics in the way this innovation made possible. The Maya, like the Mesopotamians, also failed to take full mathematical advantage of the concept but, unlike both the Indians and the Muslims, they can claim to have hit upon it independently, not adopted it from without.

[18] Curiously, despite his oddly visceral aversion to, and distaste for, theories of racial differences in intelligence, and his curt dismissal of such theories as both “loathsome” and scientifically unsupported just a couple of paragraphs previously, Diamond nevertheless proceeds to proffer one such theory of his own, speculatively theorizing:

“In mental ability New Guineans are probably genetically superior to Westerners, and they surely are superior in escaping the devastating developmental disadvantages under which most children in industrialized societies grow up” (p21). 

Thus, he contends that, whereas New Guineans have to survive on their wits, using their intelligence to avoid dying from such causes as “murder, chronic tribal warfare, accidents, and problems in procuring food”, in densely populated western societies most early mortality was a consequence of disease, which, Diamond argues, would have struck quite randomly, or as a consequence of random biochemical variations between individuals, rather than being related to intelligence. Thus, he concludes:

Natural selection promoting genes for intelligence has probably been far more ruthless in New Guinea than in more densely populated, politically complex societies, where natural selection for body chemistry was instead more potent” (p21).

In addition, he argues that the intelligence of westerners is surely suppressed by their spending too much time watching television and movies in childhood (p21). In fact, however, since IQs increased over the course of the twentieth century concomitantly with increases in television viewership, it is far from obvious that increased time watching television, or playing computer games, necessarily suppresses intellectual development. On the contrary, some researchers have even suggested that increasingly complex and stimulating visual media may be behind some of this increase.
At any rate, Richard Lynn reports the average IQ of New Guineans as just 62 (Race Differences in Intelligence: p112-3). Although he bases this on only a few studies, this average IQ is almost identical to that reported for Australian Aboriginals, to whom New Guineans are closely related, and for whom Lynn has much more abundant data from the Australian school system (Race Differences in Intelligence: p104).


Sarich and Miele’s ‘Race: The Reality of Human Differences’: A Rare Twenty-First Century Hereditarian Take on Race Differences Published by a Mainstream Publisher and Marketed to a General Readership

Vincent Sarich and Frank Miele, Race: The Reality of Human Differences (Cambridge, MA: Westview Press, 2004)

First published in 2004, ‘Race: The Reality of Human Differences’ by anthropologist and biochemist Vincent Sarich and science writer Frank Miele is that rarest of things in this age of political correctness – namely, a work of popular science presenting a hereditarian perspective on that most incendiary of topics, the biology of race and of racial differences.

It is refreshing that, even in this age of political correctness, at the dawn of the twenty-first century, a mainstream publisher still had the courage to publish such a work.

I therefore embarked on reading ‘Race: The Reality of Human Differences’ with high expectations, hoping for something approaching an updated, and more accessible, equivalent of John R Baker’s seminal Race (which I have reviewed here).

Unfortunately, however, ‘Race: The Reality of Human Differences’, while it contains much interesting material, is nevertheless, in my view, a disappointment and something of a missed opportunity.

Race and the Law

Despite its subtitle, Sarich and Miele’s primary objective in authoring ‘Race: The Reality of Human Differences’ is, it seems, not to document, or to explain the evolution of, the specific racial differences that exist between populations, but rather to defend the race concept itself.

The latter has been under attack at least since Ashley Montagu’s Man’s Most Dangerous Myth: The Fallacy of Race, first published in 1942, perhaps the first written exposition of race denial.

Thus, Sarich and Miele frame their book as a response to the then-recent PBS documentary Race: The Power of an Illusion, which, like Montagu, also espoused the by-then familiar line that human races do not exist, save as a mere illusion or social construct.

As evidence that, on the contrary, race is indeed a legitimate biological and taxonomic category, Sarich and Miele begin, not with the field of biology, but rather with that of law, discussing the recognition accorded the race concept under the American legal system.

They report that, in the USA:

“There is still no legal definition of race; nor… does it appear that the legal system feels the need for one” (p14).

Thus, citing various US legal cases where the race of the plaintiff was at issue, Sarich and Miele conclude:

“The most adversarial part of our complex society [i.e. the legal system], not only continues to accept the existence of race, but also relies on the ability of the average individual to sort people into races” (p14).

Moreover, Sarich and Miele argue, not only do the courts recognise the existence of race, they also recognise its ultimate basis in biology.

Thus, in response to the claim that race is a mere social construct, Sarich and Miele cite the recognition the criminal courts accord to the evidence of forensic scientists, who can reliably determine the racial background of a criminal from microscopic DNA fragments (p19-23).

“If race were a mere social construction based upon a few highly visible features, it would have no statistical correlation with the DNA markers that indicate relatedness” (p23).[1]

Indeed, in criminal investigations, Sarich and Miele observe in a later chapter, racial identification can be a literal matter of life and death.

Thus, they refer to the Baton Rouge serial killer investigation, where, in accordance with the popular, but wholly false, notion that serial killers are almost invariably white males, the police initially focussed solely on white suspects, but, after DNA analysis showed that the offender was of predominantly African descent, shifted the focus of their investigation and eventually successfully apprehended the killer, preventing further killings (p238).[2]

Another area where they observe that racial profiling can be literally a matter of life and death is the diagnosis of disease and prescribing of appropriate and effective treatment – since, not only do races differ in the prevalence, and presentation, of different medical conditions, but they also differ in their responsiveness and reactions to different forms of medication. 

However, while folk-taxonomic racial categories do indeed have a basis in real biological differences, they are surely also partly socially-constructed as well.

For example, in the USA, black racial identity, including eligibility for affirmative action programmes, is still largely determined by the same so-called one-drop-rule that also determined racial categorization during the era of segregation and Jim Crow.

This is the rule whereby a person with any detectable degree of black African ancestry, howsoever small (e.g. Barack Obama, Colin Powell), is classed as ‘African-American’ right alongside a recent immigrant from Africa of unadulterated sub-Saharan African ancestry.

This obviously has far more to do with social and political factors, and with America’s unique racial history, than it does with biology, and hence shows that folk-taxonomic racial categories are indeed in part ‘socially-constructed’.[3]

Similarly, the racial category ‘Hispanic’ or ‘Latino’ obviously has only a distant and indirect relationship to race in the biological sense, including as it does persons of varying degrees of European, Native American and also black African ancestry.[4]

It is also unfortunate that, in their discussion of the recognition accorded the race concept by the legal system, Sarich and Miele restrict their discussion entirely to the contemporary US legal system.

In particular, it would be interesting to know how the race of citizens was determined under overtly racialist regimes, such as under the Apartheid regime in South Africa,[5] under the Nuremberg laws in National Socialist Germany,[6] or indeed under Jim Crow laws in the South in the USA itself in the early twentieth century,[7] where the stakes were, of course, so much higher.

Also, given that Sarich and Miele rely extensively in later chapters on an analogy between human races and dog breeds (what they call the “canine comparison”: p198-203; see discussion below), a discussion of the problems encountered in drafting and interpreting so-called breed-specific legislation to control so-called ‘dangerous dog breeds’ would also have been relevant and of interest.[8]

Such legislation, in force in many jurisdictions, restricts the breeding, sale and import of certain breeds (e.g. Pit Bulls, Tosas) and orders their registration, neutering and sometimes even their destruction. It represents, then, the rough canine equivalent of the Nuremberg laws.

A Race Recognition Module?

According to Sarich and Miele, the cross-cultural universality of racial classifications suggests that humans are innately predisposed to sort humans into races.

As evidence, they cite Lawrence Hirschfeld’s finding that, at age three, children already classify people by race, and recognise both the immutable and hereditary nature of racial characteristics, giving priority to race over characteristics such as clothing, uniform or body-type (p25-7; Hirschfeld 1996).[9]

Sarich and Miele go on to claim:

“The emerging discipline of evolutionary psychology provides further evidence that there is a species-wide module in the human brain that predisposes us to sort the members of our species into groups based on appearance, and to distinguish between ‘us’ and ‘them’” (p31).

However, they cite no source for this claim, either in the main body of the text or in the associated notes for this chapter (p263-4).[10]

Certainly, Pierre van den Berghe and some other sociobiologists have argued that ethnocentrism is innate (see The Ethnic Phenomenon: reviewed here). However, van den Berghe is also emphatic and persuasive in arguing that the same is not true of racism, as such.

Indeed, since the different human races were, until recent technological advances in transportation (e.g. ships, aeroplanes), largely separated from one another by the very oceans, deserts and mountain-ranges that reproductively isolated them from one another and hence permitted their evolution into distinguishable races, it is doubtful human races have been in contact for sufficient time to have evolved a race-classification module.[11]

Moreover, if race differences are indeed real and obvious as Sarich and Miele contend, then there is no need to invoke – or indeed to evolve – a domain-specific module for the purposes of racial classification. Instead, people’s tendency to categorise others into racial groups could simply reflect domain-general mechanisms (i.e. general intelligence) responding to real and obvious differences.[12]

History of the Race Concept

After their opening chapter on ‘Race and the Law’, the authors move on to discussing the history of the race concept and of racial thought in their second chapter, which is titled ‘Race and History’.

Today, it is often claimed by race deniers that the race concept is a recent European invention, devised to provide a justification for such nefarious, but by no means uniquely European, practices as slavery, segregation and colonialism.[13]

In contrast, Sarich and Miele argue that humans have sorted themselves into racial categories ever since physically distinguishable people encountered one another, and that ancient peoples used roughly the same racial categories as nineteenth-century anthropologists and twenty-first century bigots.

Thus, Sarich and Miele assert in the title of one of their subheadings:

“[The concept of] race is as old as history or even prehistory” (p57).

Indeed, according to Sarich and Miele, even ancient African rock paintings distinguish between Pygmies and Capoid Bushmen (p56).

Similarly, they report, the ancient Egyptians showed a keen awareness of racial differences in their artwork.

This is perhaps unsurprising since the ancient Egyptians’ core territory was located in a region where Caucasoid North Africans came into contact with black Africans from south of the Sahara through the Nile Valley, unlike in most other parts of North Africa, where the Sahara Desert represented a largely insurmountable barrier to population movement.

While not directly addressing the controversial question of the racial affinities of the ancient Egyptians, Sarich and Miele report that, in their own artwork:

“The Egyptians were painted red; the Asiatics or Semites yellow; the Southerns or Negroes, black; and the Libyans, Westerners or Northerners, white, with blue eyes and fair beards” (p33).[14]

Indeed, Sarich and Miele go further, suggesting that at least some Egyptian artwork was not purely artistic in intent but had an explicit taxonomic function:

“[Ancient] Egyptian monuments are not mere ‘portraits’ but an attempt at classification” (p33).

They even refer to what they call “history’s first [recorded] colour bar, forbidding blacks from entering Pharaoh’s domain”, namely an Egyptian stele (i.e. a stone slab functioning as a notice), which other sources describe as having been erected during the reign of Pharaoh Sesostris III (1887-1849 BCE) at Semna, near the Second Cataract of the Nile, the inscription of which reads, in part:

“No Negro shall cross this boundary by water or by land, by ship or with his flocks, save for the purpose of trade or to make purchases in some post” (p35).[15]

Sarich and Miele also interpret the famous caste system of India as based ultimately in racial difference, with the lighter-complexioned invading Indo-Aryans establishing the system to maintain their dominant social position, and their racial integrity, vis-à-vis the darker-complexioned indigenous Dravidian populations whom they conquered and subjugated.

Thus, Sarich and Miele claim:

“The Hindi word for caste is varna. It means color (that is, skin color), and it is as old as Indian history itself” (p37).[16]

There is indeed evidence of racial prejudice and notions of racial supremacy in the earliest Hindu texts. For example, in the Rigveda, thought to be the earliest of ancient Hindu texts:

The god of the Aryas, Indra, is described as ‘blowing away with supernatural might from earth and from the heavens the black skin which Indra hates.’ The dark people are called ‘Anasahs’—noseless people—and the account proceeds to tell how Indra ‘slew the flat-nosed barbarians.’ Having conquered the land for the Aryas, Indra decreed that the foe was to be ‘flayed of his black skin’” (Race: The History of an Idea in America: p3-4).[17]

Indeed, higher caste groups have relatively lighter complexions than lower caste groups residing in the same region of India even today (Jazwal 1979; Mishra 2017).

However, most modern Indologists reject the notion that the term ‘varna’ was originally coined in reference to differences in skin colour and instead argue that colour was simply used as a method of classification, or perhaps in reference to clothing.[18]

According to Sarich and Miele, ancient peoples also believed races differed, not only in morphology, but also in psychology and behaviour.

In general, ancient civilizations regarded their own race’s characteristics more favourably than those of other groups. This, Sarich and Miele suggest, reflected, not only ethnocentrism, which is, in all probability, a universal human trait, but also the fact that great civilizations of the sort that leave behind artwork and literature sophisticated enough to permit moderns to ascertain their views on race did indeed tend to be surrounded by less advanced neighbours (p56).

“In the vast majority of cases, their opinions of other peoples, including the ancestors of the Western Europeans who supposedly ‘invented’ the idea of race, are far from flattering, at times matching modern society’s most derogatory stereotypes” (p31).

Thus, Thomas F Gossett, in his book Race: The History of an Idea in America, reports that:

“Historians of the Han Dynasty in the third century B.C. speak of a yellow-haired and green-eyed barbarian people in a distant province ‘who greatly resemble monkeys from whom they are descended’” (Race: The History of an Idea in America: p4).

Indeed, the views expressed by the ancients regarding racial differences, or at least those examples quoted by Sarich and Miele, are also often disturbingly redolent of modern racial stereotypes.

Thus, in ancient Roman and Greek art, Sarich and Miele report:

“Black males are depicted with penises larger than those of white figures” (p41).

Likewise, Sarich and Miele report that, during the Islamic Golden Age:

“Islamic writers… disparaged black Africans as being hypersexual yet also filled with simple piety, and with a natural sense of rhythm” (p53).

Similarly, the Arab polymath Al Masudi is reported to have quoted the Roman physician-philosopher Galen as claiming that blacks possess, among other attributes:

“A long penis and great merriment… [which] dominates the black man because of his defective brain whence also the weakness of his intelligence” (p50).

From these and similar observations, Sarich and Miele conclude:

“European colonizers did not construct race as a justification for slavery but picked up an earlier construction of Islam, which took it from the classical world, which in turn took it from ancient Egypt” (p50).

The only alternative, they suggest, is the obviously implausible suggestion that:

“Each of these civilisations independently ‘constructed’ the same worldview, and that the civilisations of China and India independently ‘constructed’ similar worldviews, even though they were looking at different groups of people” (p50).

There is, of course, another possibility the authors never directly raise, but only hint at – namely, that racial stereotypes remained relatively constant because they reflect actual behavioural differences between races, differences which themselves remained constant because they reflect innate biological dispositions that have not changed significantly over historical time.

Race, Religion, Science and Slavery

Sarich and Miele’s next chapter, ‘Anthropology as the Science of Race’, continues their history of racial thought from biblical times into the age of science – and of pseudo-science.

They begin, however, not with science, or even with pseudo-science, but rather with the Christian Bible, which long dominated western thinking on the subject of race, as on so many other subjects.

At the beginning of the chapter, they quote from John Hartung’s controversial essay, Love Thy Neighbour: The Evolution of In-Group Morality, which was first published in the science magazine, Skeptic (p60; Hartung 1995).

However, although the relevant passages appear in quotation marks, neither Hartung himself nor his essay is directly cited, and, were I not already familiar with this essay, I would be none the wiser as to where this series of quotations had actually been taken from.[19]

In the passage quoted, Hartung – who, in addition to being an anaesthesiologist, anthropologist and human sociobiologist known for his pioneering cross-cultural studies of human inheritance patterns, is also something of an amateur (atheist) biblical scholar – argues that Adam, in the Biblical account of creation, is properly to be interpreted, not as the first human, but rather only as the first Jew; the implication being (and the confusion arising because), in the genocidal weltanschauung of the Old Testament, non-Jews are, at least according to Hartung, not really to be considered human at all.[20]

This idea seems to have originated, or at least received its first full exposition, with theologian Isaac La Peyrère, whom Sarich and Miele describe only as a “Calvinist”, but who, perhaps not uncoincidentally, is also widely rumoured to have been of Sephardi converso or even crypto-Jewish marrano ancestry.

Thus, Sarich and Miele conclude:

“The door has always been open—and often entered—by any individual or group wanting to confine ‘adam’ to ‘us’ and to exclude ‘them’” (p60).

This leads to the heretical notion of the pre-Adamites, which has also been taken up by such delightfully bonkers racialist religious groups as the Christian Identity movement.[21]

However, mainstream western Christianity always rejected this notion.

Thus, whereas today many leftists associate atheism, the Enlightenment and secularism with anti-racist views, historically there was no such association.

On the contrary, Sarich and Miele emphasize, it was actually polygenism – namely, the belief that the different human races had separate origins, a view that naturally lent itself to racialism – that was associated with religious heresy, free-thinking and the Enlightenment.

In contrast, mainstream Christianity, of virtually all denominations, has always favoured monogenism – namely, the belief that, for all their perceived differences, the various human races nevertheless shared a common origin – as this was perceived as congruent with (the orthodox interpretation of) the Old Testament of the Bible.

Thus, for example, both Voltaire and David Hume were polygenists – and, although their experience with and knowledge of black people was surely minimal and almost entirely second-hand, both also expressed distinctly racist views regarding the intellectual capacities of black Africans.

Moreover, although the emerging race science, and cranial measurements, of the nineteenth-century ‘American School’ of anthropology is sometimes credited with lending ideological support to the institution of slavery in the American South, or even as having been cynically formulated precisely in order to defend this institution, in fact Southern slaveholders had little if any use for such ideas.

After all, the American South, as well as being a stronghold of slavery, racialism and white supremacist ideology, was also, then as now, the Bible Belt – i.e. a bastion of intense evangelical Protestant Christian fundamentalism.

But the leading American School anthropologists, such as Samuel Morton and Josiah Nott, were all heretical polygenists.

Thus, rather than challenge the orthodox interpretation of the Bible, Southern slaveholders, and their apologists, preferred to defend slavery by invoking, not the emerging secular science of anthropology, but rather Biblical doctrine.

In particular, they sought to justify slavery by reference to the so-called curse of Ham, an idea which derives from Genesis 9:22-25, a very odd passage of the Old Testament (odd even by the standards of the Old Testament), which was almost certainly not originally intended as a reference to black people.[22]

Thus, the authors quote historian William Stanton who, in his book The Leopard’s Spots: Scientific Attitudes Toward Race in America 1815-59, concludes that, by rejecting polygenism and the craniology of the early American physical anthropologists:

“The South turned its back on [what was by the scientific standards of the time] the only intellectually respectable defense of slavery it could have taken up” (p77).

As for Darwinism, which some creationists also claim was used to buttress slavery, Darwin’s On the Origin of Species was only published in 1859, just a few years before the Emancipation Proclamation of 1862 and the final abolition of slavery in North America and the English-speaking world.[23]

Thus, if Darwinian theory was ever used to justify the institution of slavery, it clearly wasn’t very effective in achieving this end.

Into the ‘Age of Science’ – and of Pseudo-Science

The authors continue their history of racial thinking by tracing the history of the discipline of anthropology, from its beginnings as ‘the science of race’, to its current incarnation as the study of culture (and, to a lesser extent, of human evolution), most of whose practitioners vehemently deny the very biological reality of race, and some of whom deny even the possibility of anthropology being a science.

Giving a personal, human-interest dimension to their history, Sarich and Miele focus in particular on three scientific controversies, and personal rivalries, each of which was, they report, at the same time scientific, personal and political (p59-60). These were the disputes between, respectively:

1) Ernst Haeckel and Rudolf Virchow;

2) Franz Boas and Madison Grant; and

3) Ashley Montagu and Carleton Coon.

The first of these rivalries, occurring as it did in Germany in the nineteenth century, is perhaps of least interest to contemporary North American audiences, being the most remote in both time and place.

However, the latter two disputes, occurring as they did in twentieth-century America, are of much greater importance: their outcomes gave rise to, and arguably continue to shape, the current political and scientific consensus on racial matters in America, and indeed the western world, to this day.

Interestingly, these two disputes were not only about race, they were also arguably themselves racial, or at least ethnic, in character.

Thus, perhaps not uncoincidentally, whereas both Grant and Coon were Old Stock American patrician WASPs, the latter proud to trace his ancestry back to the earliest British settlers of the Thirteen Colonies, both Boas and Montagu were recent Jewish immigrants, Boas from Germany and Montagu from England.[24]

Therefore, in addition to being personal, political and scientific, these two conflicts were also arguably racial, and ultimately indirectly concerned with the very definition of what it meant to be an ‘American’.

The victory of the Boasians was therefore coincident with, and arguably both heralded and reflected (and perhaps even contributed towards, or at least was retrospectively adopted as a justification for), the displacement of Anglo-Americans as the culturally, socially, economically and politically dominant ethnic group in the USA; the increasing opening up of the USA to immigrants of other races and ethnicities; and the emergence of a new elite, no longer composed exclusively, or even predominantly, of people of any single ethnic background, but increasingly disproportionately Jewish.

Sarich and Miele, to their credit, do not entirely avoid addressing the ethnic dimension to these disputes. Thus, they suggest that Boas and Montagu’s perception of themselves as ethnic outsiders in Anglo-America may have shaped their theories (p89-90).[25]

However, this topic is explored more extensively by Kevin Macdonald in the second chapter of his controversial, anti-Semitic and theoretically flawed The Culture of Critique (which I have reviewed here).

Boas, and his student Montagu, were ultimately to emerge victorious, not so much on account of the strength of their arguments, as on the success of their academic politicking, in particular Boas’s success in training students, including Montagu himself, who would go on to take over the social science departments of universities across America.

Among these students were many figures who were to become even more famous, and arguably more directly influential, than Boas himself, including, not only Montagu, but also Ruth Benedict and, most famous of all, the anthropologically inept Margaret Mead.[26]

Nevertheless, Sarich and Miele trace the current consensus, and sacrosanct dogma, of race-denial ultimately to Boas, whom they credit with effectively inventing anew the modern discipline of anthropology as it exists in America:

“It is no exaggeration to say that Franz Boas (1858-1942) remade American anthropology in his own image. Through the influence of his students, Margaret Mead (Coming of Age in Samoa and Sex and Temperament in Three [Primitive] Societies[sic]), Ruth Benedict (Patterns of Culture) and Ashley Montagu (innumerable titles, especially the countless editions of Man’s Most Dangerous Myth) Boas would have more influence on American intellectual thought than Darwin did. For generations hardly anyone graduated an American college without having read at least one of these books” (p86).

Thus, today, Boas is regarded as the father of American anthropology, whereas both Grant and Coon are mostly dismissed (in Coon’s case, unfairly) as pseudo-scientists and racists.

The Legacy of Boas

As to whether the impact of Boas and his disciples was, on balance, a net positive or a net negative, Sarich and Miele are ambivalent:

“The cultural determinism of the Boasians served as a useful corrective to the genetic determinism of racial anthropology, emphasizing the variation within races, the overlap between them and the plasticity of human behavior. The price, however, was the divorcing of the science of man from the science of life in general. The evolutionary perspective was abandoned, and anthropology began its slide into the abyss of deconstructionism” (p91).

My own view is more controversial: I have come to believe that the influence of Boas on American anthropology has been almost entirely negative.

Admittedly, the Nordicism of his rival, Grant, was indeed a complete non-starter. After all, civilization actually came quite late to Northern Europe, originating in North Africa, the Middle East and South Asia, and arriving in Northern Europe much later, by way of the Mediterranean region.

However, this view is arguably no more preposterous than the racial egalitarianism that currently prevails as a sacrosanct contemporary dogma, and which holds that all races are exactly equal in all abilities – a claim that, quite apart from being contradicted by the evidence, represents a manifestly improbable outcome of human evolution.

Moreover, Nordicism may have been bad science, but it was at least science – or at least purported to be science – and hence was susceptible to falsification, and was indeed soon decisively falsified by, among other events and scientific findings, the pre-war and post-war rise of Japan.

In contrast, as persuasively argued by Kevin Macdonald in The Culture of Critique (which I have reviewed here), Boasian anthropology was not so much a science as an anti-science (not a theory but an “anti-theory”, according to Macdonald: Culture of Critique: p24), because, in its radical cultural determinism and cultural relativism, it rejected any attempt to develop a general theory of societal evolution, or of societal differences, as premature, if not inherently misguided.

Instead, the Boasians endlessly emphasized, and celebrated (and indeed sometimes exaggerated and fabricated), “the vast diversity and chaotic minutiae of human behavior”, arguing that such diversity precluded any general theory of social evolution of the sort formerly favoured, let alone any purported ranking of societies and cultures (let alone races) as superior or inferior in relation to one another.

“The Boasians argued that general theories of cultural evolution must await a detailed cataloguing of cultural diversity, but in fact no general theories emerged from this body of research in the ensuing half century of its dominance of the profession… Because of its rejection of fundamental scientific activities such as generalization and classification, Boasian anthropology may thus be characterized more as an anti-theory than a theory of human culture” (Culture of Critique: p24).

The result was that behavioural variation between groups, to the extent there was any attempt to explain it at all, was attributed to culture. Yet, as evolutionary psychologist David Buss writes:

“[P]atterns of local within-group similarity and between-group differences are best regarded as phenomena that require explanation. Transforming these differences into an autonomous causal entity called ‘culture’ confuses the phenomena that require explanation with a proper explanation of those phenomena. Attributing such phenomena to culture provides no more explanatory power than attributing them to God, consciousness, learning, socialization, or even evolution, unless the causal processes that are subsumed by these labels are properly described. Labels for phenomena are not proper causal explanations for them” (Evolutionary Psychology: p411).

To attribute all cultural differences simply to culture and conclude that that is an adequate explanation is to imply that all cultural variation is simply random in nature. This amounts to effectively accepting the null hypothesis as true and ruling out a priori any attempt to generate a causal framework for explaining, or making predictions regarding, cultural differences. It therefore amounts, not to science, but to an outright rejection of science, or at least of applying science to human cultural differences, in favour of obscurantism.

Meanwhile, under the influence of postmodernism (i.e. “the abyss of deconstructionism” to which Sarich and Miele refer), much of cultural anthropology has ceased even pretending to be a science, dismissing all knowledge, science included, as mere disguised ideology, no more or less valid than the religious cosmologies, eschatologies and creation myths of the scientifically and technologically primitive peoples whom anthropologists have traditionally studied, and hence precluding the falsification of post-modernist claims, or indeed any other claims, a priori.

Moreover, contrary to popular opinion, the Nordicism of figures such as Grant seems to have been rather less dogmatically held to, both in the scientific community and society at large, than is the contemporary dogma of racial egalitarianism.

Indeed, quite apart from the fact that it was not without eminent critics even in its ostensible late-nineteenth and early-twentieth century heyday (not least Boas himself), the best evidence for this is the speed with which this belief system was abandoned, and subsequently demonized, in the decades that followed.

In contrast, even with the findings of population genetics increasing apace, the dogmas of both race denial and racial egalitarianism, while increasingly scientifically indefensible, seemingly remain ever more entrenched in the universities.

Digressions: ‘Molecular Clocks’, Language and Human Evolution

Sarich and Miele’s next chapter, ‘Resolving the Primate Tree’, recounts how the molecular clock method of determining when species (and races) diverged was discovered.

To summarize: geneticists discovered that they could estimate the time at which two species separated from one another by measuring the extent to which the two species differ in selectively-neutral genetic variation – in other words, in those parts of the genome that do not affect an organism’s phenotype in any way that affects its fitness, which are therefore not subject to selection pressures and hence accumulate mutations at a roughly uniform rate, serving as a ‘clock’ by which to date the point at which the species separated from one another.
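
The underlying logic can be reduced to a single equation. What follows is a standard textbook simplification, not a formulation taken from the book itself: if neutral substitutions accumulate at a roughly constant rate μ per site per year, and two lineages are observed to differ at a proportion d of their neutral sites, then, since changes accumulate independently along both branches after the split, the divergence time t is approximately:

$$t \approx \frac{d}{2\mu}$$

The factor of two reflects the fact that both lineages accumulate their own mutations, and μ is typically calibrated against a divergence whose date is independently known from the fossil record.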

The following chapter, ‘Homo Sapiens and Its Races’, charts the application of the ‘molecular clock’ method to human evolution, and in particular to the evolution of human races.

The molecular clock method of dating the divergence of species from one another is certainly relevant to the race question, since it allows us to estimate, not only when our ancestors split from those of the chimpanzee, but also when the different human races separated from one another. This latter date is, however, somewhat more difficult to determine by this method, since the estimate is complicated by the fact that races can continue to interbreed with one another even after their initial split, whereas species, once they have become separate species, by definition no longer interbreed – though there may be some interbreeding during the process of speciation itself (i.e. while the separating lineages are still only races or populations of the same species).

However, devoting a whole chapter to a narrative describing how the molecular clock methodology was developed seems excessive in a book ostensibly about human race differences, and is surely an unnecessary digression.

Thus, one suspects the attention devoted to this topic by the authors reflects the central role played by one of the book’s co-authors (Vincent Sarich) in the development of this scientific method. This chapter therefore permits Sarich to showcase his scientific credentials and hence lends authority to his later more controversial pronouncements in subsequent chapters.

The following chapter, ‘The Two Miracles that Made Mankind’, is also somewhat off-topic. Here, Sarich and Miele address the question of why it was our own African ancestors who outcompeted and ultimately displaced rival species of hominid.[27]

In answer, they propose, plausibly but not especially originally, that our ancestors outcompeted rival hominids on account of one key evolutionary development in particular – namely, the evolution of a capacity for spoken language.

Defining ‘Race’

At last, in Chapters Seven and Eight, after a hundred and sixty pages and over half of the entire book, the authors address the topic which the book’s title suggested would be its primary focus – namely, the biology of race differences.

The first of these is titled ‘Race and Physical Differences’, while the next is titled ‘Race and Behavior’.

Actually, however, both chapters begin by defending the race concept itself.

Whether the human species is divisible into races ultimately depends on how one defines ‘races’. Arguments as to whether human races exist therefore often degenerate into purely semantic disputes regarding the meaning of the word ‘race’.

For their purposes, Sarich and Miele themselves define ‘races’ as:

“Populations, or groups of populations, within a species, that are separated geographically from other such populations or groups of populations and distinguishable from them on the basis of heritable features” (p207).[28]

There is, of course, an obvious problem with this definition, at least when applied to contemporary human populations – namely, members of different human races are often no longer “separated geographically” from one another, largely due to recent migrations and population movements.

Thus, today, people of many different racial groups can be found in a single city, like, say, London.

However, the key factor is surely, not whether racial groups remain “separated geographically” today, but rather whether they were “separated geographically” during the period during which they evolved into separate races.

To meet this objection, Sarich and Miele’s definition of ‘races’ need only be amended accordingly.

Races as Fuzzy Sets

Sarich and Miele protest that other authors have, in effect, defined races out of existence by semantic sophistry, namely by defining the word ‘race’ in such a way as to rule out the possibility of races a priori.

Thus, some proposed definitions demand that, in order to qualify as true ‘races’, populations must have discrete, non-overlapping boundaries, with no racially-mixed, clinal or hybrid populations to blur the boundaries.

However, Sarich and Miele point out, any populations satisfying this criterion would not be ‘races’ at all, but rather entirely separate species since, as I have discussed previously, it is the question of interfertility and reproductive isolation that defines a species (p209).[29]

In short, as biologist John Baker, in his excellent Race (reviewed here), also pointed out, since ‘race’ is, by very definition, a sub-specific classification, it is inevitable that members of different races will sometimes interbreed with one another and produce mixed, hybrid or clinal populations at their borders, because, if they did not interbreed with one another, then they would not be members of different races but rather of entirely separate species.

Thus, the boundaries between subspecies are invariably blurred or clinal in nature, the phenomenon being so universal that there is even a biological term for it, namely intergradation.

Of course, this means that the dividing line where one race is deemed to begin and another to end will inevitably be blurred. However, Sarich and Miele reject the notion that this means races are purely artificial or a social construction.

“The simple answer to the objection that races are not discrete, blending into one another as they do is this: They’re supposed to blend into one another and categories need not be discrete. It is not for us to impose our cognitive difficulties upon the Nature” (p211).

Thus, they characterize races as fuzzy sets – which they describe as a recently developed mathematical concept that has nevertheless been “revolutionarily productive” (p209).

By analogy, they discuss our perception of colour when observing rainbows, noting:

“Red… shade[s] imperceptibly into orange and orange into yellow but we have no difficulties in agreeing as to where red becomes orange, and orange yellow” (p208-9).

However, this is perhaps an unfortunate analogy. After all, physicists and psychologists are in agreement that different colours, as such, don’t really exist – at least not outside of the human minds that perceive and recognise them.[30]

Instead, the electromagnetic spectrum varies continuously. Colours are imposed upon it by the human visual system as a way of interpreting this continuous variation.[31]

If racial differences were similarly continuous, then surely it would be inappropriate to divide peoples into racial groups, because wherever one drew the boundary would be entirely arbitrary.[32]

Yet a key point about human races is that, as Sarich and Miele put it:

“[R]aces necessarily grade into one another, but they clearly do not do so evenly” (p209).

In other words, although racial differences are indeed clinal and continuous in nature, the differentiation does not occur at a constant and uniform rate. Instead, there is some clustering and definite if fuzzy boundaries are nevertheless discernible.
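
To make the idea of a fuzzy but discernible boundary concrete, here is a minimal numerical sketch in Python. Everything in it – the transect, the trait frequencies, the barrier position – is invented purely for illustration; it models the concept, not any real data:

```python
import numpy as np

# Hypothetical 1-D transect: position 0 lies far north of a barrier,
# position 100 far south. Trait frequency varies continuously (a cline),
# but the change is concentrated around the barrier at position 50.
positions = np.linspace(0, 100, 21)
barrier, steepness = 50.0, 0.5
trait_freq = 1.0 / (1.0 + np.exp(-steepness * (positions - barrier)))

# Fuzzy-set reading: treat trait_freq as the degree of membership in the
# 'southern' cluster; membership in the 'northern' cluster is its complement.
for x, m in zip(positions, trait_freq):
    label = "southern" if m > 0.9 else "northern" if m < 0.1 else "fuzzy boundary"
    print(f"position {x:5.1f}: southern membership = {m:.2f} -> {label}")

# Although membership varies continuously (there is no sharp cut-off),
# almost all of the change is packed into a narrow zone near the barrier.
gradient = np.abs(np.diff(trait_freq))
near_barrier = (positions[:-1] > 40) & (positions[:-1] < 60)
print("share of total change within 10 units of the barrier:",
      round(gradient[near_barrier].sum() / gradient.sum(), 2))
```

Run as written, the sketch assigns all but a handful of locales unambiguously to one side or the other, with over ninety percent of the total change falling within a narrow zone around the barrier – continuous variation, but with a discernible, if fuzzy, boundary.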

As an illustration of such a fuzzy but discernible boundary, Sarich and Miele give the example of the Sahara Desert, which formerly represented, and to some extent still does represent, a relatively impassable obstacle (“a geographic filter”, in Sarich and Miele’s words: p210) that impeded population movement and hence gene flow for millennia.

“The human population densities north and south of the Sahara have long been, and still are, orders of magnitude greater than in the Sahara proper, causing the northern and southern units to have evolved in substantial genetic independence from one another” (p210).

The Sahara hence represented the “ancient boundary” between the racial groups once referred to by anthropologists as the Caucasoid and Negroid races, politically incorrect terms which, according to Sarich and Miele, although unfashionable, nevertheless remain useful (p209-10).

Analogously, anthropologist Stanley Garn reports:

“The high and uninviting mountains that mark the Tibetan-Indian border… have long restricted population exchange to a slow trickle” (Human Races: p15).

Thus, these mountains (the Himalayas and Tibetan Plateau), have traditionally marked the boundary between the Caucasoid and what was once termed the Mongoloid race.[33]

Meanwhile, other geographic barriers were probably even more impassable. For example, oceans almost completely prevented gene-flow between the Americas and the Old World for millennia, save across the Bering Strait between sparsely populated Siberia and Alaska, such that Amerindians remained almost completely reproductively isolated from Eurasians and Africans.

Similarly, genetic studies suggest that Australian Aboriginals were genetically isolated from other populations, including neighbouring South-East Asians and Polynesians, for literally thousands of years.

Thus, anthropologist Stanley Garn concludes:

“The facts of geography, the mountain ranges, the deserts and the oceans, have made geographical races by fencing them in” (Human Races: p15).

However, with improved technologies of transportation – planes, ocean-going vessels, other vehicles – such geographic boundaries are becoming increasingly irrelevant.

Thus, increased geographic mobility, migration, miscegenation and intermarriage mean that the ‘fuzzy’ boundaries of these fuzzy sets are fast becoming even ‘fuzzier’.

Thus, if meaningful boundaries could once be drawn between races, and even if they still can, this may not be the case for very much longer.

However, it is important to emphasize that, even if races didn’t exist, race differences still would. They would just vary on a continuum (or a cline, to use the preferred biological term).

To argue that race differences do not exist simply because they are continuous and clinal in nature would, of course, be to commit a version of the continuum fallacy or sorites paradox, also sometimes called the fallacy of the heap or fallacy of the beard.

Moreover, just as populations differ in, for example, skin colour on a clinal basis, so they could also differ in psychological traits (such as average intelligence and personality) in just the same way.

Thus, paradoxically, the non-existence of human races, even if conceded for the sake of argument, is hardly a definitive, knock-down argument against the existence of innate race differences in intelligence, or indeed other racial differences, even though it is usually presented as such by those who espouse this view.

Whether ‘races’ exist is debatable and depends on precisely how one defines ‘races’—whether race differences exist, however, is surely beyond dispute.

Debunking Diamond

The brilliant and rightly celebrated scientific polymath and popular science writer Jared Diamond, in an influential article published in Discover magazine, formulated another even less persuasive objection to the race concept as applied to humans (Diamond 1994).

Here, Diamond insisted that racial classifications among humans are entirely arbitrary, because different populations can be grouped in different ways depending on which characteristics one uses to group them.

Thus, if we classified races, not by skin colour, but rather by the prevalence of the sickle cell gene or of lactase persistence, then we would, he argues, arrive at very different classifications. For example, he explains:

“Depending on whether we classified ourselves by antimalarial genes, lactase, fingerprints or skin color, we could place Swedes in the same race as (respectively) either Xhosas, Fulani, the Ainu of Japan or Italians” (p164).

Each of these classifications, Diamond insists, would be “equally reasonable and arbitrary” (p164).

To these claims, Sarich and Miele respond:

“Most of us, upon reading these passages, would immediately sense that something was very wrong with it, even though one might have difficulty specifying just what” (p164).

Unfortunately, however, Sarich and Miele are, in my view, not themselves very clear in explaining precisely what is wrong with Diamond’s argument.

Thus, one of Sarich and Miele’s grounds for rejecting this argument is that:

“The proportion of individuals carrying the sickle-cell allele can never go above about 40 percent in any population, nor does the proportion of lactose-competent adults in any population ever approach 100 percent. Thus, on the basis of the sickle-cell gene, there are two groups… of Fulani, one without the allele, the other with it. So those Fulani with the allele would group not with other Fulani, but with Italians with the allele” (p165).

Here their point seems to be that it is not very helpful to classify races by reference to a trait that is not shared by all members of any race, but rather differs only in relative prevalence.

Thus, they conclude:

“The concordance issue… applies within groups as well as between them. Diamond is dismissive of the reality of the Fulani–Xhosa African racial unit because there are characters discordant with it [e.g. lactase persistence]… Well then, one asks in response, what about the Fulani unit itself? After all, exactly the same argument could be made to cast the reality of the category ‘Fulani’ into doubt” (p165).

However, this conclusion seems to represent exactly what many race deniers do indeed argue – namely that all racial and ethnic groups are indeed pure social constructs with no basis in biology, including terms such as ‘Fulani’ and ‘Italian’, which are, they would argue, as biologically meaningless and socially constructed as terms such as ‘Negroid’ and ‘Caucasoid’.[34]

After all, if a legitimate system of racial classification indeed demands that some Fulani tribesmen be grouped in the same race as Italians while others are grouped in an entirely different racial taxon, then this does indeed seem to suggest that racial classifications are arbitrary and unhelpful.

Moreover, the fact that there is much within-population variation in genes such as those coding for sickle-cell or lactase persistence surely only confirms Richard Lewontin’s famous argument (see below) that there is far more genetic variation within groups than between them.

Sarich and Miele’s other rejoinder to Diamond is, in my view, more apposite. Unfortunately, however, they do not, in my opinion, explain themselves very well.

They argue that:

“[The absence of the sickle-cell gene] is a meaningless association because the character involved (the lack of the sickle-cell allele) is an ancestral human condition. Associating Swedes and Xhosas thus says only that they are both human, not a particularly profound statement” (p165).

What I think Sarich and Miele are getting at here is that, whereas Diamond proposes to classify groups on the basis of a single characteristic, in this case the sickle-cell gene, most biologists favour a so-called cladistic taxonomy, where organisms are grouped together not on the basis of shared characteristics as such at all, but rather on the basis of shared ancestry.

In other words, organisms are grouped together because they are more closely related to one another (i.e. shared a common ancestor more recently) than they are to the organisms placed in a different group.

From this perspective, shared characteristics are relevant only to the extent they are (interpreted as) homologous and hence as evidence of shared ancestry. Traits that evolved independently through convergent or parallel evolution (i.e. in response to analogous selection pressures in separate lineages) are irrelevant.

Yet lactase persistence, one of the traits used by Diamond to classify populations, evolved independently in different populations through gene-culture co-evolution, in concert with the independent development of dairy farming in different parts of the world – an example of convergent evolution that does not indicate relatedness. Indeed, not only did lactase persistence evolve independently in different races, it also seems to have arisen via quite different mutations in different populations (Tishkoff et al 2007).[35]

Diamond’s proposed classification, however, is especially preposterous, even judged on pre-Darwinian terms. Even pre-Darwinian systems of taxonomy, which did indeed classify species (and subspecies) on the basis of shared characteristics rather than shared ancestry, nevertheless did so on the basis of a whole suite of traits that clustered together.

In contrast, Diamond proposes to classify races on the basis of a single trait, apparently chosen arbitrarily – or, more likely, chosen precisely to illustrate the point he is attempting to make.
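
The contrast between classifying on a single, convergently-evolved trait and classifying on a whole suite of traits is easy to demonstrate numerically. In the following toy example (every population and frequency is invented purely for illustration), three hypothetical populations are grouped first by a single ‘lactase’ trait and then by overall similarity across many traits, yielding two quite different pairings:

```python
import numpy as np

# Invented trait frequencies for three hypothetical populations.
# Column 0 is a convergently-evolved trait (say, lactase persistence);
# the remaining columns stand in for ancestry-informative traits.
traits = ["lactase"] + [f"neutral_{i}" for i in range(1, 9)]
pops = {
    "PopA": np.array([0.9, 0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.3]),
    "PopB": np.array([0.1, 0.2, 0.1, 0.2, 0.2, 0.3, 0.2, 0.1, 0.2]),  # close kin of A
    "PopC": np.array([0.9, 0.8, 0.9, 0.8, 0.7, 0.9, 0.8, 0.9, 0.8]),  # distant from both
}

def dist(p, q, cols):
    """Euclidean distance between two populations over the given trait columns."""
    return np.linalg.norm(p[cols] - q[cols])

names = list(pops)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]

# Grouping by the single convergent trait pairs PopA with PopC...
by_lactase = min(pairs, key=lambda ab: dist(pops[ab[0]], pops[ab[1]], [0]))
# ...whereas grouping by the whole suite of traits pairs PopA with PopB.
by_suite = min(pairs, key=lambda ab: dist(pops[ab[0]], pops[ab[1]],
                                          list(range(len(traits)))))
print("closest pair by lactase alone:", by_lactase)
print("closest pair by all traits:   ", by_suite)
```

Classification by the arbitrary single trait unites the two populations that are, across the board, least alike – which is precisely the kind of ‘equally reasonable’ grouping Diamond’s argument trades upon.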

Genetic Differences

In an even more influential and widely-cited paper, Marxist biologist Richard Lewontin claimed that 85% of human genetic variation occurs within populations, with only around 6% accounted for by differences between races (Lewontin 1972).[36]

The most familiar rejoinder to Lewontin’s argument is that of Edwards, who pointed out that, while Lewontin’s figures are correct when one looks at individual genetic loci, if one looks at multiple loci simultaneously, then one can identify an individual’s race with an accuracy that approaches 100% as more and more loci are used (Edwards 2003).
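
Edwards’ point is readily verified by simulation. The sketch below is a toy model with invented allele frequencies, not Edwards’ own analysis: the two hypothetical populations differ only modestly at every locus (frequencies of 0.55 versus 0.45), so any single locus is nearly useless for classification, yet aggregating across loci drives accuracy towards 100%:

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy(n_loci, n_individuals=2000):
    """Fraction of simulated individuals assigned to the correct population."""
    # Two hypothetical populations with modestly different allele
    # frequencies at every locus: per locus, genotype overlap is enormous.
    p1, p2 = 0.55, 0.45
    g1 = rng.binomial(2, p1, size=(n_individuals, n_loci))  # diploid allele counts
    g2 = rng.binomial(2, p2, size=(n_individuals, n_loci))
    # Assign each individual to whichever population's expected allele
    # count its own average is closer to; split exact ties evenly.
    threshold = p1 + p2  # midpoint of the expected counts 2*p1 and 2*p2
    m1, m2 = g1.mean(axis=1), g2.mean(axis=1)
    correct = (m1 > threshold).sum() + (m2 < threshold).sum()
    ties = (m1 == threshold).sum() + (m2 == threshold).sum()
    return (correct + ties / 2) / (2 * n_individuals)

for n_loci in (1, 10, 100, 1000):
    print(f"{n_loci:5d} loci: classification accuracy = {accuracy(n_loci):.3f}")
```

With one locus, accuracy barely exceeds chance; with a thousand, it is effectively perfect – even though, at every single locus, far more variation remains within each population than between them.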

However, Edwards’ paper was only published in 2003, just a year before ‘Race: The Reality of Human Differences’ itself came off the presses, so Sarich and Miele may not have been aware of Edwards’ critique at the time they actually wrote the book.[37]

Perhaps for this reason, then, Sarich and Miele respond rather differently to Lewontin’s arguments.

First, they point out:

“[Lewontin’s] analysis omits a third level of variability–the within-individual one. The point is that we are diploid, getting one set of chromosomes from one parent and a second from the other” (p168-9).

Thus Sarich and Miele conclude:

“The… 85 percent will then split half and half (42.5%) between the intra- and inter-individual within-population comparisons. The increase in variability in between-population comparisons is thus 15 percent against the 42.5 percent that is between individual within-population. Thus, 15/42.5 = 35.3 percent, a much more impressive and, more important, more legitimate value than 15 percent” (p169).
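
For what it is worth, the arithmetic behind this passage can be restated as follows (my own reconstruction of what I take Sarich and Miele to be computing, not their notation). Lewontin’s within-population share is split equally between variation among individuals and variation within (diploid) individuals, and the between-population share is then measured against the between-individual component alone:

$$\underbrace{15\%}_{\text{between populations}} \;+\; \underbrace{42.5\%}_{\text{between individuals within populations}} \;+\; \underbrace{42.5\%}_{\text{within individuals}} \;=\; 100\%$$

so that the between-population figure, expressed relative to the between-individual figure, becomes 15/42.5 ≈ 35% rather than 15%.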

However, this seems to me to be just playing around with numbers in order to confuse and obfuscate.

After all, if, as Lewontin claims, most variation is within-group rather than between-group, then, even if individuals mate endogamously (i.e. with members of the same group as themselves), offspring will show substantial variation between the portions of the genome they inherit from each parent.

But, even if some of the variation is therefore within-individual, this doesn’t change the fact that it is also within-group.

Thus, the claim of Lewontin that 85% of genetic variation is within-group remains valid.

Morphological Differences

Sarich and Miele then make what seems to me to be a more valid and important objection to Lewontin’s figures, or at least to the implication he and others have drawn from them, namely that racial differences are insignificant. Again, however, they do not express themselves very clearly.

Their argument seems to be that, if we are concerned with the extent of physiological and psychological differentiation between races, then it actually makes more sense to look directly at morphological differences, rather than genetic differences.

After all, a large proportion of our DNA may be of the nonfunctional, non-coding or ‘junk’ variety, some of which may have little or no effect on an organism’s phenotype.

Thus, in their chapter ‘Resolving the Primate Tree’, Sarich and Miele themselves claim that:

“Most variation and change at the level of DNA and proteins have no functional consequences” (p121; p126).

They conclude:

“Not only is the amount of between-population genetic variation very small by the standards of what we observe in other species… but also… most variation that does exist has no functional, adaptive significance” (p126).

Thus, humans and chimpanzees may share around 98% of their DNA, but this does not necessarily mean that we are 98% identical to chimpanzees in either our morphology or our psychology and behaviour. The important thing is what the genes in question do, and small numbers of genes can have great effects, while others (e.g. non-coding DNA) may do little or nothing.[38]

Indeed, one theory has it that such otherwise nonfunctional biochemical variation may be retained within a population by negative frequency dependent selection because different variants, especially when recombined in each new generation by sexual reproduction, confer some degree of protection against infectious pathogens.

This is sometimes referred to as ‘rare allele advantage’, in the context of the ‘Red Queen’ theory of host-parasite co-evolutionary arms races.

Thus, evolutionary psychologists John Tooby and Leda Cosmides explain:

“The more alternative alleles exist at more loci—i.e., the more genetic polymorphism there is—the more sexual recombination produces genetically differentiated offspring, thereby complexifying the series of habitats faced by pathogens. Most pathogens will be adapted to proteins and protein combinations that are common in a population, making individuals with rare alleles less susceptible to parasitism, thereby promoting their fitness. If parasitism is a major selection pressure, then such frequency-dependent selection will be extremely widespread across loci, with incremental advantages accruing to each additional polymorphic locus that varies the host phenotype for a pathogen. This process will build up in populations immense reservoirs of genetic diversity coding for biochemical diversity” (Tooby & Cosmides 1990: p33).

Yet, other than conferring some resistance to fast-evolving pathogens, such “immense reservoirs of genetic diversity coding for biochemical diversity” may have little adaptive or functional significance and have little or no effect on other aspects of an organism’s phenotype.

Lewontin’s figures, though true, are therefore potentially misleading. To see why, behavioural geneticist Glayde Whitney suggested that we “might consider the extent to which humans and macaque monkeys share genes and alleles”. On this basis, he reported:

“If the total genetic diversity of humans plus macaques is given an index of 100 percent, more than half of that diversity will be found in a troop of macaques or in the [then quite racially homogenous] population of Belfast. This does not mean Irishmen differ more from their neighbors than they do from macaques — which is what the Lewontin approach slyly implies” (Whitney 1997).

Anthropologist Peter Frost, in an article for Aporia Magazine critiquing Lewontin’s analysis, or at least the conclusions he and others have drawn from it, cites several other examples where:

“Wild animals… show the same pattern of genes varying much more within than between populations, even when the populations are related species and, sometimes, related genera (a taxonomic category that ranks above species and below family)” (Frost 2023).

However, despite the minimal genetic differentiation between races, different human races do differ from one another morphologically to a significant degree. This much is evident simply from looking at the facial morphology, or bodily statures, of people of different races – and indirectly apparent by observing which races predominate in different athletic events at the Olympics.

Thus, Sarich and Miele point out, when one looks at morphological differences, it is clear that, at least for some traits, such as “skin color, hair form, stature, body build”, within-group variation does not always dwarf between-group variation (p167).

On the contrary, Sarich and Miele observe:

“Group differences can be much greater than the individual differences within them; in, for example, hair from Kenya and Japan, or body shape for the Nuer and Inuit” (p218).

Indeed, in respect of some traits, there may be almost no overlap between groups. For example, excepting sufferers of rare, abnormal and pathological conditions like albinism, even the lightest-complexioned Nigerian is still darker in skin colour than is the darkest indigenous Swede.

If humans differ enough genetically to cause the obvious (and not so obvious) morphological differences between races, differences which are equally obviously genetic in origin, then it necessarily follows that they also differ enough genetically to allow for a similar degree of biological variation in psychological traits, such as personality and intelligence.

That human populations are genetically quite similar to one another indicates, Sarich and Miele concede, that the different races separated and became reproductively isolated from one another only quite recently, such that random variation in selectively-neutral DNA has not had sufficient time to accumulate through random mutation and genetic drift.

However, the fact that, within this short period, quite large morphological differences have nevertheless evolved suggests the presence of strong selective pressures selecting for such morphological differentiation.

They cite archaeologist Glynn Isaac as arguing:

“It is the Garden-of-Eden model [i.e. out of Africa theory], not the regional continuity model [i.e. multiregionalism], that makes racial differences more significant functionally… because the amount of time involved in the raciation process is much smaller, but the degree of racial differentiation is the same and, for human morphology, large. The shorter the period of time required to produce a given amount of morphological difference, the more selectively/adaptively/functionally important those differences become” (p212).

Thus, Sarich and Miele conclude:

“So much variation developing in so short a period of time implies, indeed almost requires, functionality; there is no good reason to think that behavior should somehow be exempt from this pattern of functional variability” (p173).

In other words, if different races have been subjected to divergent selection pressures that have led them to diverge morphologically, then these same selection pressures will almost certainly also have led them to psychologically diverge from one another.

Indeed, at least one well-established morphological difference seems to directly imply a corresponding psychological difference – namely, differences in brain size as between races would seem to suggest differences in intelligence, as I have discussed in greater detail both previously and below.

Measuring Morphological Differences

Continuing this theme, Sarich and Miele argue that human racial groups actually differ more from one another morphologically than do many non-human mammals that are regarded as entirely separate species.

Thus, Sarich quotes himself as claiming:

“Racial morphological distances within our species are, on the average, about equal to the distances among species within other genera of mammals. I am not aware of another mammalian species whose constituent races are as strongly marked as they are in ours… except, of course, for dogs” (p170).

I was initially somewhat skeptical of this claim. Certainly, it seems to us that, say, a black African looks very different from an East Asian or a white European. However, this may simply be because, being human, and in close day-to-day contact with humans, we are far more readily attuned to differences between humans than differences between, say, chimpanzees, or wolves, or sheep.[39]

Indeed, there is even evidence that we possess an innate, domain-specific ‘face recognition module’ that evolved to help us to distinguish between different individuals, and which seems to be localized in certain areas of the brain, including the so-called ‘fusiform facial area’, which is located in the fusiform gyrus.

Indeed, as I have already noted in an earlier endnote, a commenter on an earlier version of this book review plausibly suggested that our tendency to group individuals by race could represent a by-product of our facial recognition faculty.

However, the claim that the morphological differences between human races are comparable in magnitude to those between different species of nonhuman organisms is by no means original to Sarich and Miele.

For example, John R Baker makes a similar claim in his excellent book, Race (which I have reviewed here), where he asserts:

“Even typical Nordids and typical Alpinids, both regarded as subraces of a single race (subspecies), the Europid [i.e. Caucasoid], are very much more different from one another in morphological characters—for instance in the shape of the skull—than many species of animals that never interbreed with one another in nature, though their territories overlap” (Race: p97).

Thus, Baker claims:

“Even a trained anatomist would take some time to sort out correctly a mixed collection of the skulls of Asiatic jackals (Canis aureus) and European red foxes (Vulpes vulpes), unless he had made a special study of the osteology of the Canidae; whereas even a little child, without any instruction whatever, could instantly separate the skulls of Eskimids from those of Lappids” (Race: p427).

Indeed, Darwin himself made a not dissimilar claim in The Descent of Man, where he observed:

“If a naturalist, who had never before seen a Negro, Hottentot, Australian, or Mongolian, were to compare them, he would at once perceive that they differed in a multitude of characters, some of slight and some of considerable importance. On enquiry he would find that they were adapted to live under widely different climates, and that they differed somewhat in bodily constitution and mental disposition. If he were then told that hundreds of similar specimens could be brought from the same countries, he would assuredly declare that they were as good species as many to which he had been in the habit of affixing specific names” (The Descent of Man and Selection in Relation to Sex).

However, Sarich and Miele attempt to go one better than both Baker and Darwin – namely, by not merely claiming that human races differ morphologically from one another to a similar or greater extent than many separate species of non-human animal, but also purporting to prove this claim statistically as well.

Thus, relying on “cranial/facial measurements on 29 human populations, 2,500 individuals, 28 measurements… 17 measurements on 347 chimpanzees… and 25 measures on 590 gorillas” (p170), Sarich and Miele’s conclusion is dramatic: reporting the “percent increases in distance going from within-group to between-group comparisons of individuals”, measured in terms of “the percent difference per size corrected measurement (expressed as standard deviation units)”, they find that a greater percentage of the total variation among humans lies between different human groups than is found between some separate species of non-human primate.

Thus, Sarich and Miele somewhat remarkably conclude:

“Racial morphological distances in our species [are] much greater than any seen among chimpanzees or gorillas, or, on the average, some tenfold greater than those between the sexes” (p172-3).
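Sarich and Miele do not set out their procedure in full, but the quantity they describe – the percentage increase in average pairwise distance when moving from within-group to between-group comparisons of individuals, per size-corrected measurement in standard-deviation units – might be sketched roughly as follows. The data here are randomly generated stand-ins, not Howells’s craniometric sample:

    import itertools
    import numpy as np

    def percent_increase_between_vs_within(X, labels):
        # X: (individuals x measurements); labels: group of each individual.
        # Crude size correction: divide each individual's measurements by
        # their own mean, then express each measurement in SD units.
        X = X / X.mean(axis=1, keepdims=True)
        X = (X - X.mean(axis=0)) / X.std(axis=0)
        within, between = [], []
        for i, j in itertools.combinations(range(len(X)), 2):
            d = np.linalg.norm(X[i] - X[j])
            (within if labels[i] == labels[j] else between).append(d)
        return 100 * (np.mean(between) - np.mean(within)) / np.mean(within)

    # Two invented 'populations', three measurements each:
    rng = np.random.default_rng(0)
    a = rng.normal([100.0, 50.0, 30.0], 5.0, size=(40, 3))
    b = rng.normal([110.0, 45.0, 33.0], 5.0, size=(40, 3))
    print(percent_increase_between_vs_within(np.vstack([a, b]), [0] * 40 + [1] * 40))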

Interestingly, and consistent with the general rule that Steve Sailer has termed ‘Rushton’s Rule of Three’, whereby blacks and Asians respectively cluster at opposite ends of a racial spectrum for various traits, Sarich and Miele report:

“The largest differences in Howells’s sample are found when comparing [black sub-Saharan] Africans with either Asians or Asian-derived (Amerindian) populations” (p172).

Thus, for example, measured in this way, the proportion of the total variation that separates East Asians from African blacks is more than twice that separating chimpanzees from bonobos.

This, however, is perhaps a misleading comparison, since chimpanzees and bonobos are known to be morphologically very similar to one another, to such an extent that, although now recognized as separate species, they were, until quite recently, considered as merely different subspecies of a single species.

Another problem with Sarich and Miele’s conclusion is that, as they themselves report, it relies entirely on “cranial/facial measurements”, and it is thus unclear whether the extent of these differences generalizes to other parts of the body.

Yet, despite this limitation, Sarich and Miele report their results as applying to “racial morphological distances” in general, not just facial and cranial differences.

Finally, Sarich and Miele’s analysis in this part of their book is rather technical.

I feel that the more appropriate place to publish such an important and provocative finding would have been a specialist journal in biological anthropology, where it would, of course, have been accompanied by a full methodology section and subjected to full peer review before publication.

Domestic Dog Breeds and Human Races

Sarich and Miele argue that the only mammalian species with greater levels of morphological variation between subspecies than humans are domestic dogs.

Thus, psychologist Daniel Freedman, writing in 1979, claimed:

“A breed of dog is a construct zoologically and genetically equivalent to a race of man” (Human Sociobiology: p144).

Of course, morphologically, dog breeds differ enormously, far more than human races.

However, the logistical problems of a Chihuahua mounting a mastiff notwithstanding, all are thought to be capable of interbreeding with one another, and also with wild wolves, and hence all dog breeds, together with wild wolves, are generally considered by biologists to represent a single species.

Moreover, Sarich and Miele report that genetic differences between dog breeds, and between dogs and wolves, were so slight that, at the time Sarich and Miele were writing, researchers had only just begun to be able to genetically distinguish some dog breeds from others (p185).

Of course, this was written in 2003, and genetic data in the years since then has accumulated at a rapid pace.

Moreover, even then, one suspects that the supposed inability of geneticists to distinguish one dog breed from another reflected, not so much the limited genetic differentiation between breeds, as the fact that, understandably, far fewer resources had been devoted to decoding the canine genome than were devoted to decoding that of humans ourselves.

Thus, today, far more data is available on the genetic differences between breeds and these differences have proven, unsurprisingly given the much greater morphological differences between dog breeds as compared to human races, to be much greater than those between human populations.

For example, as I have discussed above, Marxist-biologist Richard Lewontin famously showed that, for humans, there is far greater genetic variation within races than between races (Lewontin 1972).

It is sometimes claimed that the same is true for dog breeds. For example, self-styled ‘race realist’ and ‘white advocate’, and contemporary America’s leading white nationalist public intellectual (or at least the closest thing contemporary America has to a white nationalist public intellectual), Jared Taylor claims, in a review of Edward Dutton’s Making Sense of Race, that:

“People who deny race point out that there is more genetic variation within members of the same race than between races — but that’s true for dog breeds, and not many people think the difference between a terrier and a pug is all in our minds” (Taylor 2021).

Actually, however, Taylor appears to be mistaken.

Admittedly, some early mitochondrial DNA studies did seemingly support this conclusion. Thus, Coppinger and Schneider reported in 1994 that:

“Greater mtDNA differences appeared within the single breeds of Doberman pinscher or poodle than between dogs and wolves… To keep the results in perspective, it should be pointed out that there is less mtDNA difference between dogs, wolves and coyotes than there is between the various ethnic groups of human beings, which are recognized as belonging to a single species” (Coppinger & Schneider 1994).

However, while this may be true for mitochondrial DNA, it does not appear to generalize to the canine genome as a whole. Thus, in her article ‘Genetics and the Shape of Dogs’, geneticist Elaine Ostrander, an expert on the genetics of domestic dogs, reports:

“Genetic variation between dog breeds is much greater than the variation within breeds. Between-breed variation is estimated at 27.5 percent. By comparison, genetic variation between human populations is only 5.4 percent” (Ostrander 2007).[40]
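Figures like Ostrander’s 27.5 per cent and the 5.4 per cent she cites for humans are, in essence, the share of total variance in allele frequencies that lies between groups – an F_ST-style statistic. For a single two-allele locus, the calculation is simple enough to sketch (the frequencies below are invented for illustration only):

    def between_group_share(freqs):
        # freqs: frequency of one allele in each group (equal group sizes assumed).
        # F_ST-style statistic: (total heterozygosity - mean within-group
        # heterozygosity) / total heterozygosity.
        p_bar = sum(freqs) / len(freqs)
        total = p_bar * (1 - p_bar)
        within = sum(p * (1 - p) for p in freqs) / len(freqs)
        return (total - within) / total

    print(between_group_share([0.20, 0.30]))  # ~0.013: mild differentiation
    print(between_group_share([0.05, 0.95]))  # ~0.81: strong differentiation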

However, the fact that both morphological and genetic differentiation between dog breeds far exceeds that between human races does not necessarily mean that an analogy between dog breeds and human races is entirely misplaced.

All analogies are imperfect, otherwise they would not be analogies, but rather identities (i.e. exactly the same thing).

Indeed, one might argue that dog breeds provide a useful analogy for human races precisely because the differences between dog breeds are so much greater, since this allows us to see the same principles operating but on a much more magnified scale and hence brings them into sharper focus.

Breed and Behaviour

As well as differing morphologically, dog breeds are also thought to differ behaviourally.

Anecdotally, some breeds are said to be affectionate and ‘good with children’, others standoffish, independent, territorial and prone to aggression, either with strangers or with other dogs.

For example, psychologist Daniel Freedman – whose studies of average differences in behaviour among dog breeds, conducted as part of his PhD, and later analogous studies of differences in the behaviour of neonates of different races, are discussed by Sarich and Miele in their book (p203-7) – observed:

“I had worked with different breeds of dogs and I had been struck by how predictable was the behavior of each breed” (Human Sociobiology: p144).

Freedman’s scientifically rigorous studies of breed differences in behaviour confirmed that at least some such differences are indeed real and seem to have an innate basis.

Thus, studying the behaviours of newborn puppies so as to minimize the possibility that environmental factors account for any behavioural differences, just as he later studied differences in the behaviour of human neonates, Freedman reports:

“The breeds already differed in behavior. Little beagles were irrepressibly friendly from the moment they could detect me, whereas Shetland sheepdogs were most sensitive to a loud voice or the slightest punishment; wire-haired terriers were so tough and aggressive, even as clumsy three-week olds, that I had to wear gloves in playing with them; and, finally, basenjis, barkless dogs originating in central Africa, were aloof and independent” (Human Sociobiology: p145).

Similarly, Hans Eysenck describes the results of a study of differences in behaviour between dog breeds raised under different conditions and then left alone in a room with food they had been instructed not to eat. He reports:

“Basenjis, who are natural psychopaths, ate as soon as the trainer had left, regardless of whether they had been brought up in the disciplined or the indulgent manner. Both groups of Shetland sheep dogs, loyal and true to death, refused the food, over the whole period of testing, i.e. eight days! Beagles and fox terriers responded differentially, according to the way they had been brought up; indulged animals were more easily conditioned, and refrained longer from eating. Thus, conditioning has no effect on one group, regardless of upbringing—has a strong effect on another group, regardless of upbringing—and affects two groups differentially, depending on their upbringing” (The IQ Argument: p170).

These differences often reflect the purpose for which the dogs were bred. For example, breeds historically bred for dog fighting (e.g. Staffordshire bull terriers) tend to be aggressive with other dogs, but not necessarily with people; those bred as guard dogs (e.g. mastiffs, Dobermanns) tend to be highly territorial; those bred as companions tend to be sociable and affectionate; while others have been bred to specialize in certain highly specific behaviours at which they excel (e.g. pointers, sheep dogs).

For example, the author of one recent study of behavioural differences among dog breeds interpreted her results thus:

“Inhibitory control may be a valued trait in herding dogs, which are required to inhibit their predatory responses. The Border Collie and Australian Shepherd were among the highest-scoring breeds in the cylinder test, indicating high inhibitory control. In contrast, the Malinois and German Shepherd were some of the lowest-scoring breeds. These breeds are often used in working roles requiring high responsiveness, which is often associated with low inhibitory control and high impulsivity. Human-directed behaviour and socio-cognitive abilities may be highly valued in pet dogs and breeds required to work closely with people, such as herding dogs and retrievers. In line with this, the Kelpie, Golden Retriever, Australian Shepherd, and Border Collie spent the largest proportion of their time on human-directed behaviour during the unsolvable task. In contrast, the ability to work independently may be important for various working dogs, such as detection dogs. In our study, the two breeds which were most likely to be completely independent during the unsolvable task (spending 0% of their time on human-directed behaviour) were the German Shepherd and Malinois” (Juntilla et al 2022).

Indeed, the different behaviours of dog breeds even have statutory recognition, with controversial breed-specific legislation restricting the breeding, sale and import of certain so-called dangerous dog breeds and ordering their registration, neutering and, in some cases, destruction.

Of course, similar legislation restricting the import and breeding, let alone ordering the neutering or destruction, of ‘dangerous human races’ (perhaps defined by reference to differences in crime rates) is currently politically unthinkable.

Therefore, as noted above, breed-specific legislation is the rough canine equivalent of the Nuremberg Laws.

Breed Differences in Intelligence

In addition, just as there are differences between human races in average IQ (see below; see also here, here and especially here), so some studies have suggested that dog breeds differ in average intelligence.

However, there are some difficulties, for these purposes, in measuring, and defining, what constitutes intelligence among domestic dogs.[41]

Since the subject of race differences in intelligence almost always lurks in the background of any discussion of the biology of race, and, since this topic is indeed discussed at some length by Sarich and Miele in a later chapter (and indeed in a later part of this review), it is perhaps worth discussing some of these difficulties and the extent to which they mirror similar controversies regarding how to define and measure human intelligence, especially differences between races.

Thus, research by Stanley Coren, reported in his book, The Intelligence of Dogs, and also widely reported upon in the popular press, purported to rank dog breeds by their intelligence.

However, the research in question, or at least the part reported upon in the media, actually seems to have relied exclusively on measurements of the ability of the different dogs to learn, and obey, new commands from their masters/owners with the minimum of instruction.[42]

Moreover, this ability also seems, in Coren’s own account, to have been assessed on the basis of the anecdotal impressions of dog contest judges, rather than direct quantitative measurement of behaviour.

Thus, the purportedly most intelligent dogs were those able to learn a new command in fewer than five exposures and obey it at least 95 percent of the time, while the purportedly least intelligent were those that required more than 100 repetitions and obeyed only around 30 percent of the time.
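In other words, Coren’s ranking rests on just two numbers per breed – the repetitions needed to learn a new command, and the proportion of commands obeyed. A toy version of the classification (with invented figures, since Coren’s actual data were judges’ impressions rather than direct measurements) might look like this:

    def coren_tier(reps_to_learn, obedience_rate):
        # Thresholds follow the cut-offs described above for Coren's top
        # and bottom tiers; the middle tiers are lumped together here
        # for simplicity.
        if reps_to_learn < 5 and obedience_rate >= 0.95:
            return "brightest"
        if reps_to_learn > 100 and obedience_rate <= 0.30:
            return "lowest"
        return "intermediate"

    # Invented example figures, for illustration only:
    for breed, (reps, rate) in {"border collie": (3, 0.97),
                                "basenji": (120, 0.25),
                                "beagle": (60, 0.55)}.items():
        print(breed, coren_tier(reps, rate))

Note that both inputs necessarily confound comprehension with willingness to comply – which is precisely the objection developed below.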

An ability to obey commands consistently with a minimum of instruction does indeed require a form and degree of social intelligence – namely the capacity to learn and understand the commands in question.

However, such a means of measurement not only captures a single, quite specific type of intelligence; it also measures another aspect of canine psychology that is not obviously related to intelligence at all – namely, obedience and submissiveness (or, conversely, rebelliousness).

This is because complying with commands requires not only the capacity to understand commands, but also the willingness to actually obey them.

Some dogs might conceivably understand the commands of an owner, or at least have the capacity to understand if they put their mind to it, but nevertheless refuse to comply, or even refuse to learn, out of sheer rebelliousness and independent spirit. Most obviously, this might be true of wild wolves which have not been domesticated or even tamed, though it may also be true of dog breeds.[43]

Analogously, when a person engages in a criminal act, we do not generally assume that this is because s/he failed to understand that the conduct complained of was indeed a transgression of the law. Instead, we usually assume that s/he knew that the behaviour complained of was criminal, but, for whatever reason, decided to engage in the behaviour anyway.[44]

Thus, a person who habitually refuses to comply with rules of behaviour set down by those in authority (e.g. school authorities, law enforcement) is more likely to be diagnosed with, say, oppositional defiant disorder or psychopathy than with low intelligence as such. Much the same might be true of some dog breeds, and indeed some individual dogs (and indeed wild or tame wolves).[45]

Sarich and Miele, in their discussion of Daniel Freedman’s research on behavioural differences among breeds, provide a good illustration of these problems. Thus, they describe how one of the tests conducted by Freedman involved measuring how well the different breeds navigated “a series of increasingly difficult mazes”. This would appear to be a form of intelligence test measuring spatial intelligence. However, in fact, they report, perhaps surprisingly:

“The major breed differences were not in the ability to master the mazes (a rough measure of canine IQ) but in what they would do when they were placed in a maze they couldn’t master. The beagles would howl, hoping perhaps that another member of their pack would howl back and lead them to the goal. The inhibited Shelties would simply lie down on the ground and wait. Pugnacious terriers would try to tear down the walls of the maze, but the basenjis saw no reason they had to play by a human’s rules and tried to jump over the walls of the maze” (p202).

Far from demonstrating low intelligence, the behaviour of the terriers, and especially the basenjis, might even be characterized as an impressive form of lateral thinking, inventiveness and creativity – devising a different way to escape the maze than that intended by the experimenter.

However, it more likely reflects the independent and rebellious personality of basenjis, a breed which is, according to Sarich and Miele, more recently domesticated than most other breeds, related to semi-domesticated pariah dogs, and whose members, they report, “dislike taking orders and are born canine scofflaws” (p201-2).

You may also recall that psychologist Hans Eysenck, in a passage quoted in greater length in the preceding section of this review, described this same breed, perhaps only semi-jocularly, as “natural psychopaths” (The IQ Argument: p170).

Consistent with this, Stanley Coren reports that they are the second least trainable breed, behind only the Afghan Hound.

Natural, Artificial and Sexual Selection

Of course, domestic dog breeds are a product, not of natural selection, but rather of artificial selection, i.e. selective breeding by human breeders, often undertaken deliberately to produce strains with different traits, both morphological and behavioural.

This, one might argue, makes dog breeds quite different to human races, since, although many have argued that humans are ourselves, in some sense, a domesticated species, albeit a self-domesticated one (i.e. we have domesticated ourselves, or perhaps one another), nevertheless most traits that differentiate human races seem to be a product of natural selection, in particular adaptation to different geographic regions and their climates.[46]

However, the processes of natural and artificial selection are directly analogous to each other. Indeed, they are so similar that it was the selective breeding of domestic animals by agriculturalists that helped inspire Darwin’s theory of natural selection, and was also used by Darwin to explain and illustrate this theory in The Origin of Species.

Moreover, many eminent biologists have argued that at least some racial differences are the product, not of natural selection (in the narrow sense), but rather of sexual selection, in particular mate choice.

Yet mate choice is arguably even more analogous to artificial selection than is natural selection, since both mate choice and artificial selection involve a deliberate choice by a third party as to which individuals get to breed – namely, in the case of artificial selection, the human breeder and, in the case of mate choice, the prospective mate.

As Sarich and Miele themselves observe:

“Unlike for dog breeds, no one has deliberately exercised that level of selection on humans, unless we exercised it on ourselves, a thought that has led evolutionary thinkers from Charles Darwin to Jared Diamond to attribute human racial variation to a process termed ‘sexual’ rather than ‘natural’ selection” (p236).

Thus, Darwin himself went as far as to claim in The Descent of Man that “as far as we are enabled to judge… none of the differences between the races of man are of any direct or special service to him”, and instead proposed:

“The differences between the races of man, as in colour, hairiness, form of features, etc., are of a kind which might have been expected to come under the influence of sexual selection” (The Descent of Man: p189-90).

Darwin’s claim that none of the physical differences between races have any survival value is now clearly untenable, as anthropologists and biologists have demonstrated that many observed race differences, for example, in skin colour, nose shape, and bodily dimensions, represent, at least in part, climatic adaptations.[47]

However, the view that sexual selection has also played some role in human racial differentiation remains plausible, and has been championed in recent years by scientific polymath and populariser Jared Diamond in chapter six of his book The Third Chimpanzee, which he titles ‘Sexual Selection and the Origin of Human Races’ (The Third Chimpanzee: pp95-105), and especially by anthropologist Peter Frost in a series of papers and blog posts (e.g. Frost 2008).

For example, as emphasized by Frost, differences in hair colour, eye colour and hair texture, having no obvious survival benefits, yet often being associated with perceptions of beauty, might well be attributed, at least in part, to sexual selection (Frost 2006; Frost 2014; Frost 2015).

The same may be true of racial and sexual differentiation in levels of muscularity and in the distribution of body fat, as discussed later in this review.

For example, John R Baker, in his monumental magnum opus, Race (reviewed here), argues that the large protruding buttocks evinced among some San women likely reflect sexual selection (Race: p318).[48]

Meanwhile, both Frost and Diamond argue that even differences in skin colour – although partly reflecting the level of exposure to ultraviolet radiation from the sun in different regions of the globe and at different latitudes, which affects vitamin D synthesis and susceptibility to sunburn and melanoma, and was hence subject to natural selection to some degree – likely also reflect mate choice and sexual selection, given that skin tone does not perfectly correlate with levels of exposure to UV rays in different regions, yet a lighter than average complexion seems to be cross-culturally associated with female beauty (van den Berghe and Frost 1986; Frost 1994; Frost 2014).

Similarly, in his recent book A Troublesome Inheritance, science writer Nicholas Wade, citing a study suggesting that an allele carried by East Asian people is associated with both thicker hair and smaller breasts in mice, suggests that this gene may have spread among East Asians as a consequence of sexual selection, with males preferring females as mates who possess one or both of these traits (A Troublesome Inheritance: p89-90).

Similarly, Wade also proposes that the greater prevalence of dry earwax among Northeast Asians, and, to a lesser degree, among Southeast Asians, Native Americans and Northern Europeans, may reflect sexual selection and mate choice, because this form of earwax is associated with a weaker body odour, which, in colder regions, where people spend more of their time indoors, Wade surmises is likely to be more noticeable, and hence more off-putting, in a sexual partner (A Troublesome Inheritance: p90-91).[49]

Finally, celebrated Italian geneticist Luigi Luca Cavalli-Sforza proposes, in his book Genes, Peoples and Languages, that, although the “fatty folds of skin” around the eyes characteristic of East Asian peoples likely evolved to protect against “the cold Siberian air” and represent “adaptations to the bitter cold of Siberia”, nevertheless, since “these eyes are often considered beautiful”, they “probably diffused by sexual selection from northeastern Asia into Southeast Asia where it is not at all cold” (Genes, Peoples and Languages: p11).

Curiously, in this context, however, Sarich and Miele, save for the passing mention of Darwin and Diamond quoted above, not only make no mention of sexual selection as a possible factor in human racial differentiation, but also make the odd claim in relation to sexual selection that:

“There has been no convincing evidence of it [i.e. sexual selection] yet in humans” (p186).[50]

As noted, this is a rather odd, if not outright biologically ignorant, claim.

It is true that some of the more outlandish claims of evolutionary psychologists for sexual selection – for example, Geoffrey Miller’s intriguing theory that human intelligence evolved through sexual selection – remain unproven, as indeed does the claim that sexual selection played an important role in human racial differentiation.

However, there is surely little doubt that, for example, human body-size dimorphism is a product of sexual selection (more specifically, intrasexual selection), since levels of body-size dimorphism are consistently correlated with levels of polygyny across many mammalian species.

A strong claim can also be made that the permanent breasts that are unique to human females evolved as a product of intersexual selection (see discussion here).

Sexual selection has also surely acted on human psychology, resulting in, among other traits, the greater levels of violent aggression among males.

On the other hand, Sarich and Miele may be on firmer ground when, in a later chapter, while not denying that sexual selection may have played a role in other aspects of human evolution, they nevertheless insist:

“No one has yet provided any hard evidence showing that process [i.e. sexual selection] has produced racial differences in our species” (p236).

However, while this may be true, the idea that sexual selection has played a key role in human racial differentiation certainly remains a plausible hypothesis.

Physical Differences and Athletic Performance

Although they emphasize that morphological differences between human races are greater than those among some separate species of nonhuman animal, and also that such morphological differences provide, for many purposes, a more useful measure of group differences than genetic differences, nevertheless, in the remainder of the chapter on ‘Physical Race Differences’, Sarich and Miele have surprisingly little to say about the physical differences that actually exist between races, or about how and why such differences evolved.

There is no discussion of, for example, Thomson’s nose rule, which seems to explain much of the variation in nose shape among races, nor of Bergmann’s rule and Allen’s rule, which seem to explain much of the variation among humans in body-size and relative bodily proportions.

Instead, Sarich and Miele focus on what is presumably an indirect effect of physiological race differences – namely, differences in athletic performance as between races.

Even this topic is not treated thoroughly. Indeed, the authors talk of “such African dominance as exists in the sporting world” (p182) almost as if this applied to all sports equally.

Yet, just as people of black African descent are conspicuously dominant in certain athletic events (basketball, the 100m sprint), so they are noticeably absent among elite athletes in certain other sports, not least swimming – and, just as the overrepresentation of people of West African descent among elite sprinters, and of East Africans among elite distance runners, has been attributed to biological differences, so has their relative absence among elite swimmers, most often attributed to differences in bone density and fat distribution, each of which affects buoyancy.

Yet, not only does Sarich and Miele’s chapter on ‘Physical Race Differences’ focus almost exclusively on differences in athletic ability, but a large part of the chapter is devoted to differences in performance in one particular sport – namely, the performance of East Africans, especially Kenyans (and especially members of a single tribe, the Kalenjin), in long-distance running.

Yet, even here, their analysis is almost exclusively statistical, demonstrating the improbability that this single tribe, who represent, of course, only a tiny proportion of the world’s population, would achieve such success by chance alone if they did not have some underlying innate biological advantage.
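The logic of such a statistical demonstration is easily illustrated with a simple binomial calculation: if elite placings were distributed at random with respect to ancestry, the chance of a group comprising a tiny fraction of humanity taking a large share of them would be vanishingly small. The round numbers below are my own invention for illustration, not Sarich and Miele’s:

    from math import comb

    def prob_at_least(k, n, p):
        # P(X >= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # A group making up 0.1% of the world's population taking at least
    # 40 of the top 100 placings purely by chance:
    print(prob_at_least(40, 100, 0.001))  # astronomically small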

They say little of the actual physiological factors that make East Africans such as the Kalenjin such great distance runners, nor of the evolutionary factors that selected for these physiological differences.

Others have attributed this advantage to their having evolved to survive at a relatively high altitude, in a mountainous region on the borders of Kenya and Uganda, to which region they are indigenous, as well as their so-called ‘elongate’ body-type, which seems to have evolved as an adaptation to climate.

Amusingly, however, behavioural geneticist Glayde Whitney proposes yet another factor that might explain why the Kalenjin are such excellent runners – namely, according to him, they long had a notorious reputation among their East African neighbours as cattle thieves.

However, unlike cattle thieves in the Old West, they lacked access to horses (which, in sub-Saharan Africa, are afflicted with sleeping sickness spread by the tsetse fly) and, having failed to domesticate any equivalent indigenous African animal such as the zebra, had instead to escape with their plunder on foot. The result, Whitney posits, was strong selection pressure for running ability in order to outrun and escape any pursuers:

“Why are the Kalenjin such exceptional runners? There is some speculation that it may be because the tribe specialized in cattle thievery. Anyone who can run a great distance and get away with the stolen cattle will have enough wealth to meet the high bride price of a good spouse. Because the Kalenjin were polygamous, a really successful cattle thief could afford to buy many wives and make many little runners. This is a good story, anyway, and it might even be true” (Whitney 1999).

The closest Sarich and Miele themselves come to providing a physiological explanation for black sporting success is a single sentence where they write:

“Body-fat levels seem to be at a minimum among African populations; the levels do not increase with age in them, and Africans in training can apparently achieve lower body-fat levels more readily than is the case for Europeans and Asians” (p182).

This claim seems anecdotally plausible, at least in respect of young African-American males, many of whom appear able to retain lean, muscular physiques, despite seemingly subsisting on a diet composed primarily of fried chicken with a regrettable lack of healthy alternatives such as watermelon.

However, as was widely discussed in relation to the higher mortality rates experienced among black people (and among fat people) during the recent coronavirus pandemic, there is also some evidence of higher rates of obesity among African-Americans.

Actually, however, this problem seems to be restricted to black women, who evince much higher rates of obesity than do women of most other races in the USA.[51]

African-American males, on the other hand, seem to have similar rates of obesity to white American males.

Thus, according to data cited by the US Department of Health and Human Services and Office of Minority Health, more than 80% of African-American women are overweight or obese, as compared to only 65% of white women. Among males, however, the pattern is reversed, with a somewhat higher proportion of white men than black men being overweight or obese (75% of white men versus only about 71% of black men) (US Department of Health and Human Services and Office of Minority Health 2020).

This pattern is replicated in the UK, where black women have higher rates of obesity than white women, but, again, black men have rather lower rates of obesity than white men, with East Asians consistently having the lowest rates of obesity among both sexes.

That similar patterns are observed in both the UK and the USA suggests that the differences reflect an innate race difference – or rather an innate race difference in the magnitude of an innate sex difference, namely in body fat levels, which are higher among women than among men in all racial groups.[52]

This may perhaps be a product of sexual selection and mate choice.

Thus, if black men do indeed, as popular stereotype suggests, like big butts, then black women may well have evolved to have bigger butts through sexual selection.[53]

At least in the US, there is indeed some evidence that mating preferences with regard to preferred body-types differ between black and white men, with black men preferring somewhat heavier body-types (Allison et al 1993; Thompson et al 1996; Freedman et al 2004), though other research suggests little or no significant difference in preferences for body-weight as between black and white men (Singh 1994; Freedman et al 2006).[54]

Sexual selection or, more specifically, mate choice may similarly explain the evolution of fatty breasts among women of all races and the evolution of fatty protruding buttocks among Khoisan women of Southern Africa (which I have written about previously and alluded to above).

Conversely, if the greater fat levels observed among black women are a product of sexual selection and, in particular, of mate choice, then perhaps the greater levels of muscularity and athleticism apparently observed among black men may also be a product of intrasexual selection or male-male competition (e.g. fighting).

Thus, it is possible that levels of intrasexual selection operating on males may have been elevated in sub-Saharan Africa because of the greater prevalence of polygyny in this region, since polygyny intensifies reproductive competition by increasing the reproductive stakes (see Sanderson, Race and Evolution: p92-3; Draper 1989; Frost 2008).

At any rate, other physical differences between the races besides differences in body fat levels also surely play a role in explaining the success of African-descended athletes in many sports.

For example, African populations tend to have somewhat longer legs and arms relative to their torsos than do Europeans and Asians. This reflects Allen’s rule of thermal regulation, whereby organisms that evolved in colder climates evolve relatively shorter limbs and other appendages, both to minimize the ratio of surface area to volume, and hence the proportion of the body directly exposed to the elements, and also because it is the extremities that are especially vulnerable to frostbite.

Thus, blacks, having evolved in the tropics, have relatively longer legs and arms than do Europeans and Asians.[55]
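The geometric logic of Allen’s rule is easy to verify directly: for a fixed volume of tissue, a long, slender limb exposes more surface area to the elements than a short, stout one. A minimal sketch, modelling a limb as a simple cylinder (the figures are arbitrary units, for illustration only):

    import math

    def limb_surface_area(volume, length):
        # Lateral surface area of a cylinder of the given volume and length;
        # the radius follows from volume = pi * r^2 * length.
        r = math.sqrt(volume / (math.pi * length))
        return 2 * math.pi * r * length

    # The same volume of tissue, in two builds:
    print(limb_surface_area(1000.0, length=40.0))  # long, slender: ~709
    print(limb_surface_area(1000.0, length=20.0))  # short, stout:  ~501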

Greater relative leg length, sometimes measured by the ratio of sitting to standing height, is surely an advantage in running events, which might partially explain black success in track events and indeed in many other sports that also involve running. It may also explain African-American performance in sports that involve jumping (e.g. basketball, the high jump and long jump), since leg length also confers an advantage here.

Meanwhile, greater relative arm length, sometimes measured by armspan to height ratio, is likely an advantage in sports such as basketball, boxing and racquet sports, since it confers greater reach.

Yet, at least some of the factors that benefit East Africans in distance events are opposite to those that favour West Africans in sprinting (e.g. the relative proportions of fast- versus slow-twitch muscle fibres; a mesomorphic versus an ectomorphic body-build). This suggests that it is, at best, a simplification to talk about a generalized African advantage in running, let alone in athletics as a whole.

Neither do the authors discuss the apparent anomaly whereby racially-mixed African-Americans and West Indians outcompete indigenous West Africans, who, being unmixed, surely possess whatever qualities benefit African-Americans in even greater abundance than do their transatlantic cousins.[56]

Sarich and Miele also advance another explanation for the superior performance of blacks in running events, which strikes me as a very odd argument and not at all persuasive. Here, they argue that, since anatomically modern humans first evolved in Africa:

“Our basic adaptations are African. Given that, it would seem that we would have had to make adaptive compromises, such as to cold weather, when populating other areas of the world, thus taking the edge off our ‘African-ness’” (p182).

As a result of our distinctive adaptations having first evolved in Africa, Sarich and Miele argue:

“Africans are better than the rest of us at some of those things that most make us human, and they are better because their separate African histories have given them, in effect, better genes for recently developed tests of some basic human adaptations. The rest of us (or, more fairly, our ancestors) have had to compromise some of those African specializations in adapting to more temperate climates and more varied environments. Contemporary Africans, through their ancestors, are advantaged in not having had to make such adaptations, and their bodies, along with their resulting performances, show it” (p183).

Primary among these “basic adaptations”, “African specializations” and “things that most make us human” is, they argue, bipedalism (i.e. walking on two legs). This, they seem to be arguing, explains African dominance in running events, which represent, if you like, the ultimate measure of bipedal ability.

This argument strikes me as completely unpersuasive, if not wholly nonsensical.

After all, another of our “basic adaptations”, even more integral to what “makes us human” than bipedalism is surely our high levels of intelligence and large brains (see discussion below) as compared to other primates.

Yet Africans notoriously do not appear to have “better genes” for this trait, at least as measured in yet another of those “recently developed tests of some basic human adaptations”, namely IQ tests.

Athletic and Cognitive Ability

This, of course, leads us directly to another race difference that is the subject of even greater controversy – namely race differences in intellectual ability.

The real reason we are reluctant to discuss athletic superiority, Sarich and Miele contend, is that it is perceived as also raising the spectre of intellectual inferiority.

In short, if races differ sufficiently genetically to cause differences in athletic performance, then it is surely possible they also differ sufficiently genetically to cause differences in academic performance and performance on IQ tests.

However, American high school movie stereotypes of ‘dumb jocks’ and ‘brainy nerds’ notwithstanding, there is no necessary inverse correlation between intellectual ability and ability at sports.

Indeed, Sarich and Miele argue that athletic ability is actually positively correlated with intellectual ability.

“I can see no necessary, or even likely, negative correlation between the physical and the mental. On the contrary, the data show an obvious, strong, positive correlation among class, physical condition, and participation in regular exercise in the United States” (p182).

Thus, they report:

“Professional football teams have, in recent years, been known to use the results of IQ tests as one indicator of potential in rookies. And a monumental study of intellectually gifted California schoolchildren begun by Lewis Terman in the 1920s that followed them through their lives showed clearly that they were also more gifted physically than the average” (p183).[57]

It is likely true that intelligence and athletic ability are positively correlated – if only because many of the same things that cause physical disabilities (e.g. physical trauma, developmental disorders) also often cause mental disability. Down syndrome, for example, causes both mental and physical disability; and, if you are crippled in a car crash, you may also suffer brain damage.

Admittedly, there may be some degree of trade-off between performance in different spheres, if only because the more time one devotes to playing sports, then, all else being equal, the less time one has left to devote to one’s studies, and, in both sports and academics, performance usually improves with practice.

On the other hand, however, it may be that doing regular exercise and working hard at one’s studies are positively correlated because both reflect the same underlying personality trait of conscientiousness.

On this view, the real trade-off may be, not so much between spending time, on the one hand, playing sports and exercising and, on the other, studying, as it is between, on the one hand, engaging in any or all of these productive endeavours and, on the other hand, engaging in wasteful and unproductive endeavours such as watching television, playing computer games and shooting up heroin.

As for the American high school movie stereotype of the ‘dumb jock’, this, I suspect, may partly reflect the peculiar American institution of athletic scholarships, whereby athletically gifted students are admitted to elite universities despite being academically underqualified.

On the other hand, I suspect that the ‘brainy nerd’ stereotype may have something to do with a mild subclinical presentation of the symptoms of high-functioning autism.

This is not to say that ‘nerdishness’ and autism are the same thing, but rather that ‘nerdishness’ represents a milder subclinical presentation of autism symptoms not sufficient to justify a full-blown diagnosis of autism. Autistic traits are, after all, a matter of degree.

Thus, it is notable that the symptoms of autism include many traits that are also popularly associated with the nerd stereotype, such as social awkwardness and obsessive ‘nerdy’ special interests, and perhaps even that other popular stereotype of ‘nerds’, namely having to wear glasses.

More relevant for our purposes, high functioning autism is also associated with poor physical coordination and motor skills, which might explain the stereotype of ‘nerds’ performing poorly at sports.

On the other hand, however, contrary to popular stereotype, autism is not associated with above average intelligence.[58]

In fact, although autistic people span the whole range of intelligence, from highly gifted to intellectually disabled, autism is, overall, said to be associated with somewhat lower average intelligence than is observed in the general population.

This is consistent with the fact that autism is indeed, contrary to the claims of some neurodiversity advocates, a developmental disorder and disability.

However, I suspect autism may be underdiagnosed among those of higher intelligence, precisely because they are able to use their higher general intelligence to compensate for and hence ‘mask’ their social impairments such that they go undetected and often undiagnosed.

Moreover, autism has a complex and interesting relationship with intelligence, and autism seems to be associated with special abilities in specific areas (Crespi 2016).

There is also some evidence, albeit mixed, that autistic people score relatively higher in performance IQ and spatio-visual ability than in verbal IQ. Given there is some evidence of a link between spatio-visual intelligence and mathematical ability, this might plausibly explain the stereotype of nerds being especially proficient in mathematics (i.e. ‘maths nerds’).

Overall, then, there is little evidence of, or any theoretical reason to anticipate, any trade-off or inverse correlation between intellectual and athletic ability. On the contrary, there is probably some positive correlation between intelligence and athletic ability, if only because the same factors that cause intellectual disabilities – physical trauma, brain damage, birth defects, chromosomal abnormalities – also often cause physical disabilities.

On the other hand, however, Philippe Rushton, in the ‘Preface to the Third Edition’ of his book, Race Evolution and Behavior (which I have reviewed here), contends that some of the same physiological factors that cause blacks to excel in some athletic events are also indirectly associated with other racial differences that perhaps portray blacks in a less flattering light.

Thus, Rushton reports that the reason blacks tend, on average, to be faster runners is because:

“Blacks have narrower hips [than whites and East Asians] which gives them a more efficient stride” (Race Evolution and Behavior: p11).

But, he continues, the reason why blacks are able to have narrower hips, and hence more efficient stride, is that they give birth to smaller-brained, and hence smaller headed, infants:

“The reason why Whites and East Asians have wider hips than Blacks, and so make poorer runners, is because they give birth to larger brained babies” (Race Evolution and Behavior: p12).[59]

Yet, as discussed below, brain size is itself correlated with intelligence, both as between species, and as between individual humans.

Similarly, Rushton argues:

“Blacks have from 3 to 19% more of the sex hormone testosterone than Whites or East Asians. These testosterone differences translate into more explosive energy, which gives Blacks the edge in sports like boxing, basketball, football, and sprinting” (Race Evolution and Behavior: p11).

However, higher levels of testosterone also have a downside, not least since:

“The hormones that give Blacks an edge at sports makes them more masculine in general — physically active in school, and more likely to get into trouble” (Race Evolution and Behavior: p12).

In other words, if higher levels of testosterone give blacks an advantage in some sports, they perhaps also result in the much higher levels of violent crime and conduct disorders reported among people of black African descent (see Ellis 2017).[60]

Intelligence

Whereas their chapter on ‘Race and Physical Differences’ focussed mostly on differences in athletic ability, Sarich and Miele’s chapter on ‘Race and Behavior’, focuses, perhaps inevitably, almost exclusively on race differences in intelligence.

However, though it certainly has behavioural correlates, intelligence is not, strictly speaking, an element of behaviour as such. The chapter would therefore arguably be more accurately titled ‘Race and Psychology’ – or indeed ‘Race and Intelligence’, since this is the psychological difference upon which they focus almost to the exclusion of all others.[61]

Moreover Sarich and Miele do not even provide a general, let alone comprehensive, review of all the evidence on the subject of race differences in intelligence, their causes and consequences. Instead, they focus on two very specific issues and controversies:

  1. Race differences in brain size; and
  2. The average IQ of blacks in sub-Saharan Africa.

Yet, despite the title of the chapter, neither of these reflects a difference in behaviour as such.

Indeed, race differences in brain-size are actually a physical difference – albeit a physical difference presumed, not unreasonably, to be associated with a psychological difference – and therefore should, strictly speaking, have gone in their previous chapter on ‘Race and Physical Differences’.

Brain Size

Brain-size and its relation to both intelligence and race is a topic I have written about previously. As between individuals, there exists a well-established correlation between brain-size and IQ (Pietschnig et al 2015; Rushton and Ankney 2009).

Nicholas Mackintosh, himself by no means a doctrinaire hereditarian and a critic of hereditarian theories with respect to race differences in intelligence, nevertheless reports in the second edition of his undergraduate textbook on IQ and Human Intelligence, published in 2011:

“Although the overall correlation between brain size and intelligence is not very high, there can be no doubt of its reliability” (IQ and Human Intelligence: p132).

Indeed, Sarich and Miele go further. In a critique of the work of infamous scientific charlatan Stephen Jay Gould, to whom they attribute the view that “brain size and intellectual performance have nothing to do with one another”, they retort:

“Those large brains of ours could not have evolved unless having large brains increased fitness through what those large brains made possible – that is, through minds that could do more” (p213).

This is especially so given the metabolic expense of brain tissue and other costs of increased brain size, such that, to have evolved during the course of human evolution, our large brains must have conferred some compensating advantage.

Thus, dismissing Gould as a “behavioral creationist”, given his apparent belief that the general principles of natural selection somehow do not apply to behaviour, or at least not to human behaviour, the authors forthrightly conclude:

“The evolutionary perspective demands that there be a relationship – in the form of a positive correlation – between brain size and intelligence… Indeed, it seems to me that a demonstration of no correlation between brain size and cognitive performance would be about the best possible refutation of the fact of human evolution” (p214).

Here, the authors go a little too far. Although, given the metabolic expense of brain tissue and other costs associated with increased brain size, larger brains must have conferred some selective advantage to offset these costs, it need not necessarily have been an advantage in intelligence, certainly not in general intelligence. Instead, increased brain-size could, at least in theory, have evolved in relation to some specific ability, or cognitive or neural process, other than intellectual ability.

Yet, despite this forthright claim, Sarich and Miele then go on to observe that one study conducted by one of Sarich’s graduate students, in collaboration with Sarich himself, actually found no association between brain size and IQ as between siblings from the same family (Schoenemann et al 2000).

This, Sarich and Miele explain, suggests that the relationship between brain-size and IQ is not causal, but rather that some factor that differs as between families is responsible for causing both larger brains and higher IQs. However, they explain, “the obvious candidates” (e.g. socioeconomic status, nutrition) do not have nearly a large enough effect to account for this (p222).

However, they fail to note that other studies have found a correlation between brain size and IQ scores even within families, suggesting that brain size does indeed cause higher intelligence (e.g. Jensen & Johnson 1994; Lee et al 2019).

Indeed, according to Rushton and Ankney (2009: 695), even prior to the Lee et al study, four studies had already established a correlation between brain-size and IQ within families.

Of course, Sarich and Miele can hardly be faulted for failing to cite Lee et al (2019), since that study had not been published at the time their book was written. However, other such studies (e.g. Jensen & Johnson 1994) had already been published by the time they authored their book.

Brain-size is also thought to correlate with intelligence as between species, at least after controlling for body-size (see encephalization quotient).
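For illustration, one classic formulation of the encephalization quotient, due to Jerison, compares a species’ actual brain mass with the brain mass expected for a typical mammal of the same body mass:

\[ \mathrm{EQ} = \frac{E}{0.12\,P^{2/3}} \]

where E is brain mass and P is body mass, both in grams. An EQ above one indicates a brain larger than expected for body-size; on this formulation, humans score somewhere around seven, higher than any other species.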

However, comparing the intelligence of different species obviously represents a difficult endeavour.

Quite apart from the practical challenges (e.g. building a maze for a mouse to navigate in the laboratory is simple enough; building a comparable maze for elephants presents more difficulties), there is the fact that, whereas most variation in human intelligence, both between individuals and between groups, is captured by a single g factor, different species no doubt have many different specialist abilities.[62]

For example, migratory birds surely have special abilities in respect of navigation. However, these are not necessarily reflective of their overall general intelligence.

In other words, if you think a ‘culture-fair’ IQ test is an impossibility, then try designing a ‘species-fair’ test!

If brain-size correlates with intelligence both as between species and as between individual humans, it seems probable that race differences in brain-size also reflect differences in intelligence.

However, larger brains do not automatically, or directly, confer, or cause, higher levels of intelligence.

For example, most dwarves have IQs similar to those of non-dwarves, despite having smaller brains in absolute terms but, save in the case of ‘proportionate dwarves’, larger brains relative to their body-size. Neither is macrocephaly (i.e. abnormally and pathologically large head-size) associated with exceptional intelligence.

The reason that disproportionate dwarves and people afflicted with macrocephaly do not have especially high intelligence, despite larger brains relative to their body size, is probably because these are abnormal pathological conditions. The increased brain-size did not evolve through natural selection, but rather represents some kind of malfunction in development.

Therefore, whereas increases in brain size that evolved through natural selection must have conferred some advantage to offset the metabolic expense of brain tissue and other costs associated with increased brain size, these sorts of pathological increases in brain-size need not have any compensating advantages, since they did not evolve through natural selection at all, and the increased relative brain size may indeed be wasted.

Likewise, although sex differences in brain-size are greater than those between races, at least before controlling for body-size, sex differences in IQ are either small or non-existent.[63]

Meanwhile, Neanderthals had larger brains than modern humans, despite a shorter, albeit more robust, stockier and more muscular, frame and a somewhat heavier overall body weight.

As with so much discussion of the topic of race differences in intelligence, Sarich and Miele focus almost exclusively on the topic of differences between whites and blacks, the authors reporting:

“With respect to the difference between American whites and blacks, the one good brain-size study that has been done indicates a difference between them of about 0.8 SD [i.e. 0.80 of a standard deviation]; this could correspond to an IQ difference of about 5 points, or about one-third of the actual differential found [between whites and blacks in America]” (p217).
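It is perhaps worth making the arithmetic here explicit. On the standard psychometric convention of an IQ standard deviation of 15 points, and assuming, purely for illustration, a brain-size–IQ correlation of about 0.4 (a figure commonly reported in this literature, though not one given by Sarich and Miele in this passage), a 0.8 standard deviation difference in brain-size would predict an IQ difference of:

\[ 0.8 \times 0.4 \times 15 = 4.8 \approx 5 \text{ points} \]

or roughly one-third of the one standard deviation (i.e. fifteen-point) black-white IQ gap conventionally reported in the American literature.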

The remainder of the differential presumably relates to internal differences in brain-structure as between the races in question, whether these differences are environmental or innate in origin.

Yet Sarich and Miele say little, if anything, to my recollection, about the brain-size of other groups, for example Australian Aboriginals or East Asians.

Neither, most tellingly, do they discuss the brain-size of the race of mankind gifted with the largest average brain size – namely, Eskimos.

Yet the latter are not renowned for their contributions to science, the arts or civilization.

Moreover, according to Richard Lynn, their average IQ is only 91, as compared to an average IQ of 100 for white Europeans – high for a people who, until recently, subsisted largely as hunter-gatherers (other such groups – Australian Aborigines, San Bushmen, Native Americans – have low average IQs), but well below whites, East Asians and Ashkenazi Jews, each of whom possess, on average, smaller brains than Eskimos (see Race Differences in Intelligence: reviewed here).

In general, a clear pattern emerges in respect of the relative brain-size of different human populations: the greater the latitude of the region in which a given population evolved, the greater their brain-size. Hence the large brains of Eskimos (Beals et al 1984).

This, then, seems to be a climatic adaptation. Some racialists, like Richard Lynn and Philippe Rushton, have argued that it reflects the greater cognitive demands of surviving in a cold climate (e.g. building shelter, making fire and clothing, and obtaining sufficient food in regions where plant foods are scarce throughout the winter).

In contrast, to the extent that race and population differences in average brain size are even acknowledged by mainstream anthropologists, they are usually attributed to Bergmann’s rule of temperature regulation. Thus, the authors of one recent undergraduate textbook on biological anthropology contend:

“Larger and relatively broader skulls lose less heat and are adaptive in cold climates; small and relatively narrower skulls lose more heat and are adaptive in hot climates” (Human Biological Variation: p285).[64]

As noted, this seems to be an extrapolation of Bergmann’s rule of temperature regulation. Put simply, in a cold climate, it is adaptive to minimize the proportion of the body that is directly exposed to the elements, or, in other words, to minimize the ratio of surface-area-to-volume.

As the authors of another undergraduate level textbook on physical anthropology explain:

“The closer a structure approaches a spherical shape, the lower will be the surface-to-volume ratio. The reverse is true as elongation occurs—a greater surface area to volume is formed, which results in more surface to dissipate heat generated within a given volume. Since up to 80 percent of our body heat may be lost through our heads on cold days, one can appreciate the significance of shape” (Human Variation: Races, Types and Ethnic Groups, 5th Ed: p188).
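The underlying geometry is simple enough to state explicitly. For a sphere of radius r, the ratio of surface area to volume is:

\[ \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r} \]

so the ratio falls as the sphere grows larger. Bigger, rounder heads (and bodies) therefore expose proportionally less surface through which to lose heat, which is simply Bergmann’s rule restated in geometric terms.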

However, it seems implausible that an increase in metabolically expensive brain tissue would have evolved solely for regulating temperature, when the same result could have been achieved at less metabolic cost by modifying only the external shape of the skull.

Moreover, perhaps tellingly, it seems that brain size correlates more strongly with latitude than do other measures of body-size. Thus, in their review of the data on population differences in cranial capacity, Beals et al report:

“Braincase volume is more highly correlated with climate than any of the summative measures of body size. This suggests that cranial morphology may be more influenced by the thermodynamic environment than is the body as a whole” (Beals et al 1984: p305).

Given that, contrary to popular opinion, we do not in fact lose an especially large proportion of our body heat from our heads – certainly not the eighty percent claimed by Molnar in the anthropology textbook quoted above – this is not easy to explain in terms of temperature regulation alone.

At any rate, even if differences in brain size did indeed evolve solely for the purposes of temperature regulation, then it is still surely possible that differences in average intelligence evolved as a byproduct of such increases in brain-size.

Measured IQs in Sub-Saharan Africa

With regard to the second controversial topic upon which Sarich and Miele focus their discussion in their chapter on ‘Race and Behavior’, namely that of the average IQ in sub-Saharan Africa, the authors write:

“Perhaps the most enigmatic and controversial results in the IQ realm pertain to sub-Saharan Africans and their descendants around the world. The most puzzling single finding is the apparent mean IQ of the former of about 70” (p225).

This figure applies, it ought to be emphasized, only to black Africans still resident within sub-Saharan Africa. Blacks resident in western economies (except Israel, oddly), whether due to racial admixture or environmental factors, or a combination of the two, generally score much higher, though still substantially below whites and Asians, with average IQs of about 85, compared, of course, to a white average of 100 (see discussion here).

The figure seems to come originally from the work of Richard Lynn on national IQs (reviewed here, for discussion of black IQs in particular: see here, here and here), and has inevitably provoked much criticism and controversy.[65]

While the precise figure has been questioned, it is nevertheless agreed that the average IQ of blacks in sub-Saharan Africa is indeed very low, and considerably lower than that of blacks resident in western economies, unsurprisingly given the much higher living standards of the latter.[66]

For their part, Sarich and Miele seem to accept Lynn’s conclusion, albeit somewhat ambiguously. Thus, they conclude:

“One can perhaps accept this [figure] as a well-documented fact” (p225).

Yet including both the word “perhaps” and the phrase “well-documented” in a single sentence and in respect of the same ostensible “fact” strikes me as evidence of evasive fence-sitting.

An IQ of below 70 is, in Western countries, regarded as indicative of, though not conclusive evidence for, mental retardation, since mental disability is not, in practice, diagnosed by IQ alone.[67]

However, Sarich and Miele report:

“Interacting with [Africans] belies any thought that one is dealing with an IQ 70 people” (p226).[68]

Thus, Sarich and Miele point out that, unlike black Africans:

“Whites with 70 IQ are obviously substantially handicapped over and above their IQ scores” (p225).

In this context, an important distinction must be recognised between, on the one hand, what celebrated educational psychologist Arthur Jensen calls “biologically normal mental retardation” (i.e. individuals who are simply at the tail-end of the normal distribution), and, on the other, victims of conditions such as chromosomal abnormalities like Down Syndrome or of brain damage, who tend to be impaired in other ways, both physical and psychological, besides intelligence (Straight Talk About Mental Tests: p9).

Thus, as he explains in his more recent and technical book, The g Factor: The Science of Mental Ability:

“There are two distinguishable types of mental retardation, usually referred to as ‘endogenous’ and ‘exogenous’ or, more commonly, as ‘familial’ and ‘organic’… In familial retardation there are no detectable causes of retardation other than the normal polygenic and microenvironmental sources of IQ variation that account for IQ differences throughout the entire range of IQ… Organic retardation, on the other hand, comprises over 350 identified etiologies, including specific chromosomal and genetic anomalies and environmental prenatal, perinatal, and postnatal brain damage due to disease or trauma that affects brain development. Nearly all of these conditions, when severe enough to cause mental retardation, also have other, more general, neurological and physical manifestations of varying degree… The IQ of organically retarded children is scarcely correlated with the IQ of their first-order relatives, and they typically stand out as deviant in other ways as well” (The g Factor: p368-9).

Clearly, given that the entire normal distribution of IQ among blacks is shifted downwards, a proportionally greater number of blacks with IQs below any given threshold will simply be at the tail-end of the normal distribution for their race rather than suffering from, say, chromosomal abnormalities, as compared to whites or East Asians with the same low IQs.
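The point can be illustrated with a back-of-the-envelope calculation, assuming, purely for illustration, normally distributed IQs with means of 85 and 100 respectively and a common standard deviation of 15 (conventional figures in this literature, not figures taken from Sarich and Miele’s book):

    # Back-of-the-envelope sketch under assumed figures: group means of 85
    # and 100, common SD of 15; conventional values, not from the book.
    from math import erf, sqrt

    def share_below(threshold: float, mean: float, sd: float = 15.0) -> float:
        """Fraction of a normal distribution falling below `threshold`."""
        z = (threshold - mean) / sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

    for mean in (85, 100):
        print(f"mean {mean}: {share_below(70.0, mean):.1%} below IQ 70")

    # Prints:
    #   mean 85: 15.9% below IQ 70
    #   mean 100: 2.3% below IQ 70

A one standard deviation shift in the mean thus produces roughly a sevenfold difference in the proportion falling below the threshold before any ‘organic’ cases are counted at all, broadly consistent with the ‘ten times as many’ figure reported by Mackintosh below.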

Thus, as Sarich and Miele themselves observe:

“Given the nature of the bell curve for intelligence and the difference in group means, there are proportionately fewer whites with IQs below 75, but most of these are the result of chromosomal or single-gene problems and are recognizable as such by their appearance as much as by their behavior” (p230).

This, then, is why low-IQ blacks appear relatively more competent and less stereotypically ‘retarded’ than whites or East Asians with comparably low IQs, since the latter are more likely to have deficits in other areas, both physical and psychological.

Thus, leading intelligence researcher Nicholas Mackintosh reports that low-IQ blacks perform much better than whites of similarly low IQ in respect of so-called adaptive behaviours – i.e. the ability to cope with day-to-day life (e.g. feeding, dressing, cleaning and interacting with others in an apparently ‘normal’ manner).

Indeed, Mackintosh reports that, according to one sociological study first published in 1973:

“If IQ alone was used as a criterion of disability, ten times as many blacks as whites would have been classified as disabled; if adaptive behaviour measures were added to IQ, this difference completely vanished” (IQ and Human Intelligence: p356-7).

This is indeed among the reasons that IQ alone is no longer deemed sufficient ground, in and of itself, for diagnosing a person as suffering from a mental disability.

Similarly, Jensen himself reports:

“In social and outdoor play activities… black children with IQ below seventy seldom appeared as other than quite normal youngsters—energetic, sociable, active, motorically well coordinated, and generally indistinguishable from their age-mates in regular classes. But… many of the white children with IQ below seventy… appeared less competent in social interactions with their classmates and were motorically clumsy or awkward, or walked with a flatfooted gait” (The g Factor: p367).[69]

Indeed, in terms of physical abilities, some black people with what are, at least by white western standards, very low IQs can even be talented athletes. A case in point is celebrated world heavyweight boxing champion Muhammad Ali, who scored so low on the IQ test then used by the armed services to screen recruits that he was initially rejected as unfit for military service.[70]

In contrast, I am unaware of any successful white or indeed Asian athletes with comparably low IQs.

In short, according to this view, most sub-Saharan Africans with IQs of 70 or below are not really mentally handicapped at all. On the contrary, they are within the normal range for the subspecies to which they belong.

Indeed, to adopt an admittedly provocative analogy or reductio ad absurdum, one might as well say that the average chimpanzee is mentally handicapped simply because chimpanzees are much less intelligent than the average human.

Sarich and Miele adopt another, less provocative analogy, suggesting that, instead of comparing sub-Saharan Africans with mentally handicapped Westerners, we would do better to compare them to Western eleven-year-old children, since 70 is also the average score for children of around this age (p229-30).

Thus, they cite Lynn himself as observing:

“Since the average white 12-year-old can do all manner of things, including driving cars and even fixing them, estimates of African IQ should not be taken to mean half the population is mentally retarded” (p230).

However, this analogy is, I suspect, just as misleading.

After all, just as people suffering from brain damage or chromosomal abnormalities such as Down Syndrome tend to differ from normal people in various ways besides intelligence, so children differ from adults in many ways other than intelligence.

Thus, even highly intelligent children often lack emotional maturity and worldly knowledge.[71]

Khoisan Intelligence

Interestingly, however, the authors suggest that one morphologically very distinct subgroup of sub-Saharan Africans – namely, San Bushmen, recognised as a separate race (Capoid, as opposed to Congoid, in Carleton Coon’s terminology and taxonomy) by many early twentieth-century anthropologists – may be an exception when it comes to sub-Saharan African IQs.

Thus, citing anecdotal evidence of a single Bushman who proved himself very technically adept and innovative in repairing a car motor, the authors quote population geneticist Henry Harpending, who has done fieldwork in Africa, as observing:

“All of us have the impression that Bushmen are really quick and clever and are quite different from their neighbours” (p227).

They also quote Harpending as anticipating:

“There will soon be real data available about the relative performance of Bushmen, Hottentot, and Bantu kids – or more likely, they will suppress it” (p227).

Some two decades later, the only data I am aware of is that reported by Richard Lynn.

Relying on just two very limited studies of Khoisan intelligence, Lynn nevertheless does not hesitate to estimate Bushmen’s average IQ at just 54 – the lowest that he reports for any ethnic group anywhere in the world (Race Differences in Intelligence: p76).

However, we should be wary of accepting these conclusions prematurely. Not only does Lynn rely on only two studies of Khoisan intelligence, but both these studies were very limited, neither remotely resembling a full modern IQ test.

Agriculture, Foraging and Intelligence

As to why higher intelligence might have been selected for among San Bushmen than among neighbouring tribes of black Bantu, the authors consider the possibility that there was “lessened selection for intelligence (or at least cleverness) with the coming of agriculture, versus hunting-gathering”, since, whereas the Bantu are agriculturalists, the San still subsist by hunting and gathering (p227).

On this view, hunter-gatherers must employ intelligence to track and capture prey and otherwise procure food, whereas farming, save for the occasional invention of a new agricultural technique, is little more than tedious, repetitious and mindless drudgery.

I am reminded of Jared Diamond’s provocative claim, in his celebrated book, Guns, Germs and Steel, that “in mental ability New Guineans are probably genetically superior to Westerners”, since the former must survive on their wits, avoid being murdered and procure prey to survive, whereas in densely populated agricultural and industrial societies most mortality comes from disease, which tends to strike randomly (Guns, Germs and Steel: p20-1).

Yet, however intuitively plausible this theory might appear – especially, perhaps, to those of us who have never in our lives either hunted or farmed, certainly not in the manner of the Bantu or San – it is not supported by the evidence.

According to the data collated by Richard Lynn in his book, Race Differences in Intelligence (reviewed here), albeit data that is quite limited, both New Guineans and San Bushmen have very low average IQs, lower even than those of other sub-Saharan Africans.[72]

Thus, they again quote Henry Harpending as concluding:

“Almost any hypothesis about all this can be falsified with one sentence. For example:

  1. Hunting-gathering selects for cleverness. But then why do Australian Aborigines do so badly in school and on tests?
  2. Dense labor-intensive agriculture selects for cleverness, explaining the high IQ scores in the Far East and in South India. But then why is there not a high-IQ pocket in the Nile Valley?

And so on. I don’t have any viable theory about it all.”[73]

Indeed, if we rely on Lynn’s data in his book, Race Differences in Intelligence (which I have reviewed here), then it would seem that groups that have, until recently, subsisted primarily through a hunter-gatherer lifestyle tend to have low IQs.

Thus, Lynn attributes exceptionally low average IQs not only to San Bushmen, but also to African Pygmies and Australian Aboriginals, and, while his data for the Bushmen and Pygmies is very limited, his data on Australian Aboriginals from the Australian school system is actually surprisingly abundant, revealing an average IQ of just 62.

Interestingly, other groups who had already partly, but not wholly, transitioned to agriculture by the time of European contact, such as Pacific Islanders and Native Americans, tend to score rather higher, each with average IQs of around 85, rather higher indeed than the average IQs of black Bantu agriculturalists in Africa.

Indeed, even cold-adapted Eskimos, also until recently hunter-gatherers, but with the largest brain-size of any human population, score only around 90 in average IQ according to Lynn.

Interestingly, one study that I am aware of did find evidence that a genetic variant associated with intelligence, executive function and working memory was more prevalent among populations that had transitioned to agriculture than among hunter-gatherers (Piffer 2013).

‘Race Bombs’?

In their final chapter, ‘Learning to Live With Race’, Sarich and Miele turn to the vexed subject of the social and political implications of what they have reported and concluded regarding the biology of race and of race differences in previous chapters.

One interesting if somewhat sensationalist subject that they discuss is the prospect of what they call “ethnically targeted weapons” or “race bombs”. These are:

“The ultimate in biological weapons… ethnically targeted weapons – biological weapons that selectively attack members of a certain race or races but, like the Death Angel in the Book of Exodus, ignore members of the attacker’s race” (p250).

This might sound more like dystopian science fiction than science, but Sarich and Miele cite evidence that some regimes have indeed attempted to develop such weapons.

Perhaps predictably, the regimes in question are the ‘usual suspects’, those perennial pariah states of liberal democratic western modernity – namely, apartheid-era South Africa and Israel. Both were or are, nevertheless, very much western states, which is, of course, the very reason for their pariah status: it is for this reason that they are held to higher standards than are other African and Middle Eastern polities.

The evidence the authors cite goes beyond mere sensationalist rumours and media reports.

Thus, they report that one scientist who had been employed in a chemical and biological warfare plant in South Africa testified before the post-apartheid Truth and Reconciliation Commission that he had indeed led a research team tasked with developing “a ‘pigmentation weapon’ that would ‘target only black people’ and that could be spread through beer, maize, or even vaccinations” (p252).

Meanwhile, according to media reports and government leaks cited by the authors, Israel has taken up the challenge of developing a ‘race bomb’, building on the research begun by its former ally South Africa (p252).

Unfortunately, however (or perhaps fortunately, especially for the Palestinians), Sarich and Miele report that, as compared to developing a ‘race bomb’ for use in apartheid-era South Africa:

“Developing a weapon that would target Arabs but spare Jews would be much harder because the two groups are exceedingly alike genetically” (p253).[74]

Indeed, this problem is not restricted to the Middle East. On the contrary, Sarich and Miele report, listing almost every ethnic conflict that had recently been in the headlines at the time they authored their book:

“The same would hold for the Serbs, Croats, and Bosnians in the former Yugoslavia; the Irish Catholics and Ulster Protestants in Northern Ireland; North and South Korea; and Pakistan and India” (p254).

This is, of course, because warring ethnic groups tend to be neighbours, often with competing claims to the same territory; yet, for the same reason, they also often share common origins, as well as the history of mating, miscegenation and intermarriage that invariably occurs wherever different groups come into contact with one another, howsoever discouraged and illicit such relationships may be.

Thus, paradoxically, warring ethnic groups are almost always genetically quite closely related to one another.

The only exceptions to this general rule are in places where there have been recent large-scale movements of populations from distant regions of the globe, but where the various populations have yet to interbreed with one another for a sufficient period to dilute their genetic differences (e.g. blacks and whites in the USA or South Africa).

Thus, Sarich and Miele identify only Sudan in Northeast Africa as, at the time they were writing, a “likely prospect for this risk” (namely, the development of a ‘race bomb’), since war was then raging between what they describe as the “racially mixed Islamic north and the black African Christian and traditional-religion south” (p255).

Yet, here, even assuming that the genetic differences between the two then-warring groups were indeed sufficiently substantial as to make such a weapon a theoretical possibility, it is highly doubtful that either side would have the technological wherewithal and expertise to develop it.

After all, Israel is a wealthy country with a highly developed high-tech economy and an advanced armaments industry, and is a world leader in scientific and technological research, not to mention the recipient of billions of dollars in military aid annually from the USA alone.

South Africa was also regarded as a developed economy during the heyday of apartheid, when this research was supposedly conducted, though it is today usually classed as ‘developing’.[75]

Sudan, on the other hand, is a technologically backward Third World economy. The prospect of either side in the conflict developing a novel form of biological weapon is therefore exceedingly remote.

A similar objection applies to the authors’ suggestion that, even in multiracial America, supposedly comparatively “immune to attack from race bombs from an outside source” on account of its “large racially diverse population”, there may still be a degree of threat from “terrorist groups within our country” (p255).

Thus, it is true that there may well be terrorist groups in the USA that do indeed harbour genocidal intent. Indeed, black nationalist groups like the Nation of Islam and black Israelites have engaged in openly genocidal mass murders of white Americans, while white nationalist groups, though politically very marginal, have also been linked to terror attacks and racially motivated murders, albeit isolated, sporadic and on a very small scale, at least in recent decades.

However, it is extremely unlikely that these marginal extremists, whose membership is largely drawn from the most uneducated and deprived strata of society, would have the technical knowledge and resources to build a ‘race bomb’ of the sort envisaged by Sarich and Miele, especially since such weapons remain only a theoretical possibility and are not known to have been successfully developed anywhere in the world, even in South Africa or Israel.

At any rate, even among relatively genetically distinct and unmixed populations, any ‘race bomb’ would, Sarich and Miele rightly report, inevitably lack “pinpoint accuracy” given the only very minimal genetic differentiation observed among human races, a key point that they discussed at length earlier in their book (p253).

Therefore, Sarich and Miele conclude:

“[Only] extremists crazy enough to view large numbers of dead among their own nation, race or ethnic group as ‘acceptable losses’ in some unholy holy war to save their own group would risk employing such a device” (p253-4).

Unfortunately, some “extremists” are indeed just that “crazy” and extreme, and they include not only terrorist groups, but sometimes governments as well.

Indeed, every major war in recent history has, almost by definition, involved the main combatant regimes being all too willing to accept “large numbers of dead among their own nation, race, or ethnic group as ‘acceptable losses’” – otherwise, of course, they would be unlikely to qualify as ‘major’ wars.

Thus, Sarich and Miele conclude:

“Even if race bombs do not have the pinpoint accuracy desired, they have the potential to do great harm to people of all races and ethnic groups” (p253).

Political Implications?

Aside from their somewhat sensationalist discussion of the prospect of ‘race bombs’, Sarich and Miele, in their final chapter, also discuss rather more realistic scenarios of how an understanding (or misunderstanding) of the nature and biology of race differences might affect the future of race relations in America, the west and beyond.

In particular, they identify three possible future ‘scenarios’, namely:

  1. Meritocracy;
  2. Affirmative Action, Race Norming and Quotas; and
  3. Resegregation.

A fourth possibility that they do not discuss is freedom of association, as championed by libertarians.

Under this principle, which largely prevailed in the USA prior to the 1964 Civil Rights Act (and in the UK prior to the 1968 Race Relations Act), any private individual or corporation (but not the government) would be free to discriminate against any person or group he or she wished on any grounds whatsoever, howsoever racist or downright irrational.

Arguably, such a system would, in practice, result in something very close to meritocracy. Any employer that discriminated irrationally against a certain demographic would be outcompeted, and hence driven out of business, by competing employers that instead chose the best candidates for the job – or that even preferentially employed members of the disfavoured group precisely because, with some employers refusing to hire them, such workers would be willing to work for lower wages, thereby cutting costs and enabling these employers to undercut their more prejudiced competitors.

In practice, however, some degree of discrimination would likely remain, especially in the service industry, not least because, not just employers, but consumers themselves might discriminate against service providers of certain demographics.[76]

The authors, for their part, deplore the effects of affirmative action in higher education.

Relying on Sarich’s own direct personal experience as a professor at the University of California at Berkeley, where affirmative action was openly practiced from 1984 until 1996, when it was, at least in theory,[77] discontinued following an amendment to the California state constitution prohibiting the practice in public education, government employment and contracting, they report that it resulted in:

“An Apartheid-like situation – two student bodies separated by race/ethnicity and performance who wound up, in the main, in different courses, pursued different majors, and had minimal social interactions but maximum resentment” (p245).

Thus, they conclude:

“It is, frankly, difficult to imagine policies that could have been more deliberately crafted or better calculated to exacerbate racial and ethnic tensions, discourage individual performance among all groups, and contribute to the decay of a magnificent educational institution” (p245).

The tone adopted here suggests that the authors also very much disapprove of the third possible scenario that they discuss, namely resegregation.

However, they acknowledge that this process is already occurring in modern America, and seem pessimistic regarding the chances of halting or reversing it:

“Despite or perhaps because of government-imposed quotas, society becomes increasingly polarized along racial lines… America increasingly resegregates itself. This trend can already be seen in housing, enrollment in private schools, racial composition of public schools, and political affiliation” (p246).

On the other hand, their own preference seems to be very much for what they call ‘meritocracy’.[78]

After all, they report:

“Society… cannot level up – only down – and any such leveling is necessarily at the expense of individual freedom and, ultimately, the total level of accomplishment” (p246).

However, they acknowledge that a return to meritocracy, or at least the abolition of race preferences, would not be without its problems, not least of which is the inevitable degree of resentment on the part of those groups which perceive themselves as losing out in competition with other, better performing groups.

Thus, they conclude:

“When we assess group representations with respect to the high-visibility pluses (e.g., high-paying jobs) and minuses (e.g., criminality) in any society, it is virtually guaranteed that they are not going to be equal – and that the differences will not be trivial” (p246).

On the other hand, race relations were not especially benign even in ‘affirmative action’-era America – or what we might aptly term ‘post-post-racial’ America, when the utopian promises of the early Obama era went up in flames, along with much of America’s urban landscape, in the ‘mostly peaceful’ BLM riots, which claimed at least nineteen lives and caused property damage estimated in the billions of dollars in 2020 alone.

Could things really get any worse were racial preferences abolished altogether? Is the urban ghetto black underclass really likely to riot because fewer upper-middle-class blacks are given places at Harvard that they did not really deserve?

In mitigation of any resentments that arise as a consequence of disparities in achievement between groups, Sarich and Miele envisage that, in the future:

“Increasing societal complexity, by definition, means increasing the number of groups in that society to which a given individual can belong. This process tends to mitigate exclusive group identification and the associated resentment toward other groups” (p242).

In other words, Sarich and Miele seem to be saying that, instead of identifying only with their race or ethnicity, individuals might come to identify with other aspects of their identity, in respect of which their ‘own’ group would presumably perform rather better in competition with other groups.[79]

What aspects of their identity they have in mind, they do not say.

The problem with this is that, while individuals do indeed evince an in-group preference even in respect of quite trivial (or indeed wholly imaginary) differences, both historically and cross-culturally in the world today, ethnic identity has always been an especially important part of people’s identity, probably for basic biological reasons, rooted as it is in a perception of shared kinship.

In contrast, other aspects of a person’s identity (e.g. their occupation, which football team they support, their sex) tend to carry rather less emotional weight.[80]

In my view, a better approach to mitigating the resentments associated with the different average performance of different groups is instead to emphasize performance in different spheres of attainment.

After all, if it is indeed, as Sarich and Miele contend in the passage quoted above, “virtually guaranteed” that different groups have different levels of achievement in different activities, it is also “virtually guaranteed” that no group will perform either well or poorly at all these different endeavours.

Thus, blacks may indeed, on average, perform relatively poorly in academic and intellectual pursuits, at least as compared to whites and Asians. However, blacks seemingly perform much better in other spheres, not least in popular music and, as discussed above, in many athletic events.

Indeed, as discussed by blogger and journalist Steve Sailer in his fascinating essay for National Review, Great Black Hopes, African Americans actually consistently outperform whites in any number of spheres (Sailer 1996).

As amply demonstrated by Herrnstein and Murray in The Bell Curve (reviewed here), intellectual ability, as measured by IQ, indeed seems to be of particular importance in determining socioeconomic status and income in modern economically developed technologically advanced societies, such as the USA, and, in this respect, blacks perform relatively poorly.

However, popular entertainers and elite athletes, while not necessarily possessing high IQs, nevertheless enjoy enormous social and cultural prestige in modern western society, far beyond that enjoyed by lawyers, doctors, or even leading scientists, playwrights, artists and authors.

More children grow up wanting to be professional footballers or pop stars than grow up wanting to be college professors or research scientists, and, whereas almost everyone, however estranged from popular culture (as I myself am), could nevertheless name any number of famous pop stars, actors and athletes, many of them black, the vast majority of leading intellectuals and scientists are all but unknown to the general public.

Indeed, even those working in other ostensibly high-IQ fields, like law and medicine, and perhaps science and academia too, are much more likely to follow sports, and watch popular movies and TV than they are to, say, recreationally read scientific journals or even popular science books and magazines.

In other words, although it is the only example the authors give in the passage quoted above, “high-paying jobs” are far from the only example of “high-visibility pluses” in which different ethnic groups perform differently, and nor are they the most “high-visibility” of such “pluses”.

Indeed, the sort of “high-paying jobs” that Sarich and Miele likely have in mind are not even the only type of “high-paying jobs”, though they may be the most numerous such jobs, since elite athletes and entertainers, in addition to enjoying enormously high social and cultural prestige, also tend to be very well-paid.

In short, the idea that intellectual ability is the sole, or even the most important, determinant of social worth and prestige is an affectation largely restricted to those who, like Sarich and Miele, and also many of their most vocal critics like Gould, happen to work in science, academia and other spheres where intellectual ability is indeed at a premium.

Most women, in my experience, would rather be thought beautiful (or at least pretty) than intelligent; most men would rather be considered athletic, tough, strong, charismatic and manly than they would a ‘brainy nerd’ – and, when it comes to being considered tough, athletic, manly and charismatic, black males arguably perform rather better than do whites or Asians!

Mating, Miscegenation, Race Riots and Rape

Finally, it is perhaps worth noting that Sarich and Miele also discuss, and, perhaps surprisingly, caution against, another widely touted supposed panacea to racial problems, namely mass miscegenation and intermarriage.

On this view, all racial animosities will disappear in just a few generations if we all just literally fornicate them out of existence by indiscriminately reproducing with one another and hence ensuring all future generations are of mixed race and hence indistinguishable from one another.

If this were the case then, in the distant future, race problems would not exist simply because distinguishable races would not exist, and there would only be one race – the human race – and we would all presumably live happily ever after in a benign and quite literally ‘post-racial’ utopia.

In other words, racial conflict would disappear in the future because the claim of the racial egalitarians and race deniers – namely, that there are no human races, but rather only one race, the human race – the very claim that Sarich and Miele have devoted their entire book to rejecting – would ultimately come to be true.

Of course, one might question whether this outcome, even if achievable, would indeed be desirable, not least since it would presumably result in the loss, or at least dilution, of the many unique traits and abilities of the different races, including those that Sarich and Miele have discussed in previous chapters.

At any rate, given the human tendency towards assortative mating, especially with respect to traits such as race and ethnicity, any such post-racial alleged utopia would certainly be long in coming. A more likely medium-term outcome would be something akin to a pigmentocracy of the sort endemic throughout much of Latin America, where race categories are indeed more fluid and continuous, but racial differences are certainly still apparent, and still very much associated with income and status, and race problems arguably not noticeably ameliorated.

Yet Sarich and Miele themselves raise a different, and perhaps less obvious, objection to racial miscegenation as a potential cure-all and panacea for racial animosity and conflict.

Far from being the panacea to end racial animosity and conflict, Sarich and Miele contend that, at least in the short-term, miscegenation may actually exacerbate racial conflict:

“Paradoxically, intermarriage, particularly of females of the majority group with males of a minority group, is the factor most likely to cause some extremist terrorist group to feel the need to launch such an attack” (p255).

Thus, they observe that:

“All around the world, downwardly mobile males who perceive themselves as being deprived of wealth, status, and especially females by up-and-coming members of a different race are ticking time bombs” (p248).

Indeed, it is not just intermarriage that ignites racial animosity. Other forms of interracial sexual contact may be even more likely to provoke a violent response, especially where it is alleged, often falsely, that the sexual contact in question was coercive.

Thus, in the USA, allegations of interracial rape seem to have been the most frequent precursor to full-blown race riots. Early twentieth-century riots in Springfield, Illinois in 1908, in Omaha, Nebraska in 1919, in Tulsa, Oklahoma in 1921 and in Rosewood, Florida in 1923 all seem to have been ignited by rumours or allegations that a white woman had been the victim of rape at the hands of a black man.

Meanwhile, Britain’s first major modern race riot, the 1958 Notting Hill riot, began with a public argument between an interracial couple, when white passers-by joined in on the side of the white woman against her black Jamaican husband (and pimp) before then turning on them both.

More recently, the 2005 Birmingham riot, which, in a dramatic reflection of the demographic transformation of Britain, did not involve white people at all, was ignited by the allegation that a black girl had been gang-raped by South Asian males.

Meanwhile, in dramatic proof that even ‘civilized’ white western Anglo-Celts (or at least semi-civilized Scousers and Aussies) are still not above taking to the streets when they perceive their womenfolk (and their reproductive interests) as under threat, both the 2005 Cronulla riots in Sydney, Australia, and the 2023 attack on a four-star hotel housing refugees in Kirkby, Merseyside were ignited by allegations that Middle Eastern men had been sexually harassing, or at least propositioning, local white girls.

Likewise, in Britain and beyond, the spectre of ‘Muslim grooming gangs’ sexually exploiting and pimping underage white girls in cities throughout the North of England has ignited anti-Muslim sentiment seemingly to a far greater degree than has the ongoing wave of terrorist attacks in the same country, in which multiple people have been killed.

Likewise, the spectre of interracial rape also loomed large in the justifications offered on behalf of the reconstruction-era Ku Klux Klan for their various atrocities, which were supposedly motivated by the perceived need to protect the ostensible virtue and honour of white women in the face of black predation.

More recently, in 2015, Dylann Roof allegedly shouted “You rape our women and you’re taking over our country” before opening fire on the predominantly black congregation at a church in South Carolina, killing nine people.

Why then is the spectre of interracial sexual contact, especially rape, so likely to provoke racist attacks?

For Sarich and Miele, the answer is obvious:

“Viewed from the racial solidarist perspective, intermarriage is an act of race war. Every ovum that is impregnated by the sperm of a member of a different race is one less of that precious commodity to be impregnated by a member of its own race and thereby ensure its survival” (p256).

This so-called “racial solidarist perspective” also represents, of course, a crudely group-selectionist understanding of male reproductive competition – but one that, though biologically questionable at best, is, in simplified form, all but pervasive among racialists.

What applies to intermarriage surely applies to an even greater degree to other forms of miscegenation, such as casual sex and rape, where the father does not take responsibility for raising any mixed-race offspring that result, this being left instead in the hands of the mother’s own ethnic community.

Thus, as sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here):

“It is no accident that the most explosive aspect of interethnic relations is sexual contact across ethnic (or racial) lines” (The Ethnic Phenomenon: p75).

Competition over reproductive access to fertile females is, after all, Darwinian conflict in its most direct and primordial form.

One is thus reminded of the claim of ‘Robert’, a character in Platform, a novel by the controversial but celebrated contemporary French author Michel Houellebecq, who asserts that:

“What is really at stake in racial struggles… is neither economic nor cultural, it is brutal and biological: It is competition for the cunts of young women” (Platform: p82). 

_____________________

Endnotes

[1] Of course, even if race differences were restricted to “a few highly visible features” (e.g. skin colour, nose shape, body size), it may still be possible for forensic scientists to identify the race of a subject from his DNA. They would simply have to look at those portions of the genome that code for these “few highly visible features”. However, there would then be no correlation with other segments of the genome, and genetic differences between races would be restricted to the few genes coding for these “few highly visible features”.
In fact, however, there is no reason to believe that races differ to a greater degree in externally visible traits (skin colour, nose shape, hair texture, stature etc.) than they do in any other traits, be they physiological or indeed psychological. It is just the externally visible traits with which we are most familiar and which are most difficult to dismiss as illusory, or explain away as purely cultural in origin, because we see them before us everyday whenever we are confronted with a person of a different race. In contrast, other traits are less obvious and apparent, and hence easier for race deniers to deny, or, in the case of behavioural differences, dismiss as purely cultural in origin.

[2] Here, the popular notion that serial killers are almost invariably white males was probably a factor in why the police were initially searching for a white suspect in this case. This stereotype was likely also a factor in the delay in apprehending another serial killer, the so-called ‘DC sniper’, whose crimes occurred around the same time, and who was also profiled as likely being a white man.
In fact, however, unlike many other stereotypes regarding race differences in crime rates, this particular stereotype is wholly false. While it is, of course, true that serial killers are disproportionately male, they are not disproportionately white. On the contrary, in the USA, blacks are actually overrepresented by a factor of two among convicted serial killers, as they are also overrepresented among perpetrators of other forms of violent crime (Walsh 2005).
Implicated in both cases were inaccurate offender profiles, which, among other errors, had labelled the likely offender as a white male. Yet psychological profiling of this type is largely, if not wholly, a pseudoscience.
Thus, one meta-analysis found that criminal profilers often did not perform better, or performed only marginally better, at predicting the characteristics of offenders than did control groups composed of non-expert laypeople (Snook et al 2007).
As Steve Sailer has pointed out, offender profiling is, ironically, most unreliable where it is also most fashionable – psychological profiles of serial killers and the like, which regularly feature in movies, TV and crime literature – but very unfashionable where it is most reliable – e.g. a young black male hanging around a certain area is very likely up to no good (Sailer 2019).
The latter, of course, involves so-called racial profiling, which is very politically unfashionable, though it also represents a much more efficient and effective use of police resources than ignoring factors such as race, age and sex. Of course, it also involves, if you like, ‘age profiling’ and ‘sex profiling’ as well, but these are much less controversial, though they rely on the exact same sort of statistical generalizations, which are indeed accurate at the aggregate statistical level, though often unfair on individuals to whom the generalizations do not apply.

[3] The one-drop rule seems to have originated as a means of maintaining the racial purity of the white race. Thus, people of only slight African ancestry were classed as black (or, in those days, as ‘Negro’) precisely in order to prevent them from ‘passing’ and thereby infiltrating and adulterating the white gene pool, with interracial marriage, cohabitation and sexual relations not only socially anathema, but also often explicitly prohibited by law.
Despite this white racialist origin, today the one-drop rule continues to operate in North America. It seems to be primarily maintained by the support of two interest groups, namely, first, mixed-race Americans, especially those of middle-class background, who want to benefit from discriminatory affirmative action programmes in college admissions and employment; and, second, self-styled ‘anti-racists’, who want to maintain the largest possible coalition of non-whites against the hated and resented white oppressor group.
Of course, some white racists may also still support the ‘one-drop rule’, albeit for very different reasons, and there are endless debates on some white nationalist websites as to who precisely qualifies as ‘white’ (e.g. Armenians, Southern Italians, people from the Balkans and Caucasus, but certainly not Jews). However, white racists are, today, of marginal political importance, save as bogeymen and folk devils, and hence have minimal influence on mainstream conceptions of race.

[4] An even more problematic term is the currently fashionable but detestable term ‘people of colour’, which (like its synonymous but now politically-incorrect precursor, ‘coloured people’) manages to arbitrarily group together every race except white Europeans – an obviously highly Eurocentric conception of racial identity, but one which ironically remains popular with leftists because of its perceived usefulness in fomenting a coalition of all non-white races against the demonized white oppressor group.
The term also actually makes very little sense, save in this social, political and historical context. After all, in reality, white people are just as much ‘people of colour’ as people of other races. They are just a different colour, and indeed, since many hair and eye colours are largely, if not wholly, restricted to people of white European descent, whites arguably have a stronger claim to being ‘people of colour’ than do people of most other races.

[5] Famously, and rather bizarrely, race in South Africa was said to be determined, at least on a practical day-to-day basis, by the so-called pencil test, whereby a pencil was placed in a person’s hair: if it fell to the ground, they were deemed white, whereas if it remained in their hair, held by the kinky hair characteristic of sub-Saharan Africans, they were classed as black or coloured.

[6] Defining race under the Nuremberg Laws was especially problematic, since Jewish people, unlike, say, black Africans, are not obviously phenotypically distinguishable from other white Europeans, at least not in every case. Thus, the Nuremberg laws relied on paper evidence of ancestry rather than mere physical appearance, and distinguished degrees of ancestry, with mischlings of the first and second degrees having differing circumscribed political rights.

[7] Racial identity in the American South during the Jim Crow era, like in America as a whole today, was determined by the so-called one-drop rule. However, the incorporation of other ethnicities into this uniquely American biracial system was potentially problematic. Thus, in the famous case of US v Bhagat Singh Thind, Bhagat Singh Thind, an Indian Sikh, argued that, being both Caucasian, according to the anthropological classification of the time, and, as a North Indian of high-caste origin, Aryan too, he ought to be eligible for naturalization as an American citizen under the overtly racially discriminatory naturalization laws then in force. He was unsuccessful. Similarly, in Ozawa v United States, a person of Japanese ancestry was deemed not to be white under the same law.
Although I am not aware of any caselaw on the topic, presumably people of Middle Eastern ancestry, or partially of Middle Eastern ancestry, or North African ancestry, would have qualified as ‘white’. For example, I am not aware of any Jewish people, surely the largest such group in America at the time (albeit, in the vast majority of cases, of mixed European and Middle Eastern ancestry), being denied naturalization as citizens.
Indeed, today, such groups are still classed as ‘white’ in the US census, much to their apparent chagrin, but a new MENA category is scheduled to be added to the US census in 2030. This new category has been added at the behest of MENA people themselves, aghast at having had to identify as white in earlier censuses, and strangely all too ready to abandon their ostensible ‘white privilege’.
This earlier categorization of Middle-Eastern and North African people as white suggests a rather more inclusive definition of ‘white’ than is applied today, with more and more groups rushing to repudiate their whiteness, possibly in order to qualify as an oppressed group and hence benefit from affirmative action and other forms of racial preference, and certainly in order to avoid the stigma of whiteness. White privilege, it seems, is not all it’s cracked up to be.

[8] One of the main criticisms of the Dangerous Dogs Act 1991, rushed through Parliament in the UK amid a media-led moral panic over canine attacks on children, was the difficulty of distinguishing between breeds, or of drawing the line between one breed and another. Obviously, similar problems emerge in determining the race of humans.
Indeed, the problems may even be greater, since the morphological differences (and also seemingly the genetic differences: see above) between human races are much smaller in magnitude than those between some dog breeds.
On the other hand, however, the problems may be even greater for identifying dog breeds, because, except for a few pedigreed purebreds, most domestic dogs are mixed-breed ‘mongrels’ to some degree. In contrast, excepting a few recently formed hybrid populations (such as African-Americans and Cape Coloureds), and clinal populations at the boundaries of the continental races (such as populations from the Horn of Africa), most people around the world are of monoracial ancestry, largely because, until recent migrations facilitated by advances in transport technology (ships, aeroplanes etc.), people of different races rarely came into contact with one another, and, where they did, interracial relationships often tended to be stigmatized, if not strictly prohibited (though this certainly did not stop them happening entirely).
In addition, whereas human races were formed deep in prehistory, most dog breeds (excepting a few so-called ‘basal breeds’) seem to be of surprisingly recent origin.

[9] For example, when asked to identify the parent of a child from a range of pictures, children match the pictured children with a parent of the same race, rather than those of the same body-size/body-type or wearing similar clothing. Similarly, when asked to match pictures of children with the pictures of the adults whom they will grow up to become, children again match the pictures by race, not clothing or body-build (Hirschfeld 1996).

[10] In the notes for the previous chapter, they do, as I have already discussed, cite the work of Lawrence Hirschfeld as authority for the claim that even young children recognize the hereditary and immutable nature of race differences. It may be that Sarich and Miele have his studies in mind when they write of evidence for “a species-wide module in the human brain that predisposes us to sort the members of our species into groups based on appearance”.
However, as I understand it, Hirschfeld doesn’t actually argue that his postulated group classification necessarily sorts individuals into groups “based on appearance [emphasis added]” as such. Rather, he sees it as a module designed to classify people into ‘human kinds’, but not necessarily by race. It could also, as I understand it, apply to kinship groups and ethnicities.
Somewhat analogously, anthropologist Francisco Gil-White argues that we have a tendency to group individuals into different ethnicities as a by-product of a postulated ‘species-recognition module’. In other words, we mistakenly classify members of different ethnicities as members of different species (i.e. what some social scientists have referred to as pseudo-speciation) because different ethnicities resemble different species in so far as, just as species breed true, so membership of a given ethnicity is passed down in families, and, just as members of different species cannot interbreed, so individuals are generally encouraged to mate endogamously, i.e., within their own group (Gil-White 2001).
Although Gil-White’s argument is applied to ethnic groups in general, it is perhaps especially applicable to racial groups, since the latter have a further feature in common with different species, namely individuals of different races actually look different in terms of inherited physiological characters (e.g. skin colour, facial morphology, hair texture, stature), as, of course, do different species.
Races are indeed ‘incipient species’, and, until as recently as the early twentieth century, biologists and anthropologists seriously debated the question as to whether the different human races did indeed constitute different species.
For example, Darwin himself gave serious and respectful consideration to this matter in his chapter ‘On the Races of Men’ in The Descent of Man before ultimately concluding that the different races were better described as subspecies.
More recently, John R Baker also gave a fascinating and balanced account of the evidence bearing on this question in his excellent book Race, which I have reviewed here (see this section of my review in particular).

[11] On the other hand, in his paper, ‘An integrated evolutionary perspective on ethnicity’, controversial evolutionary psychologist Kevin Macdonald disagrees with this conclusion, citing personal communication from geneticist and anthropologist Henry Harpending for the argument that:

Long distance migrations have easily occurred on foot and over several generations, bringing people who look different for genetic reasons into contact with each other. Examples include the Bantu in South Africa living close to the Khoisans, or the pygmies living close to non-pygmies. The various groups in Rwanda and Burundi look quite different and came into contact with each other on foot. Harpending notes that it is ‘very likely’ that such encounters between peoples who look different for genetic reasons have been common for the last 40,000 years of human history; the view that humans were mostly sessile and living at a static carrying capacity is contradicted by history and by archaeology. Harpending points instead to ‘starbursts of population expansion.’ For example, the Inuits settled in the arctic and exterminated the Dorsets within a few hundred years; the Bantu expansion into central and southern Africa happened in a millennium or less, prior to which Africa was mostly the yellow (i.e., Khoisan) continent, not the black continent. Other examples include the Han expansion in China, the Numic expansion in northern Africa [correction: actually in the Great Basin region of North America], the Zulu expansion in southern Africa during the last few centuries, and the present day expansion of the Yanomamo in South America. There has also been a long history of invasions of Europe from the east. ‘In the starburst world people would have had plenty of contact with very different looking people’” (Macdonald 2001: p70).

[12] A commenter on an earlier version of this article, Daniel, suggested that our tendency to group individuals by race could represent a by-product of a postulated facial recognition faculty, which some evidence suggests is a domain-specific module or adaptation, localized in a specific area of the brain, the fusiform gyrus or fusiform face area, injury or damage to which sometimes results in an inability to recognize faces (prosopagnosia). Thus, he writes:

Any two human faces are about as similar in appearance as any two bricks. But humans are far more sensitive to differences in human faces than we are to differences in bricks. The evolutionary psychologist would infer that being very good at distinguishing faces mattered more to our ancestors’ survival than being very good at distinguishing bricks. Therefore we probably have a face-recognition module in our brains.

On this view, race differences, while they may be real, are not so obvious, or rather would not be so obvious were we not so highly attuned to recognizing minor differences in facial morphology in order to identify individuals.
This idea strikes me as very plausible. Certainly, when we think of racial differences in physical appearance, we tend to think of facial characteristics (e.g. differences in the shapes of noses, lips, eyes etc.).
However, this probably also reflects, in part, the fact that, at least in western societies, in ordinary day-to-day life, other parts of our bodies are usually hidden from view by clothing. Thus, at least according to physiologist John Baker in his excellent book, Race (which I have reviewed here), racial groups, especially the Khoisan of Southern Africa, also differ considerably in their external genitalia, but these differences would generally be hidden from view by clothing.
Baker also claims that races differ substantially in the shape of their skulls, claiming:

Even a little child, without any instruction whatever, could instantly separate the skulls of [Eskimos] from those of [Lapps]” (Race: p427).

Of course, facial differences may partly be a reflection of differences in skull shape, but I doubt an ability to distinguish skulls would reflect a byproduct of a facial recognition module.
Likewise, Pygmies differ from other Africans primarily, not in facial morphology, but in stature.
Further evidence that we tend to focus on differences in facial morphology only because we are especially attuned to such differences, whether by practice or innate biology, is provided by the finding that artificial intelligence systems are able to identify the race of a subject from medical images, such as x-rays, of their internal organs, even where humans, including trained medical specialists, are incapable of detecting any difference (Gichoya et al 2022).
This also, incidentally, contradicts the popular notion that race differences are largely restricted to a few superficial external characteristics, such as skin-colour, hair texture and facial morphology. In reality, there is no reason in principle to expect that race differences in internal bodily traits (e.g. brain-size) would be of any lesser magnitude than those in external traits. It is simply that the latter are more readily observable on a day-to-day basis, and hence more difficult to deny.

[13] If racism was not a European invention, racism may nevertheless have become particularly virulent and extreme among Europeans in the nineteenth century. One interesting argument is that it was, paradoxically, Europeans’ very commitment to such notions as universal rights and human equality that led them to develop and embrace an ideology of racial supremacism and inequality. This is because, whereas other peoples and civilizations simply took such institutions as slavery for granted, seeing them as entirely unproblematic, Europeans, due to their ostensible commitment to such lofty notions as universal rights and equality, felt a constant need to justify slavery to themselves. Thus, theories of racial supremacy were invented as just such a justification. As sociologist-turned-sociobiologist Pierre van den Berghe explains in his excellent The Ethnic Phenomenon (which I have reviewed here):

In hundreds of societies where slavery existed over several thousand years, slavery was taken for granted and required no apology… The virulent form of racism that developed in much of the European colonial and slave world was in significant part born[e] out of a desire to justify slavery. If it was immoral to enslave people, but at the same time it was vastly profitable to do so, then a simple solution to the dilemma presented itself: slavery became acceptable if slaves could somehow be defined as somewhat less than fully human” (The Ethnic Phenomenon: reviewed here: p115).

[14] Although the differences portrayed undoubtedly reflected real racial differences between populations, the stereotyped depictions suggest that they were also used as a means of identifying and distinguishing between different peoples and ethnicities, and hence may have been exaggerated as a kind of marker for race or nationality. Thus, classicist Mary Lefkowitz writes:

Wall paintings are not photographs, and to some extent the different colors may have been chosen as a means of marking nationality, like uniforms in a football game. The Egyptians depicted themselves with a russet color, Asiatics in a paler yellow. Southern peoples were darker, either chocolate brown or black” (History Lesson: A Race Odyssey: p39).

In reality, since North African Caucasoids and sub-Saharan Africans were in continual contact down the Nile Valley, this also almost certainly means that they interbred with one another, diluting and blurring the phenotypic differences between them. In short, if the Egyptians weren’t wholly Caucasoid, so also the Nubians weren’t entirely black.

[15] Other historical works referring to what seems to be the same stele translate the word that Sarich and Miele render as ‘Negro’ instead as ‘Nubian’, and this is probably the more accurate translation. The specific Egyptian word used seems to have been a variant of ‘nHsy’ or ‘Nehesy’, the precise meaning and connotations of which are apparently a matter of some controversy.
Incidentally, whether the Nubians are indeed accurately to be described as ‘Negro’ is perhaps debatable. Although certainly depicted by the Egyptians as dark in complexion, and also sometimes as having other Negroid features, as indeed they surely did in comparison to the Egyptians, they were also in long and continued contact with the Egyptians, with whom they surely interbred. It is therefore likely that they represented, like contemporary populations from the Horn of Africa, a clinal population, as did the Egyptians themselves, since this continual contact up and down the Nile would inevitably have resulted in some gene flow between their respective populations.
Whereas the vast Sahara Desert represented, as Sarich and Miele themselves discuss, a formidable barrier to population movement and gene flow and hence a clear boundary between what were once called the Negroid and Caucasoid races, population movement, and hence gene flow, up and down the Nile valley in Northeast Africa was much more fluid and continuous.

[16] Actually, the English word ‘caste’, deriving from the Portuguese ‘casta’, conflates two distinct but related concepts in India, namely, on the one hand, ‘Varna’ and, on the other, ‘Jāti’. Whereas the former term, ‘Varna’, refers to the four hierarchically organized classes (plus the ‘untouchables’ or ‘dalits’, who strictly are considered so degraded and inferior that they do not qualify as a caste and exist outside the caste system), and may even be of ancient origin among the proto-Indo-Europeans, the latter term, ‘Jāti’, refers to the many thousands of theoretically strictly endogamous occupational groups within the separate Varna.
As for Sarich and Miele’s claim that Varna are “as old as Indian history itself”, history is usually distinguished from prehistory by the invention of writing. By this criterion, Indian history might be argued to begin with the ancient Indus Valley Civilization. However, their script has yet to be deciphered, and it is not clear whether it qualifies as a fully developed writing system.
By this measure, the Indian caste system is not “as old as Indian history itself”, since the caste system is thought to have been imposed by Aryan invaders, who arrived in the subcontinent only after the Indus Valley Civilization had fallen into decline, and may indeed have been instrumental in bringing to an end the remnants of this civilization. However, arguably, at this time, India was not really ‘India’, since the word ‘India’ is of Sanskrit origin and therefore arrived only with the Aryan invaders themselves.

[17] There is also some suggestion that the vanarāḥ, who feature in the Ramayana and are usually depicted as monkey-like creatures, may originally have been conceived as a racist depiction of the relatively darker-complexioned and wider-nosed southern and indigenous Indians whom the Aryan invaders encountered in the course of their conquests, as may also be true of the demonic rākṣasāḥ and asurāḥ, including the demon king Ravana, who is described as ruling from his island fortress of Laṅkā, which is generally equated with the island of Sri Lanka, located about 35 miles off the coast of South India.
These ideas are, it almost goes without saying, extremely politically incorrect and unpopular in modern India, especially in South India, since South Indians today, despite different religious traditions, are not noticeably less devout Hindus than North Indians, and hence revere the Ramayana as a sacred text to a similar degree.

[18] One is tempted to reject this claim – namely that the use of the Sanskrit word for ‘colour’ to designate ‘caste’ has no connection to differences in skin colour as between the Indo-Aryan conquerors and the Dravidian populations whom they most likely subjugated – as mere politically correct apologetics. Indeed, despite its overwhelming support in linguistics, archaeology, genetics, and even in the histories provided in the ancient Hindu texts themselves, the very concept of an Indo-European conquest is very politically incorrect in modern India. The notion is perceived as redolent of the very worst excesses of both caste snobbery and the sort of notions of white racial superiority that were popular among Europeans during the colonial period. Moreover, as we have seen, to this day, castes differ not only genetically, and in a manner consistent with the Aryan invasion theory, but also in skin tone (Jazwal 1979; Mishra 2017).
On the other hand, however, some evidence suggests that the association of caste with colour actually predates the Indo-Aryan conquest of the Indian subcontinent and originates with the original proto-Indo-Europeans. Thus, in his recent book The Horse, the Wheel and Language, David W Anthony, discussing Georges Dumézil’s trifunctional hypothesis, reports that: 

“The most famous definition of the basic divisions within Indo-European society was the tripartite scheme of Georges Dumézil, who suggested there was a fundamental three-part division between the ritual specialist or priest, the warrior and the ordinary herder/cultivator. Colors may have been associated with these three roles: white for the priest, red for the warrior and black or blue for the herder/cultivator” (The Horse, the Wheel and Language: p92).

It is from this three-fold social hierarchy that the four-varna Indian caste system may have derived. Similarly, leading Indo-Europeanist JP Mallory observes that “both ancient India and Iran expressed the concept of caste with the word for colour” and that:

Indo-Iranian, Hittite, Celtic and Latin ritual all assign white to priests and red to the warrior. The third function would appear to have been marked by a darker colour such as black or blue” (In Search of the Indo-Europeans: p133).

This would all tend to suggest that the association of caste (or at least occupation) with colour long predates the Indo-Aryan conquest of the subcontinent and hence cannot be a reference to differences in skin colour as between the Aryan invaders and indigenous Dravidians.
On the other hand, however, it is not at all clear that the Indian caste system has anything to do with, let alone derives from, the three social groups that supposedly existed among the ancient proto-Indo-Europeans. On the contrary, the Indian caste system is usually considered as originating much later, after the Indo-European arrival in South Asia, and then only in embryonic form. Certainly, there is little evidence that the proto-Indo-European social structure was anything like as rigid as the later Indian caste system.
However, it is interesting to note that, even under the trifunctional hypothesis, a relatively lighter colour (white) is considered as having been assigned to the priestly group, and a darker colour to the lower-status agricultural workers, paralleling the probable differences in skin tone as between the Aryan conquerors and the indigenous Dravidians whom they encountered and likely subjugated.

[19] Neither Hartung nor his essay is mentioned in the rather cursory endnote accompanying this chapter (p265-6). This reflects a recurrent problem throughout the entire book. Thus, in the preceding chapter, ‘Race and History’, many passages appear in quotation marks, but it is not always entirely clear where the quotations are taken from, as the book’s endnotes are rudimentary, just giving a list of sources for each chapter as a whole, without always linking these sources to the specific passages quoted in the text. Unfortunately, this sort of thing is a recurrent problem in popular science books, and, in Sarich and Miele‘s defence, I suspect that it is the publishers and editors, rather than the authors, who are usually to blame.

[20] Thus, Hartung writes:

The [Jewish] Sages were quite explicit about their view that non-Jews were not to be considered fully human. Whether referring to ‘gentiles’, ‘idolaters’, or ‘heathens’, the biblical passage which reads ‘And ye my flock, the flock of my pasture, are men, and I am your God’ (Ezekiel 34:31; KJV) is augmented to read… ‘And ye my flock, the flock of my pastures, are men; only ye are designated ‘men’ (Baba Mezia 114b)” (Hartung 1995).

Similarly, Hartung quotes the Talmud as teaching:

In the case of heathens; are they not in the category of adam? – No, it is written: And ye my sheep, the sheep of my pasture, are adam (man). Ye are called adam but heathens are not called adam. [Footnote reads:]… The term adam does not denote ‘man’ but Israelite. The term adam is used to denote man made in the image of God and heathens by their idolatry and idolatrous conduct mar this divine image and forfeit the designation adam” (Kerithoth 6b)

However, as Sarich and Miele, and indeed Hartung, are at pains to emphasize, lest they otherwise be attacked as antisemitic, the tendency to view one’s own ethnic group as the only ‘true’ humans on earth is by no means exclusive to the ancient Hebrews, but rather is a recurrent view among many cultures across the world. As I have written previously:

Ethnocentrism is a pan-human universal. Thus, a tendency to prefer one’s own ethnic group over and above other ethnic groups is, ironically, one thing that all ethnic groups share in common

Thus, as Hartung himself writes in the very essay from which Sarich and Miele quote, citing the work of anthropologist Napoleon Chagnon:

The Yanomamo Indians, who inhabit the headwaters of the Amazon, traditionally believe that… that they are the only fully qualified people on earth. The word Yanomamo, in fact, means man, and non-Yanomamo are viewed as a form of degenerated Yanomamo.”

Similarly, Sarich and Miele themselves write of the San Bushmen of Southern Africa:

Bushmen sort all mammals into three mutually exclusive groups: ‘!c’ (the exclamation point represents the ‘clicking’ sound for which their language is well known) denotes edible animals such as warthogs and giraffes; ‘!ioma’ designates an inedible animal such as a jackal, a hyena, a black African, or a European white; the term ‘zhu’ is reserved for humans, that is, the Bushmen themselves” (p57).

[21] According to John Hartung’s analysis, Adam in the Genesis account of creation is best understood as, not the first human, but rather only the first Jew – hence the first true human (Hartung 1995). However, Christian Identity theology turns this logic on its head: Adam was not the first Jew, but rather the first white man.
As evidence, they cite the fact that the Hebrew word ‘Adam’ (אדם) seems to derive from the word for the colour red, which they, rather doubtfully, interpret as evidence for his light skin, and hence ability to blush. (A more likely interpretation for this etymology is that the colour was a reference to the red clay, or “dust of the ground”, from which man was, in the creation narrative of Genesis, originally fashioned: Genesis 2:7. Thus, the Hebrew word ‘Adam’, אדם, is also seemingly cognate with Adamah, אדמה, translated as ‘ground’ or ‘earth’, and the creation of Man from clay is a recurrent motif in Near Eastern creation narratives and mythology.)
Christian Identity is itself a development from British Israelism, which claims, rather implausibly, that indigenous Britons are themselves (among the) true Israelites, representing the surviving descendants of the ten lost tribes of Israel. Other races, then, are equated with the pre-Adamites, with Jews themselves, or at least Ashkenazim, classed as either Khazar-descended imposters, or sometimes more imaginatively equated with the so-called serpent seedline, descended from the biblical Cain, himself ostensibly the progeny of Eve when she (supposedly) mated with the Serpent in the Garden of Eden.
Christian identity theology is, as you may have noticed, completely bonkers – rather like, well… theology in general, and Christian theology in particular.

[22] The Old Testament passage in question, Genesis 9:22-25, recounts how Ham sees his drunken father Noah naked, and so, as a consequence, Ham’s own son Canaan is cursed by Noah. Since seeing one’s father naked hardly seems a sufficient transgression to justify the curse imposed, some biblical scholars have suggested that the original version was censored by puritanical biblical scribes offended by or attempting to cover up its original content, which, it has been suggested, may have included a castration scene or possibly a description of incestuous male rape (or even the rape of his own mother, which, it has been suggested, might explain the curse upon his son Canaan, who is, on this view, the product of this incestuous union).
In some interpretations, the curse of Ham was combined, or perhaps simply confused, with the mark of Cain, which was itself interpreted as a reference to black skin. In fact, these are entirely separate parts of the Old Testament with no obvious connection to one another, or indeed to black people.
The link between the curse of Ham and black people is, however, itself quite ancient, long predating the Atlantic slave trade, and seems to have originated in the Talmud, whose authorship, or at least compilation, is usually dated to the sixth century CE, historian Thomas Gossett reporting:

In the Talmud there are several contradictory legends concerning Ham—one that God forbade anyone to have sexual relations while on the Ark and Ham disobeyed this command. Another story is that Ham was cursed with blackness because he resented the fact that his father desired to have a fourth son. To prevent the birth of a rival heir, Ham is said to have castrated his father” (Race: The History of an Idea in America: p5).

This association may have originated because Cush, another of the sons of Ham (and an entirely different person to Canaan, his brother), was said to be the progenitor of, and to have given his name to, the Kingdom of Kush, located on the Nile valley, south of Ancient Egypt, whose inhabitants, the Kushites, were indeed known for their dark skin colour (though they were, by modern standards, probably best classified as mixed-race, or as a clinal or hybrid population, being in long-standing contact with the predominantly Caucasoid population of Egypt).
Alternatively, the association of Ham with black people may reflect the fact that the Hebrew word ‘ham’ (‘חָם’) has the meaning of ‘hot’, which was taken as a reference to the heat of Africa.
As you have probably gathered, none of this makes much sense. But, then again, neither does much Christian theology, or indeed much of the Old Testament (or indeed the New Testament) or theology in general, let alone most attempts to provide a moral justification for slavery consistent with Christian slave morality.
In fact, it is thought most likely that the curse of Ham was originally intended in reference to, not black people, but rather the Canaanites, since it was Canaan, not his brother Cush, against whom the curse was originally issued. This interpretation also makes much more sense in terms of the political situation in Palestine at the time this part of the Old Testament was likely authored, with the Canaanites featuring as recurrent villains and adversaries of the Israelites throughout much of the Old Testament. On this view, the so-called curse of Ham was indeed intended as a justification for oppression, but not of black people. Rather, it sought to justify the conquest of Canaan and subjugation of her people, not the later enslavement of blacks.

[23] Slavery had already been abolished throughout the British Empire even earlier, in 1833, long before Darwin published The Origin of Species, so the idea of Darwinism being used to justify slavery in the British Empire is a complete non-starter. (Darwin himself, for what it’s worth, was also opposed to slavery.)
Admittedly, slavery continued to be practised in other, non-English-speaking parts of the world, especially the non-western world, for some time thereafter. However, it is not likely that Darwin’s theory of evolution was a significant factor in the continued practice of slavery in, say, the Muslim world, since most of the Muslim world has never accepted the theory of evolution. In short, slavery was longest maintained in precisely those regions (Africa, the Middle East) where Darwinian theory, and indeed a modern scientific worldview, was least widely accepted.

[24] Montagu, who seems to have been something of a charlatan and is known to have lied in correspondence regarding his academic qualifications, had been born with the very Jewish-sounding, non-Anglo name of Israel Ehrenberg, but had adopted the hilariously pompous, faux-aristocratic name ‘Montague Francis Ashley-Montagu’ in early adulthood.

[25] Less persuasively, Sarich and Miele also suggest that the alleged lesbianism, or bisexuality, of both Margaret Mead and Ruth Benedict may similarly have influenced their similar culturally-determinist theories. This seems, to me, to be clutching at straws.
Neither Mead nor Benedict was Jewish, or in any way ethnically alien, yet arguably each had an even greater direct influence on American thinking about cultural differences than did Boas himself. Boas’s influence, in contrast, was largely indirect – namely, through his students, such as Montagu, Mead and Benedict. Therefore, Sarich and Miele have to point to some other respect in which Mead and Benedict were outsiders. Interestingly, Kevin Macdonald makes the same argument in Culture of Critique (endnote 61: reviewed here), and is similarly unpersuasive.
In fact, the actual evidence regarding Benedict and especially Mead’s sexuality is less than conclusive. It amounts to little more than salacious speculation. After all, in those days, if a person was homosexual, then, given prevailing attitudes and laws, they probably had every incentive to keep their private lives very much private.
Indeed, even today, speculation about people’s private lives tends to be unproductive, simply because people’s private lives tend, by their very nature, to be private.

[26] Curiously, though he is indeed widely credited as the father of American anthropology, Boas’s own influence on the field seems to have been largely indirect. His students, Mead, Benedict and Montagu, all seem to have gone on to become more famous than he was, at least among the general public, and each certainly published works that became more famous, and more widely cited, than anything authored by Boas himself.
Indeed, Boas’s own work seems to be relatively little known, and little cited, even by those whom we could regard as his contemporary disciples. His success was in training students/disciples and in academic politicking rather than in research.
Perhaps the only work of his that remains widely cited and known today is his work on cranial plasticity among American immigrants and their descendants, which has now been largely discredited.

[27] In the years that have passed since the publication of Sarich and Miele’s ‘Race: The Reality of Human Differences’, this conclusion, namely the out-of-Africa theory of human evolution, has been modified somewhat by the discovery that our ancestors, after leaving Africa, interbred with other species (or perhaps subspecies?) of hominid inhabiting Eurasia, such as Neanderthals and Denisovans, such that, today, all non-sub-Saharan-African populations have some Neanderthal DNA.

[28] I think another key criterion in any definition of ‘race’, but one which is omitted from most definitions, is whether the differences in “heritable features” breed true. In other words, whether two parents both bearing the trait in question will transmit it to their offspring. For example, among ethnically British people, since two parents, both with dark hair, may nevertheless produce a blond-haired offspring, hair colour is a trait which does not breed true. Whether a certain phenotype breeds true is, at least in part, a measure of the frequency of interbreeding with people of a different phenotype in previous generations. It may therefore change over time, with increasing levels of miscegenation and intermarriage. Therefore, this criterion may be implied by Sarich and Miele’s requirement that, in order to qualify as ‘races’, populations must be “separated geographically from other… populations”.
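To illustrate what ‘breeding true’ means here, the following is a minimal sketch, assuming a deliberately oversimplified, hypothetical single-gene model of hair colour (real hair colour is polygenic), with a dominant dark allele ‘B’ and a recessive blond allele ‘b’:

```python
# A toy single-locus model of 'breeding true' (hypothetical: real hair
# colour is polygenic). 'B' is a dominant dark allele; 'b' a recessive
# blond allele, so only 'bb' individuals are blond.
from itertools import product

def offspring(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of two parents."""
    return [''.join(sorted(pair)) for pair in product(parent1, parent2)]

# Dark hair does not breed true: two dark-haired carriers (Bb x Bb)
# produce a blond ('bb') child a quarter of the time.
print(offspring('Bb', 'Bb'))  # ['BB', 'Bb', 'Bb', 'bb']

# Blond hair does breed true: bb x bb yields only blond offspring.
print(offspring('bb', 'bb'))  # ['bb', 'bb', 'bb', 'bb']
```

On this toy model, blond hair breeds true because it requires two copies of the recessive allele, whereas dark hair does not, since two carrier parents can produce a blond child.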

[29] Actually, the definition of ‘species’ is rather more complex – and rather less precise: see the discussion in my review of John Baker’s Race, which addresses the matter in this section.

[30] Using colour differences as an analogy for race differences is also problematic, and potentially confusing, for another reason – namely, colour is already often conflated with race. Thus, races are often referred to by their (ostensible) colours (e.g. sub-Saharan Africans as ‘black’, Europeans as ‘white’, East Asians as ‘yellow’, Hispanics and Middle-Eastern populations as ‘brown’, and Native Americans as ‘red’) and ‘colour’ is sometimes even used as a synonym (or perhaps a euphemism) for race. Perhaps as a consequence, it is often asserted, falsely, that races differ only in skin colour. Using the electromagnetic spectrum as an analogy for race differences is likely to only exacerbate this already considerable confusion.

[31] Interestingly, however, different languages in widely differing cultures tend to put the boundaries between their different colour terms in roughly the same place, suggesting an innate disposition to this effect. Attempts to teach alternative colour terms, which divide the electromagnetic spectrum in different places, to those peoples whose languages lack certain colour terms, have shown that humans learn such classifications less readily than the familiar ones recognized in other languages. Also, although different cultures and languages have different numbers of colour terms, the colours recognized follow a distinct order, beginning with just ‘light’ and ‘dark’, followed by ‘red’ (see Berlin & Kay, Basic Color Terms: Their Universality and Evolution).

[32] As I have commented previously, perhaps a better analogy to illustrate the clinal nature of race differences is, not colour, but rather social class – if only because it is certain to cause cognitive dissonance and doublethink among leftist sociologists. As pioneering biosocial criminologist Anthony Walsh demands:

Is social class… a useless concept because of its cline-like tendency to merge smoothly from case to case across the distribution, or because its discrete categories are determined by researchers according to their research purposes and are definitely not ‘pure’” (Race and Crime: A Biosocial Analysis: p6).

But the same sociologists and leftist social scientists who, though typically very ignorant of biology, insist race is a ‘social construct’ with no basis in biology, nevertheless continue to employ the concept of social class, or socioeconomic status, as if it were entirely unproblematic.

[33] In addition to the mountains that mark the Tibetan-Indian border, the vast but sparsely populated tundra and steppe of Siberia also provide a part of the boundary between what were formerly called the Caucasoid and Mongoloid races. As Steve Sailer has observed, one can get a good idea of the boundaries between races by looking at maps of population density. Those regions that are sparsely populated today (e.g. mountain ranges, deserts, tundra and, of course, oceans) were also generally incapable of supporting large population densities in ancient times, and hence represented barriers to gene flow and racial admixture.

[34] Indeed, even some race realists might agree that terms like ‘Italian’ are indeed largely social constructions and not biologically meaningful, because Italians are not obviously physically distinguishable from the populations in neighbouring countries on the basis of biologically inherited traits, such as skin colour, nose shape or hair texture – though they do surely differ in gene frequencies, and, at the aggregate statistical level, surely in phenotypic traits too. Thus, John R Baker in his excellent ‘Race’ (reviewed here) warns against what he terms “political taxonomy”, which equates the international borders between states with meaningful divisions between racial groups (Race: p119). Thus, Baker declares:

In the study of race, no attention should be paid to the political subdivisions of the surface of the earth” (Race: p111).

Baker even offers a reductio ad absurdum of this approach, writing:

No one studying blood-groups in Australia ‘lumps’ the aborigines… with persons of European origin; clearly one would only confuse the results by so doing” (Race: p121).

Yet, actually, the international borders between states do indeed often coincide with genetic differences between populations. This is because the same geographic obstacles (e.g. mountain ranges, rivers and oceans) that are relatively impassable and hence have long represented barriers to gene flow also represent both:

  1. Language borders, and hence self-identified ‘nations’; and
  2. Militarily defensible borders.

Indeed, Italians, one of the groups cited by Diamond, and discussed by Sarich and Miele, provide a good illustration of this, because Italy has obvious natural borders that are defensible against invaders, that represent language borders, and that long represented a barrier to gene flow: it is a peninsula, surrounded on three sides by the Mediterranean Sea, and on the fourth, its only land border, by the Alps, which represent the border between Italian-speakers and speakers of French and German.

[35] Likewise, in the example cited by Sarich and Miele themselves, the absence of the sickle-cell gene was, as Sarich and Miele observe, the “ancestral human condition” shared by all early humans before some groups subsequently went on to evolve the sickle-cell gene. Therefore, that any two groups do not possess the sickle-cell gene does not show that they are any more related to one another than to any other human group, including those that have evolved sickle-cell, since all early humans initially lacked this gene.
Moreover, Diamond himself refers not to the sickle-cell gene specifically, but rather to “antimalarial genes” in general, and there are several different genetic variants that likely evolved because they provide some degree of resistance to malaria, for example the alleles causing the conditions thalassemia and Glucose-6-Phosphate Dehydrogenase (G6PD) deficiency, and certain haemoglobin variants. These quite different adaptations evolved independently in different populations where malaria was common, and indeed have different levels of prevalence in different populations to this day.

[36] Writing in the early seventies, long before the sequencing of the human genome, Lewontin actually relied, not on the direct measurement of genetic differences between, and within, human populations, but rather on indirect markers for genetic differences, such as blood group data. However, his findings have been broadly borne out by more recent research.
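To give a concrete sense of the kind of apportionment calculation involved, the following is a minimal sketch, using invented allele frequencies for a single, hypothetical two-allele marker (real blood-group loci may carry more alleles); as in Lewontin’s actual findings, the within-population share of diversity dwarfs the between-population share:

```python
# A minimal sketch of apportioning genetic diversity at one marker into
# within- and between-population components. The allele frequencies below
# are invented, purely for illustration.

# Frequency of one allele of a biallelic marker in each of three
# hypothetical populations (assumed to be of equal size).
freqs = [0.6, 0.7, 0.8]

def heterozygosity(p):
    """Expected heterozygosity at a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

# Mean within-population diversity (Hs) versus total diversity (Ht),
# the latter computed from the pooled allele frequency.
hs = sum(heterozygosity(p) for p in freqs) / len(freqs)
ht = heterozygosity(sum(freqs) / len(freqs))

print(f"within-population share:  {hs / ht:.1%}")      # ~96.8%
print(f"between-population share: {1 - hs / ht:.1%}")  # ~3.2%
```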

[37] However, in fact, similar points had been made soon after Lewontin’s original paper had been published (Mitton 1977; 1978).

[38] Actually, while many people profess to be surprised that, depending on precisely how measurements are made, we share about 98% of our DNA with chimpanzees, this has always struck me as, if anything, a surprisingly low figure. After all, if one takes into account all the possible ways an organism could be built, including those ways in which it could be built but never would be built, simply because, if it were, the organism in question would never survive and reproduce and hence evolve in the first place, then we are surely remarkably similar in morphology.
Just looking at our external, visible physiology, we and chimpanzees (and many other organisms besides) share four limbs, each with five digits, two eyes, two nostrils, a mouth, all similarly positioned in relation to one another, to mention just a few of the more obvious similarities. Our internal anatomy is also very similar, as are many aspects of our basic cellular structure.

[39] This is analogous to the so-called other-race effect in face recognition, whereby people prove much less proficient at distinguishing individuals of races other than their own than they are at distinguishing members of their own race, especially if they have had little previous contact with members of other races. This effect, of course, is the basis for the familiar stereotype whereby it is said ‘they [i.e. members of another race] all look alike to me’.

[40] If any skeptical readers doubt this claim, it might be worth observing that Ostrander is not only a leading researcher in canine genetics, but also seemingly has no especial ideological or politically-correct axe to grind in relation to this topic. Although she is obviously alluding to Lewontin’s famous finding in the passage quoted, she does not mention race at all, referring only to variation among “human populations”, not human races. Indeed, human races are not mentioned at all in the article. Rather, it is exclusively concerned with genetic differences among dog breeds and their relationship to morphological differences (Ostrander 2007).

[41] In addition to problems with defining and measuring the intelligence of different dogs, and dog breeds, there are also, as already discussed above, difficulties in defining and identifying different dog breeds, problems that, despite the greater morphological and genetic differentiation among dog breeds as compared to human races, are probably greater than for human races, since, except for a few pedigreed purebreds, most dogs are mixed-breed ‘mongrels’. These problems, in turn, create problems when it comes to measuring the intelligence of different breeds, since one can hardly assess the intelligence of a given breed without first defining and identifying which dogs qualify as members of that breed.

[42] In fact, however, whereas the research reported upon in the mass media does indeed seem to have relied exclusively on the reported ability of different breeds to learn and obey new commands with minimal instruction, Stanley Coren himself, in the original work upon which this ranking of dog breeds by intelligence was based, namely his book, The Intelligence of Dogs, seems to have employed a broader, more nuanced and sophisticated understanding of canine intelligence. Thus, Coren is reported as distinguishing three types of canine intelligence, namely:

  1. ‘Instinctive intelligence’, namely the dog’s ability to perform the task it was bred for (e.g. herding in the case of a herding dog);
  2. ‘Adaptive intelligence’, namely the ability and speed with which a dog can learn new skills, and solve novel problems, for itself; and
  3. ‘Obedience intelligence’, namely the ability and speed with which a dog can be trained and learn to follow commands from a human master.

[43] There is no obvious reason to believe that domestic animals are, on average, any more intelligent than their wild ancestors. On the contrary, the process of domestication is actually generally associated with a reduction in brain volume, itself a correlate of intelligence, perhaps as part of the process of neotenization that tends to accompany domestication.
It is, of course, true that domestic animals, and domestic dogs in particular, evince impressive abilities to communicate with humans (e.g. understanding cues such as pointing, and even intonation of voice) (see The Genius of Dogs). However, this reflects only a specific form of social intelligence rather than general intelligence.
In contrast, in respect of the forms of intelligence required among wild animals, domestic animals would surely fare much worse than their wild ancestors. Indeed, many domestic animals have been so modified by human selection that they are quite incapable of surviving in the wild without humans.

[44] Actually, criminality, or at least criminal convictions, is indeed inversely correlated with intelligence, with incarcerated offenders having average IQs of around 90 – i.e. considerably below the average within the population at large, but not so low in ability as to qualify as having a general learning disability. In other words, incarcerated offenders tend to be smart enough to have the wherewithal to commit a criminal act in the first place, but not smart enough to realize it probably isn’t a good idea in the long-term.
However, this data mostly comes from incarcerated offenders, who are usually given a battery of psychological tests on admission into the prison system, including a test of cognitive ability. It is possible, indeed perhaps probable, that those criminals who evade detection, and hence never come to the attention of the authorities, have relatively higher IQs, since it may be their higher intelligence that enables them to evade detection.
At any rate, the association between crime and low IQ is not generally thought to result from a failure to understand the nature of the law in the first place. Rather, it probably reflects that intelligent people are more likely to recognise that, in the long-term, regularly committing serious crimes is probably a bad idea, because, sooner or later, you are likely to be caught, with attendant punishment and damage to your reputation and future earning capacity.
Indeed, the association between IQ and crime might partially explain the high crime rates observed among African-Americans, since the average IQ of incarcerated offenders is similar to that found among African Americans as a whole.

[45] One is reminded here of Eysenck’s description of the basenji breed as “natural psychopaths” quoted above (The IQ Argument: p170).

[46] For example, differences in skin colour reflect, at least in part, differences in exposure to the sun at different latitudes; while differences in bodily size and stature, and relative bodily proportions, also seem to reflect adaptation to different climates, as do differences in nose shape. Just as lighter complexion facilitates the synthesis of vitamin D in latitudes where exposure to the sun is at a minimum, and dark skin protects from the potentially harmful effects of excessive exposure to the sun’s rays in tropical climates, so a long, thin nose is thought to allow the warming and moisturizing of air before it enters the lungs in cold and dry climates. Similarly, body-size and proportions affect the proportion of the body that is directly exposed to the elements (i.e. the ratio of surface-area to volume), a potentially critical factor in temperature regulation, with tall, thin bodies favoured in warm climates, and short, stocky frames, with flat faces and shorter arms and legs, favoured in colder regions.
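The arithmetic behind this surface-area-to-volume logic is easily illustrated. The following is a crude sketch, idealizing bodies as simple spheres (real bodies are not spherical, of course, but the principle is the same):

```python
# A crude numerical illustration of the surface-area-to-volume logic,
# idealizing bodies as spheres. For a sphere of radius r, surface area is
# 4*pi*r^2 and volume is (4/3)*pi*r^3, so the ratio reduces to 3/r:
# larger bodies expose proportionally less surface per unit of volume.

def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere of the given radius (metres)."""
    return 3 / radius_m

for r in (0.2, 0.4):
    print(f"radius {r} m -> SA/V = {surface_to_volume(r):.1f}")
# radius 0.2 m -> SA/V = 15.0  (small frame: sheds heat quickly)
# radius 0.4 m -> SA/V = 7.5   (bulky frame: retains heat better)
```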

[47] For example, as explained in the preceding endnote, the Bergmann and Allen rules neatly explain many observed differences in bodily stature and body form between different races as adaptations to climate, while Thomson’s nose rule similarly explains differences in nose shape. Likewise, while researchers such as Peter Frost and Jared Diamond have argued that differences in skin tone cannot entirely be accounted for by climatic factors, nevertheless such factors have clearly played some role in the evolution of differences in skin tone.
This, of course, explains why, although the correlation is far from perfect, there is indeed an association between latitude and skin colour. It also explains why Australia, with a much warmer climate, and situated at a lower latitude, than the British Isles, yet until recently populated primarily by people of predominantly Anglo-Celtic ancestry, has the highest levels of melanoma in the world; and why, conversely, dark-skinned Afro-Caribbeans and South Asians resident in the UK experience higher rates of rickets, due to receiving insufficient sunlight for vitamin D synthesis.

[48] Alternatively, Carleton Coon attributed the large protruding buttocks of many Khoisan women to maintaining a storehouse of nutrients that can be drawn upon to meet the caloric demands of pregnancy (Racial Adaptations: p105). This is probably why women of all races have naturally greater fat deposits than do men. However, in the arid desert environment to which San people are today largely confined, namely the Kalahari Desert, where food is often hard to come by, maintaining sufficient calories to successfully gestate an offspring may be especially challenging, which might be posited as the ultimate evolutionary factor that led to the evolution of steatopygia among female Khoisan.
Of course, these two competing hypotheses for the evolution of the large buttocks of Khoisan women – namely, on the one hand, sexual selection or mate choice and, on the other, the caloric demands of pregnancy in a desert environment – are not mutually exclusive. On the contrary, if large fat reserves are indeed necessary to successfully gestate an offspring, then it would pay for males to be maximally attracted to females with sufficiently large fat reserves to do just this, so as to maximize their own reproductive success.

[49] This argument, I might note, does not strike me as entirely convincing. After all, it could be argued that strong body odour would actually be more noticeable in hot climates, simply because, in hot climates, people tend to sweat more, and therefore that dry earwax, which is associated with reduced body odour, should actually be more prevalent among people whose ancestors evolved in hot climates, the precise opposite of what is found.
On the other hand, Edward Dutton, discussing population differences in earwax type, suggests that “pungent ear wax (and scent in general) is a means of sexual advertisement” (J Philippe Rushton: A Life History Perspective: p86). This would suggest that a relatively stronger body odour (and hence the wet earwax with which strong body odour is associated) would have been positively selected for (rather than against) by mate choice and sexual selection, the precise opposite of what Wade assumes.

[50] In their defence, I suspect Sarich and Miele are guilty, not so much of biological ignorance, as of sloppy writing. After all, Vincent Sarich was an eminent and pioneering biological anthropologist, geneticist and biochemist, hardly likely to be guilty of such an error. What I suspect they really meant to say was, not that there was no evidence of sexual selection operating in humans, but rather that there was no conclusive evidence that sexual selection was responsible for racial differences among humans, as they also conclude later in their book (p236).

[51] Of all racial groups in the USA, only Pacific Islanders display even higher rates of obesity than those observed among black women, though here it is both sexes who are prone to obesity.

[52] Just to clarify and prevent any confusion, higher proportions of white men than white women are indeed overweight or obese, in both the USA and UK. However, this does not mean that men are fatter than women. Women of all races, including white people, have higher body-fat levels than men, whereas men have higher levels of musculature.
Obesity is measured by body mass index (BMI), which is calculated by reference to a person’s weight and height, not their body fat percentage. Thus, some professional bodybuilders, for example, have quite high BMIs, and hence qualify as overweight by this criterion, despite having very low body fat levels. This is one limitation of using BMI to assess whether a person is overweight.
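For concreteness, the following minimal sketch computes BMI from its standard formula (weight in kilograms divided by the square of height in metres), using invented figures for a hypothetical heavily muscled but lean bodybuilder:

```python
# BMI is weight in kilograms divided by the square of height in metres.
# The figures below are invented, for a hypothetical bodybuilder.

def bmi(weight_kg, height_m):
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

print(round(bmi(106, 1.80), 1))  # 32.7: 'obese' by the usual cutoff of 30,
                                 # despite a very low body-fat percentage
```
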
Indeed, the criteria for qualifying as ‘obese’ or ‘overweight’ are different for men and women, partly to take account of this natural difference in body-fat percentages, as well as other natural sex differences in body size, shape and composition.

[53] Women of all races have, on average, higher levels of body fat than do men of the same race. This, it is suggested, is to provide the necessary storehouse of nutrients to successfully gestate a foetus for nine months. Possibly men may be attracted to women with fat deposits because this shows that they have sufficient excess energy stored so as to successfully carry a pregnancy to term and nurse the resulting offspring. This may also explain the evolution of breasts among human females, since other mammals develop breasts only during pregnancy and, save during pregnancy and lactation, human breasts are, unlike those of other mammals, composed predominantly of fat, not milk.

[54] Interestingly, in a further case agreeing with what Steve Sailer calls ‘Rushton’s Rule of Three’, whereby blacks and Asians respectively cluster at opposite ends of a racial spectrum for various traits, there is some evidence that, if black males prefer a somewhat heavier body-build in prospective mates than do white males, then Asian males prefer a somewhat thinner body-build (e.g. Swami et al 2006).

[55] Whereas most black Africans have long arms and legs, African Pygmies may represent an exception. In addition, of course, to a much smaller body-size overall, one undergraduate textbook in biological anthropology reports that they “have long torsos but relatively small appendages” relative to their overall body-size (Human Variation (Fifth Edition): p185). However, leading mid-twentieth-century American physical anthropologist Carleton Coon reports that “they have relatively short legs, particularly short in the thigh, and long arms, particularly long in the forearm” (The Living Races of Man: p106).

[56] Probably this is to be attributed to superior health, nutrition and living-standards in North America, and even in the Caribbean, as compared to sub-Saharan Africa. Better training facilities, which only richer countries (and people) have sufficient resources to invest in, are also likely a factor. However, one interesting paper by William Aiken proposes that high rates of mortality during the ‘Middle Passage’ (i.e. the transport of slaves across the Atlantic) during the slave trade selected for increased levels of androgens (e.g. testosterone) among the survivors, which he suggests may explain both the superior athletic performance and the much higher rates of prostate cancer among both African-Americans and Afro-Caribbeans as compared to whites (Aiken 2011). Of course, high androgen levels might also plausibly explain the high rates of violent crime among African-American and Afro-Caribbean populations.

[57] Of course, the degree of relationship, if any, between athletic and sporting ability and intellectual ability probably depends on the sport being performed. Most obviously, if chess is to be classified as a ‘sport’, then one would obviously expect chess ability to have a greater correlation with intelligence than, say, arm-wrestling. Intelligence is likely of particular importance in sports where strategy and tactics assume great importance.
Relatedly, in team sports, there are likely differences in the importance of intelligence among players playing in different positions. For example, in the sport discussed by Sarich and Miele themselves, namely American football, it is suggested that the position of quarterback requires greater intelligence than other positions, because the quarterback is responsible for making tactical decisions on the field. This, it is controversially suggested, is why African-Americans, though overrepresented in the NFL as a whole, are relatively less likely to play as quarterbacks.
Similarly, being a successful coach or manager also likely requires greater intelligence.
Interestingly, with regard to the question of sports and IQ, Muhammad Ali, though regarded as one of the greatest ever heavyweights, scored as low as 78 on an IQ test (i.e. in the low normal range) when tested in an army entrance exam, and was initially turned down for military service in Vietnam as a consequence, though it is sometimes claimed this was because of dyslexia rather than low general intelligence, meaning that the written test he was given underestimated his true intelligence. Another celebrated heavyweight, Mike Tyson, is also said to have scored in the low normal range when tested as a child.
Another reason that IQ might be predictive of ability in some sports is that IQ is known to correlate with reaction times in elementary cognitive tasks. This is plausibly related to the need to react quickly and accurately to, say, the speed and trajectory of a ball in order to strike or catch it, as is required in many sports. I have discussed the paradox of African-Americans being overrepresented in elite sports, yet having slower average reaction times, here.

[58] People diagnosed with high-functioning autism, and Asperger’s syndrome in particular, do indeed have a higher average IQ than the population at large. However, this is only because among the very criteria for diagnosing these conditions is that the person in question must have an IQ not so low as to indicate a mental disability; otherwise, they would not qualify as ‘high-functioning’. This truncation removes those with especially low IQs, and hence leaves the remaining sample with an above-average IQ.
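The statistics of this selection effect can be made explicit; the following is a sketch in LaTeX using the truncated normal distribution, with figures that are purely illustrative. If IQ in the relevant population is distributed as X ~ N(μ, σ²), then excluding everyone below a cutoff c yields the mean

    \mathbb{E}[X \mid X > c] = \mu + \sigma\,\frac{\phi(\alpha)}{1-\Phi(\alpha)}, \qquad \alpha = \frac{c-\mu}{\sigma},

where φ and Φ are the standard normal density and distribution functions. With illustrative values μ = 85, σ = 15 and a diagnostic cutoff of c = 70, the mean of the remaining sample rises to roughly 85 + 15 × 0.29 ≈ 89: cutting off the low tail necessarily pulls the average of those who remain upward.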

[59] Rushton’s implication is that this advantage, namely narrower hips, applies to both sexes, and certainly blacks seem to predominate among medal winners in track events in international athletics at least as much in men’s as in women’s events. Presumably, then, although it is obviously only women who give birth, and hence only women who were required to evolve wider hips in order to birth larger-brained infants, male hip width also increased among larger-brained races as a byproduct of this selection on females.
If black women do indeed have narrower hips than white women, and black babies smaller brains, then one might predict that black women would have difficulty birthing offspring fathered by white males, as the mixed-race infants would have brains somewhat larger than those of infants of wholly Negroid ancestry. Thus, Russian racialist Vladimir Avdeyev asserts:

“The form of the skull of a child is directly connected with the characteristics of the structure of the mother’s pelvis—they should correspond to each other in the goal of eliminating death in childbirth. The mixing of the races unavoidably leads to this, because the structure of the pelvis of a mother of a different race does not correspond to the shape of the head of [the] mixed infant; that leads to complications during childbirth” (Raciology: p157).

More specifically, Avdeyev claims:

“American Indian women… often die in childbirth from pregnancies with a child of mixed blood from a white father, whereas pure-blooded children within them are easily born. Many Indian women know well the dangers [associated with] a pregnancy from a white man, and therefore, they prefer a timely elimination of the consequence of cross-breeding by means of fetal expulsion, in avoidance of it” (Raciology: p157-8).

However, I find little evidence to support this claim from delivery room data. Rather, it seems to be racial differences in overall body size that are associated with birth complications.
Thus, East Asian women have relatively greater difficulty birthing offspring fathered by white males (specifically, a greater rate of caesarean deliveries) as compared to those fathered by Asian males (Nystrom et al 2008). Yet, according to Rushton himself, East Asians have brain sizes as large as or larger than those of Europeans.
However, East Asians also have a substantially smaller average body-size than Europeans. It seems, then, that Asian women, with their own smaller frames, simply have greater difficulty birthing relatively larger-framed, half-white offspring.
Avdeyev also claims that, save in the case of mixed-race offspring fathered by larger-brained races, birth is a generally less physically traumatic experience among women from racial groups with smaller average brain-size, just as it is among nonhuman species, who also, of course, have smaller brains than humans. Thus, he writes:

“Women of lower races endure births very easily, sometimes even without any pain, and only in highly rare cases do they die from childbirth” (Raciology: p157).

Again, delivery room data provides little support for his claim. In fact, data from the USA actually seems to indicate a somewhat higher rate of caesarean delivery among African-American women as compared to American whites (Braveman et al 1995; Edmonds et al 2013; Getahun et al 2009; Valdes 2020; Okwandu et al 2021).

[60] Another disadvantage that may result from higher levels of testosterone in black men is the much higher incidence of prostate cancer observed among black men resident in the West, since prostate cancer seems to be associated with testosterone levels. In addition, the higher apparent susceptibility of blacks to prostate cancer, and perhaps also to violent crime and certain forms of athletic ability, may reflect not just levels of testosterone, but also how sensitive different races are to androgens such as testosterone, which, in turn, reflects the level and type of their androgen receptors (see Nelson and Witte 2002).

[61] In writing about this politically incorrect and controversial topic, the authors are guilty of some rather sloppy errors, which, given the importance of the subject to their book and its political sensitivity, is difficult to excuse. For example, they claim that:

“Asians have a slightly higher average IQ than do whites” (p196).

Actually, however, this advantage is restricted to East Asians. It doesn’t extend even to Southeast Asians (e.g. Thais, Filipinos, Indonesians), who are also classed as ‘Mongoloid’ in traditional racial taxonomies, let alone to South Asians and West Asians, who, though usually classed as Caucasoid in early twentieth century racial taxonomies, also qualify as Asian in the sense that they trace their ancestry to the Asian continent, and are considered ‘Asian’ in British-English usage, if not American-English.

[62] Issues like this are not really a problem in assessing the intelligence of different human populations. It is true that some groups perform relatively better on certain types of test item. For example, East Asians score relatively higher in spatio-visual intelligence than in verbal ability, whereas Ashkenazi Jews show the opposite pattern. Meanwhile, African-Americans score relatively higher in rote memory than in general intelligence, and Australian Aboriginals relatively higher in spatial memory. However, this is not a major factor when assessing the relative intelligence of different human races, because most differences in intelligence between humans, whether between individuals or between groups, are captured by the g factor.
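For readers unfamiliar with how g is extracted in practice, the following is a minimal Python sketch of the standard approach: estimate g as the first factor (here approximated by the first principal component) of the correlation matrix among a battery of subtests. The subtest names and correlation figures below are entirely hypothetical, invented purely for illustration.

    import numpy as np

    # Hypothetical correlations among four cognitive subtests
    # (vocabulary, matrix reasoning, digit span, spatial rotation).
    # All figures are invented for illustration only.
    R = np.array([
        [1.00, 0.55, 0.45, 0.40],
        [0.55, 1.00, 0.50, 0.55],
        [0.45, 0.50, 1.00, 0.45],
        [0.40, 0.55, 0.45, 1.00],
    ])

    # The first principal component of R gives a simple estimate of g:
    # its eigenvector holds each subtest's loading on the general factor.
    eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigenvalues in ascending order
    g_loadings = eigenvectors[:, -1]                # eigenvector of the largest eigenvalue
    g_loadings *= np.sign(g_loadings.sum())         # fix sign so loadings are positive

    # Share of total test variance captured by this first, general factor.
    variance_explained = eigenvalues[-1] / eigenvalues.sum()

    print("g loadings:", np.round(g_loadings, 2))
    print("variance explained:", round(float(variance_explained), 2))

In real test batteries, this first factor typically accounts for something like 40 to 50 per cent of the total variance, and every subtest loads positively on it, the phenomenon known as the ‘positive manifold’.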

[63] Actually, whether the difference in brain size between the sexes disappears after controlling for differences in body-size depends on how one controls for body-size. Simply dividing brain-size by body-size (or body-size by brain-size) makes the difference virtually disappear; in fact, it gives a slight advantage in relative brain size to women.
However, Ankney convincingly argues that this is an inappropriate way to control for differences in body-size between the sexes because, among both males and females, as individuals increase in body-size, the brain comes to take up a relatively smaller portion of overall body-size; yet, despite this, individuals of greater stature have, on average, somewhat higher IQs. Ankney therefore proposes that the correct way to control for body-size is to compare the average brain size of men and women of the same body-size. Doing so reveals that men have larger brains even after controlling for body-size in this way (Ankney 1992).
However, zoologist Dolph Schluter points out that, if one does the opposite – i.e. instead of comparing the brain-sizes of men and women of equivalent body-size, compares the body-sizes of men and women with the same brain-size – then one finds a difference in the opposite direction. In other words, among men and women with the same brain-size as one another, women tend to have smaller bodies (Schluter 1992).
Thus, Schluter reports:

“White men are more than 10 cm taller on average than white women with the same brain weight” (Schluter 1992).

This paradoxical finding is, he argues, a consequence of the statistical effect known as regression to the mean, whereby extreme outliers tend to regress towards the mean in correlated measurements, and the more extreme the outlier, the greater the degree of regression. Thus, an extremely tall woman, as tall as the average man, will not usually have a brain as exceptional in size as her stature; whereas a very short man, as short as the average woman, will not usually have a brain as unusually small as his body.
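In standardized form, the arithmetic behind this effect is straightforward; the correlation value below is purely illustrative. For z-scored variables, the best linear prediction of brain size from body size is

    \hat{z}_{\text{brain}} = r\, z_{\text{body}},

where r < 1 is the within-sex correlation between body size and brain size. A woman two female standard deviations above the mean in height (z = 2) is thus predicted to have a brain only 2r standard deviations above the female mean, e.g. just 0.8 SD if r = 0.4. The same attenuation applies in reverse when predicting body size from brain size, which is why conditioning on body size (as Ankney does) and conditioning on brain size (as Schluter does) yield apparently opposite results.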
Ultimately, I am led to agree with infamous fraud, charlatan and bully Stephen Jay Gould that, given the differences in both body-shape and composition as between males and females (e.g. men have much greater muscle mass; women greater levels of fat), it is simply impossible to know how to adequately control for body-size as between the sexes.
Thus, Gould writes:

“[Even] men and women of the same height do not share the same body build. Weight is even worse than height, because most of its variation reflects nutrition rather than intrinsic size—and fat vs. skinny exerts little influence upon the brain” (The Mismeasure of Man: p106).

The only conclusion that can be reached definitively is that, after controlling for body-size, any remaining differences in brain-size between the sexes are small in magnitude.

[64] Another less widely supported, but similarly politically correct, explanation for the correlation between latitude and brain size is that these differences reflect a visual adaptation to differing levels of ambient light in different regions of the globe. On this view, populations further from the equator, where there is less ambient light, evolved both larger eyes, so as to see better, and larger brains, to better process this visual input (Pearce & Dunbar 2011).

[65] Lynn himself has altered his figure slightly in accordance with the availability of new datasets. In the original 2006 edition of his book, Race Differences in Intelligence, he gives a slightly lower figure of 67, before revising this up to 71 in the 2015 edition of the same book, while, in The Intelligence of Nations, published in 2019, Lynn and his co-author report the average IQ of sub-Saharan Africans as 69.

[66] Thus, other researchers have, predictably, considered Lynn’s estimates altogether too low and provided what they claim are more realistic figures. The disagreement focuses primarily on which samples are to be regarded as representative, with Lynn disregarding studies based on samples that he regards as elite and unrepresentative.
For example, Wicherts et al, in their systematic review of the available literature, give an average IQ of 82 for sub-Saharan Africans as a whole (Wicherts et al 2010). However, even this much higher figure is very low compared to the average of around 100 in Europe and North America, and also considerably lower than the average IQ of blacks in the US, which is around 85.
This difference has been attributed both to environmental factors, and to the fact that African-Americans, and Afro-Caribbeans, have substantial white European admixture (though this latter explanation fails to explain why African-Americans are outcompeted academically and economically by recent unmixed immigrants from Africa).
At any rate, even assuming that the differences are purely environmental in origin, an average IQ of 82 for sub-Saharan Africans, as reported by Wicherts et al (2010), seems oddly high when compared to the average IQ of 85 reported for African-Americans and 100 for whites, since the difference in environmental conditions between blacks and whites in America is surely far less substantial than that between African-Americans and black Africans resident in sub-Saharan Africa.
As Noah Carl writes:

“It really doesn’t make sense for them to argue that the average IQ in Sub-Saharan Africa is as high as 80. We already have abundant evidence that black Americans score about 85 on IQ tests, as compared to 100 for whites. If the average IQ in Sub-Saharan Africa is 80, this would mean the massive difference in environment between Sub-Saharan Africa and the US reduces IQ by only 5 points, yet the comparatively small difference in environment between black and white Americans somehow reduces it by 15 points” (Carl 2025).

[67] In diagnosing mental disability, other factors besides raw IQ will also be looked at, such as adaptive behaviour (i.e. the ability to perform simple day-to-day activities, such as basic hygiene). Thus, Mackintosh reports:

“In practice, for a long time now an IQ score alone has not been a sufficient criterion [for the diagnosis of mental disability]… Many years ago the American Association on Mental Deficiency defined mental retardation as ‘significantly sub-average general intellectual functioning existing concurrently with deficits in adaptive behavior’” (IQ and Human Intelligence: p356).

[68] Of course, merely interacting with someone is not an especially accurate way of estimating their level of intelligence, unless perhaps one is discussing especially intellectually demanding subjects, which tends to be rare in everyday conversation. Moreover, Philippe Rushton proposes that we are led to overestimate the intelligence of black people when interacting with them because their low intelligence is often masked by a characteristic personality profile – “outgoing, talkative, sociable, warm, and friendly”, with high levels of social competence and extraversion – which personality profile itself likely reflects an innate racial endowment (Rushton 2004).

[69] Ironically, although he was later to acquire a reputation among some leftist sociologists as an incorrigible racist who used science (or rather what they invariably refer to as ‘pseudo-science’) to justify the existing racial order, Jensen was in fact first moved to study differences in IQ between races, and the issue of test bias, precisely because he initially assumed that, given the different behaviours of low-IQ blacks and whites, IQ tests might indeed be underestimating the intelligence of black Americans and somehow biased against them (The g Factor: p367). However, his careful, systematic and quantitative research ultimately showed this assumption to be false (see Jensen, Bias in Mental Testing).

[70] Mike Tyson, another celebrated African-American world heavyweight champion, was also recorded as having a similarly low IQ when tested in school. As for Ali’s test results, the conditions for admittance to the military were later lowered to increase recruitment levels, in a programme which became popularly known as ‘McNamara’s Morons’, after the US Defense Secretary responsible for implementing it. This is why Muhammad Ali, despite initially failing the IQ test that was a prerequisite for enlistment, was indeed later called up, and famously refused to serve.
Incidentally, the project to lower standards in order to increase recruitment levels is generally regarded as having been an unmitigated disaster and was later abandoned. Today, the US military no longer uses IQ testing to screen recruits, instead employing the Armed Services Vocational Aptitude Battery, though this, like virtually all tests of mental ability and aptitude, nevertheless taps into the general factor of intelligence, and hence is, in part, an indirect measure of IQ.

[71] My own analogy, in the text above, is between race and species. Thus, I write that it would be no more meaningful to describe a sub-Saharan African with an IQ below 70 as mentally handicapped than it would be to describe a chimpanzee as mentally handicapped simply because chimpanzees are much less intelligent than the average human. This analogy – between race/subspecies and species – is, in some respects, more apposite, since races or subspecies do indeed represent ‘incipient species’ and the first stage of speciation (i.e. the evolution of populations into distinct species). On the other hand, it is not only provocative but also misleading in a quite different way, simply because the differences between chimpanzees and humans in intelligence, and in many other traits, are obviously far greater than those between the different races of mankind, all of whom, of course, represent a single species.

[72] Richard Lynn, in Race Differences in Intelligence (which I have reviewed here), attributes a very low IQ of just 62 to New Guineans, and an even lower IQ, of supposedly just 52, to San Bushmen. However, he draws these conclusions on the basis of very limited evidence, especially in respect of the San (see discussion here). In relation to New Guineans, though, it is worth noting that Lynn provides much more data (mostly from the Australian school system) in respect of the IQs of the Aboriginal population of Australia, to whom New Guineans are closely related, and to whom he ascribes a similarly low average IQ (as discussed here).

[73] I am not sure what evidence Harpending relies on to infer a high average IQ in South India. Richard Lynn, in his book, Race Differences in Intelligence (which I have reviewed here) reports a quite low IQ of 84 for Indians in general, whom he groups, perhaps problematically, with Middle Eastern and North African peoples as, supposedly, a single race.
However, a more recent study, also authored by Lynn in collaboration with an Indian researcher, does indeed report higher average intelligence in South India than in North India, and also in regions with a coastline (Lynn & Yadav 2015).
This, of course, rather contradicts Lynn’s own ‘cold winters theory’, which posits that the demands of surviving a relatively colder winter climate select for higher intelligence, since North India is situated at a higher latitude than South India and, especially in some mountainous regions of the North East, has relatively colder winters.
Incidentally, it also seemingly contradicts any theory of what we might term ‘Aryan supremacy’, since it is the lighter-complexioned North Indians who have greater levels of Indo-European ancestry and speak Indo-Aryan languages, whereas the darker-complexioned South Indians speak Dravidian languages and have much less Indo-European ancestry; hence it is North Indians, together with related groups such as Iranians, not German Nazis, who have the strongest claim to being ‘Aryans’.
South India also today enjoys much higher levels of economic development than does North India.

[74] Ashkenazi Jews, of course, have substantial European ancestry, as a consequence of their long sojourn as a diaspora minority in Europe. The same is true, to some extent, of Sephardi Jews, who trace their ancestry to the Jewish populations formerly resident in, and then expelled from, Spain and Portugal. However, although these are the groups whom Westerners usually have in mind when thinking of Jews, the majority of Jews in Israel today are actually the Mizrahim, who remained resident in the Middle East, if not in Palestine itself, and hence have little or no European admixture.

[75] The fact that apartheid-era South Africa, despite international sanctions, was nevertheless a ‘developed economy’, whereas South Africa today is classed as a ‘developing economy’, of course, ironically suggests that, if South Africa is indeed ‘developing’, it is doing so in altogether the wrong direction.

[76] To take one obvious example, customers at strip clubs and brothels generally have a preference for younger, more physically attractive service providers of a particular sex, and often show a racial preference too.
The economics of discrimination was famously analysed by pioneering Nobel Prize-winning economist Gary Becker.

[77] Some degree of discrimination in favour of blacks, and perhaps other underrepresented demographics, likely continued under the guise of a newly adopted ‘holistic’ approach to university admissions. This involved deemphasizing quantifiable factors such as grades and SAT scores, which means that any discrimination against other demographics (i.e. whites, Asians and males) is less easily measured, and hence less easily proven in a court of law.

[78] It also ought to be noted in this context that the very term ‘meritocracy’ is itself problematic, raising, as it does, the question of how we define ‘merit’, let alone how we objectively measure and quantify it for the purposes of determining, for example, who is appointed to a particular job or has his application to a particular university accepted or rejected. Determining the ‘merit’ of a given person is necessarily a value judgement, and hence inherently a subjective assessment.
Of course, in practice, when people talk of meritocracy in this sense, they usually mean that employers should select the ‘best person for the job’, not ‘merit’ in some abstract cosmic moral sense. In this sense, it is not really ‘merit’ that determines whether a person obtains a given job, but rather their market value in the job market (i.e. the extent to which they possess the marketable skills etc.).
Yet this is not the same thing as ‘merit’. After all, a Premiership footballer may command a far higher salary in the marketplace than, say, a construction worker. However, this is not to say that they are necessarily more meritorious off the football pitch: it is the player’s merits as a footballer that are at issue, not their merits as a person or moral agent. Construction workers surely contribute more to a functioning society.
Market value, unlike merit, is something that can be measured and quantified, and indeed the market itself, left to its own devices, automatically arrives at just such a valuation.
However, although a free market system may approximate meritocracy in this narrow sense, a perfect meritocracy is unattainable even so. After all, employers sometimes make the wrong decision. Moreover, humans have a natural tendency towards nepotism (i.e. promoting their own close kin at the expense of non-kin), and perhaps towards ethnocentrism and racism too.
Thus, as I have written about previously, equal opportunity is, in practice, almost as utopian and unachievable as equality of outcome (i.e. communism).

[79] Sarich and Miele even cite contexts where the salience of racial group identity is supposedly overcome, or at least mitigated:

“The examples of basic military training, sports teams, music groups, and successful businesses show that [animosities between racial, religious and ethnic groups] can indeed be overcome. But doing so requires in a sense creating a new identity by to some extent stripping away the old. Eventually, the individual is able to identify with several different groups” (p242).

Yet, even under these conditions, racial animosities are not entirely absent. For example, despite basic training, racial incidents are hardly unknown in the US military.
Moreover, the cooperation between ethnicities often ends with the cessation of the group activity in question. In other words, as soon as they finish playing for their multiracial sports team, the team members will go back to being racist towards everyone other than their teammates. After all, racists are not known for their intellectual consistency, and racism and hypocrisy tend to go together.
For example, members of different races may work, and fight, together in relative harmony and cohesion in the military. However, military veterans are not noticeably any less racist than non-veterans; if anything, in my limited experience, the pattern seems to be quite the opposite. Indeed, many leaders in the ‘white power’ movement in the USA (e.g. Louis Beam, Glenn Miller) were military veterans, and a recent book, Bring the War Home by Kathleen Belew, even argues that it was the experience of defeat in Vietnam, and, in particular, the return of many patriotic but disillusioned veterans, that birthed the modern ‘white power’ movement.
Similarly, John Allen Muhammad, the ‘DC sniper’, a serial killer and member of the black supremacist Nation of Islam cult who was responsible for killing ten people, and whose accomplice admitted that the killings were motivated by a desire to kill white people, was likewise a military veteran.

[80] Despite the efforts of successive generations of feminists to stir up animosity between the sexes, even sex is not an especially salient aspect of a person’s identity, at least when it comes to group competition. After all, unlike in respect of race and ethnicity, almost everyone has relatives and loved ones of both biological sexes, usually in roughly equal number, and the two sexes are driven into one another’s arms by the biological imperative of the sex drive. As Henry Kissinger is, perhaps apocryphally, quoted as observing:

“No one will win the battle of the sexes. There is too much fraternizing with the enemy”.

Indeed, the very notion of a ‘battle of the sexes’ is a misleading metaphor, since people compete, in reproductive terms, primarily against people of the same sex as themselves in competition for mates.

References

Aiken (2011) Historical determinants of contemporary attributes of African descendants in the Americas: The androgen receptor holds the key, Medical Hypotheses 77(6): 1121-1124.
Allison et al (1993) Can ethnic differences in men’s preferences for women’s body shapes contribute to ethnic differences in female adiposity? Obesity Research 1(6):425-32.
Ankney (1992) Sex differences in relative brain size: The mismeasure of woman, too? Intelligence 16(3–4): 329-336.
Beals et al (1984) Brain Size, Cranial Morphology, Climate, and Time Machines, Current Anthropology 25(3): 301–330.
Braveman et al (1995) Racial/ethnic differences in the likelihood of cesarean delivery, California, American Journal of Public Health 85(5): 625–630.
Carl (2025) Are Richard Lynn’s national IQ estimates flawed? Aporia, January 1.
Coppinger & Schneider (1995) Evolution of working dogs. In: Serpell (ed.) The Domestic Dog: Its Evolution, Behaviour and Interactions with People (pp. 22-47). Cambridge: Cambridge University Press.
Crespi (2016) Autism As a Disorder of High Intelligence, Frontiers in Neuroscience 10: 300.
Draper (1989) African Marriage Systems: Perspectives from Evolutionary Ecology, Ethology and Sociobiology 10(1-3):145-169.
Edmonds et al (2013) Racial and ethnic differences in primary, unscheduled cesarean deliveries among low-risk primiparous women at an academic medical center: a retrospective cohort study, BMC Pregnancy Childbirth 13: 168.
Edwards (2003) Human genetic diversity: Lewontin’s fallacy, BioEssays 25(8): 798–801.
Ellis (2017) Race/ethnicity and criminal behavior: Neurohormonal influences, Journal of Criminal Justice 51: 34-58.
Freedman et al (2006) Ethnic differences in preferences for female weight and waist-to-hip ratio: a comparison of African-American and White American college and community samples, Eating Behaviors 5(3): 191-8.
Getahun et al (2009) Racial and ethnic disparities in the trends in primary cesarean delivery based on indications, American Journal of Obstetrics and Gynecology 201(4): 422.e1-7.
Frost (1994) Geographic distribution of human skin colour: A selective compromise between natural selection and sexual selection? Human Evolution 9(2):141-153.
Frost (2006) European hair and eye color: A case of frequency-dependent sexual selection? Evolution and Human Behavior 27:85-103.
Frost (2008) Sexual selection and human geographic variation, Journal of Social Evolutionary and Cultural Psychology 2(4):169-191.
Frost (2014) The Puzzle of European Hair, Eye, and Skin Color, Advances in Anthropology 4(02):78-88.
Frost (2015) Evolution of Long Head Hair in Humans. Advances in Anthropology 05(04):274-281.
Frost (2023) Do human races exist? Aporia Magazine July 11.
Gichoya et al (2022) AI recognition of patient race in medical imaging: a modelling study, Lancet 4(6): E406-E414.
Greenberg & LaPorte (1996) Racial differences in body type preferences of men for women, International Journal of Eating Disorders 19: 275–8.
Hartung (1995) Love Thy Neighbor: The Evolution of In-Group Morality. Skeptic 3(4):86–98.
Hirschfeld (1996) Race in the Making: Cognition, Culture, and the Child’s Construction of Human Kinds. Contemporary Sociology 26(6).
Jazwal (1979) Skin colour in north Indian populations, Journal of Human Evolution 8(3): 361-366.
Jensen & Johnson (1994) Race and sex differences in head size and IQ, Intelligence 18(3): 309-333.
Juntilla et al (2022) Breed differences in social cognition, inhibitory control, and spatial problem-solving ability in the domestic dog (Canis familiaris), Scientific Reports 12:1.
Lee et al (2019) The causal influence of brain size on human intelligence: Evidence from within-family phenotypic associations and GWAS modeling, Intelligence 75: 48-58.
Lewontin (1972). The Apportionment of Human Diversity.  In: Dobzhansky, T., Hecht, M.K., Steere, W.C. (eds) Evolutionary Biology (New York: Springer).
Lynn & Yadav (2015) Differences in cognitive ability, per capita income, infant mortality, fertility and latitude across the states of India, Intelligence 49: 179-185.
Macdonald (2001) An integrative evolutionary perspective on ethnicity, Politics & the Life Sciences 20(1): 67-8.
Mishra (2017) Genotype-Phenotype Study of the Middle Gangetic Plain in India Shows Association of rs2470102 with Skin Pigmentation, Journal of Investigative Dermatology 137(3): 670-677.
Mitton (1977) Genetic Differentiation of Races of Man as Judged by Single-Locus and Multilocus Analyses, The American Naturalist 111(978): 203–212.
Mitton (1978) Measurement of Differentiation: Reply to Lewontin, Powell, and Taylor, The American Naturalist 112(988): 1142–1144.
Nelson & Witte (2002) Androgen receptor CAG repeats and prostate cancer, American Journal of Epidemiology 155(10): 883-90.
Norton et al (2019) Human races are not like dog breeds: refuting a racist analogy. Evolution: Education and Outreach 12: 17.
Nystrom et al (2008) Perinatal outcomes among Asian–white interracial couples, American Journal of Obstetrics and Gynecology 199(4): 382.e1-382.e6.
Okwandu et al (2021) Racial and Ethnic Disparities in Cesarean Delivery and Indications Among Nulliparous, Term, Singleton, Vertex Women, Journal of Racial and Ethnic Health Disparities 9(4): 1161–1171.
Ostrander (2007) Genetics and the Shape of Dogs, American Scientist 95(5): 406.
Pearce & Dunbar (2011) Latitudinal variation in light levels drives human visual system size, Biology Letters 8(1): 90–93.
Piffer (2013) Correlation of the COMT Val158Met polymorphism with latitude and a hunter-gather lifestyle suggests culture–gene coevolution and selective pressure on cognition genes due to climate, Anthropological Science 121(3):161-171.
Pietschnig et al (2015) Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews 57: 411-432.
Rushton (2004) Solving The African IQ Conundrum : “Winning Personality” Masks Low Scores, Vdare.com August 12.
Rushton & Ankney (2009) Whole Brain Size and General Mental Ability: A Review, International Journal of Neuroscience 119(5): 692-732.
Sailer (1996) Great Black Hopes, National Review, August 12.
Sailer (2019) ‘Richard Jewell’: The Problem With Profiling, Takimag, December 18.
Schluter (1992) Brain size differences, Nature 359:181.
Schoenemann et al (2000) Brain-size does not predict general cognitive ability within families. Proceedings of the National Academy of Sciences, 97:4932–4937.
Singh (1994) Body fat distribution and perception of desirable female body shape by young black men and women, Eating Disorders 16(3): 289-294.
Snook et al (2007) Taking Stock of Criminal Profiling: A Narrative Review and Meta-Analysis, Criminal Justice and Behavior 34(4):437-453.
Swami et al (2006) Female physical attractiveness in Britain and Japan: a cross-cultural study, European Journal of Personality 20(1): 69-81.
Taylor (2021) Making sense of race, American Renaissance, May 14.
Tishkoff et al (2007) Convergent adaptation of human lactase persistence in Africa and Europe, Nature Genetics 39(1): 31-40.
Thompson et al (1996) Black and white adolescent males’ perceptions of ideal body size, Sex Roles 34(5-6): 391–406.
Tooby & Cosmides (1990) On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation, Journal of Personality 58(1):17-67.
Valdes (2020) Examining Cesarean Delivery Rates by Race: a Population-Based Analysis Using the Robson Ten-Group Classification System, Journal of Racial and Ethnic Health Disparities.
Van den Berghe & Frost (1986) Skin color preference, sexual dimorphism, and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9: 87-113.
Whitney (1997) Diversity in the Human Genome, American Renaissance 8(3), March 1997
Whitney (1999) The Biological Reality of Race, American Renaissance 10(10), October 1999.
Wicherts et al (2010) A systematic literature review of the average IQ of sub-Saharan Africans, Intelligence 38(1): 1-20.

Catherine Hakim’s ‘Erotic Capital’: Too Much Feminism; Not Enough Evolutionary Psychology

Catherine Hakim, Honey Money: The Power of Erotic Capital (London: Allen Lane 2011)

Catherine Hakim, a British sociologist – proudly displaying her own ‘erotic capital’ in a photograph on the dust jacket of the hardcover edition of her book – introduces her concept of ‘erotic capital’ in this work, variously titled either ‘Honey Money: The Power of Erotic Capital’ or ‘Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom’.[1]

Although Hakim insists this concept of ‘erotic capital’ is original to her, in reality it appears to be little more than social science jargon for sex appeal – a new term for a familiar concept, introduced to disguise its lack of originality.[2]

Certainly, Hakim may be right that economists and sociologists have often failed to recognize and give sufficient weight to the importance of sexual attractiveness in human relations. However, this reflects only the prejudices, puritanism and prudery of economists and sociologists, not the originality of the concept.

In fact, the importance of sexual attractiveness in human affairs has been recognized by intelligent laypersons, poets and peasants from time immemorial. It is also, of course, a central focus of much research in evolutionary psychology.

Hakim maintains that her concept of ‘erotic capital’ is broader than mere sex appeal by suggesting that even heterosexual people tend to admire and enjoy the company of individuals of the same sex with high levels of erotic capital:

“Even if they are not lesbian, women often admire other women who are exceptionally beautiful, or well-dressed, and charming. Even if they are not gay, men admire other men with exceptionally well-toned, ‘cut’ bodies, handsome faces and elegant social manners” (p153).

There is perhaps some truth to this.

For example, I recall hearing that the audiences at (male) bodybuilding contests are, perhaps oddly, composed predominantly of heterosexual men. Similarly, since action movies are a genre that appeals primarily to male audiences, it was presumably heterosexual men and boys who represented the main audiences for Arnold Schwarzenegger action movies during his 1980s heyday, and they were surely not attracted by his acting ability. Indeed, I am reminded of this meme.[3]

Likewise, heterosexual women seem, in many respects, even more obsessed with female beauty than are heterosexual men. Indeed, this is arguably not very surprising, since beauty is of far more importance to women than to men, because their own marital prospects, and hence socioeconomic status, depend substantially upon it.

Thus, just as pornographic magazines, which, until eclipsed in the internet age, attracted an overwhelmingly male audience, were filled with pictures of beautiful, sexy women in various states of undress, so fashion magazines, which attracted an audience as overwhelmingly female as porn’s was male, were likewise filled with pictures of beautiful, sexy women, albeit somewhat less explicit and wearing more clothes.

However, if men do indeed sometimes admire muscular men, and women do sometimes admire beautiful women, I nevertheless suspect people are just as often envious of and hence hostile towards same-sex rivals whom they perceive as more attractive than themselves.

Indeed, there is even some evidence for this.

In her book, Survival of the Prettiest (which I have reviewed here), Nancy Etcoff reviews many of the advantages associated with good looks, as does Catherine Hakim in Honey Money. However, Etcoff, for her part, also identifies at least one area where beautiful women are apparently at a disadvantage – namely, they tend to have difficulty holding down friendships with other women, presumably on account of jealousy:

“Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (Survival of the Prettiest: p50; citing Krebs & Adinolfi 1975).[4]

Interestingly, sexually insightful French novelist Michel Houellebecq, in his novel, Whatever, suggests that the same may be true for exceptionally handsome men. Thus, he writes:

“Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63).

A Sex Difference in Sexiness?

Besides introducing her supposedly novel concept of ‘erotic capital’, Hakim’s book purports to make two original discoveries, namely that:

  1. Women have greater erotic capital than men do; and
  2. Because men have a greater sex drive than women, “there is a systematic and apparently universal male sex deficit: men generally want a lot more sex than they get” (p39).

However, once one recognizes that ‘erotic capital’ essentially amounts to sex appeal, it is doubtful whether these two claims are really conceptually separate.

Rather, it is the very fact that men are not getting as much sex as they want that explains why women have greater sex appeal than men, because men are always on the lookout for more sex – or, to put the matter the other way around, it is women’s greater levels of sex appeal (i.e. ability to trigger the male sex drive) that explains why heterosexual men want more sex than they can get. After all, it is sex appeal that drives the desire for sex, just as it is one person’s desire for sex that invests the person with whom they desire to have sex with sex appeal.

Indeed, as Hakim herself acknowledges:

“It is impossible to separate women’s erotic capital, which provokes men’s desire… from male desire itself” (p97).

Evolutionary Psychology

Yet there is a curious omission in Hakim’s otherwise comprehensive review of the literature on this topic, one that largely deprives her exposition of its claims to originality.

Save for two passing references (at p88 and in an endnote at p320), she omits any mention of a theoretical approach in the human behavioural sciences which has, for at least thirty years prior to the publication of her book, not only focused on sexual attractiveness and recognized what Hakim refers to as the ‘universal male sex deficit’ (albeit not by this name), but also provided a compelling theoretical explanation for this phenomenon, something conspicuously absent from her own exposition – namely, evolutionary psychology and sociobiology.

According to evolutionary psychologists, men have evolved a greater desire for sex, especially commitment-free promiscuous sex, because it enabled them to increase their reproductive success at minimal cost, whereas the reproductive rate of women was more tightly constrained, burdened as they are with the costs of both pregnancy and lactation.

This insight, known as Bateman’s principle, dates back over sixty years (Bateman 1948); it was rediscovered, refined and formalized by Robert Trivers in the 1970s (Trivers 1972), and applied explicitly to humans from at least the late 1970s with the publication of Donald Symons’ seminal The Evolution of Human Sexuality (which I have reviewed here).

Therefore, Hakim is disingenuous in claiming:

“Only one social science theory [namely, Hakim’s own] accords erotic capital any role at all” (p156).

Yet, despite her otherwise comprehensive review of the literature on sexual attractiveness and its correlates, including citations of some studies conducted by evolutionary psychologists themselves to test explicitly sociobiological theories, one searches the index of her book in vain for any entry for ‘evolutionary psychology’, ‘sociobiology’ or ‘behavioural ecology’.[5]

Yet Hakim’s book often merely retreads ground that evolutionary psychologists covered decades previously.

For instance, Hakim treats male homosexual promiscuity as a window onto the nature of male sexuality when it is freed from the constraints imposed by women (p68-71; p95-6).

Thus, as evidence that men have a stronger sex drive than women, Hakim writes:

Paradoxically, the most compelling evidence of this comes from homosexuals, who are relatively impervious to the brainwashing and socialization of the heterosexual majority. Lesbian couples enjoy sex less frequently than any other group. Gay male couples enjoy sex more frequently than any other group—and their promiscuous lifestyle makes them the envy of many heterosexual men. Gay men in long-term partnerships who have become sexually bored with each other maintain an active sex life through casual sex, hookups, and promiscuity. Even among people who step outside the heterosexual hegemony to carve out their own independent sexual cultures, men are much more sexually active than women, on average” (p95-6).

Here, Hakim echoes, but conspicuously fails to cite or acknowledge the work of evolutionary psychologist Donald Symons, who, in his seminal The Evolution of Human Sexuality (which I have reviewed here), first published in 1979, some three decades before Hakim’s own book, pioneered this exact same approach, in his ninth chapter, titled ‘Test Cases: Hormones and Homosexuals’. Thus, Symons writes:

I have argued that male sexuality and female sexuality are fundamentally different, and that sexual relationships between men and women compromise these differences; if so, the sex lives of homosexual men and women—who need not compromise sexually with members of the opposite sex—should provide dramatic insight into male sexuality and female sexuality in their undiluted states. Homosexuals are the acid test for hypotheses about sex differences in sexuality” (The Evolution of Human Sexuality: p292).

To this end, Symons briefly surveys the rampant promiscuity of American gay culture in the pre-AIDS era when he was writing, including the then-prevalent practice of gay men meeting strangers for anonymous sex in public lavatories, gay bars and exclusively gay bathhouses (The Evolution of Human Sexuality: p293-4).

He then contrasts this hedonistic lifestyle with that of lesbians, whose romantic relationships typically mirror heterosexual relationships, being characterized by long-term pair bonds and monogamy.

This similarity between lesbian relationships and heterosexual coupling, and the stark contrast with rampant homosexual male promiscuity, suggests, Symons argues, that, contrary to feminist dogma, which asserts that it is men who both dictate and primarily benefit from the terms of heterosexual coupling, it is in fact women who dictate the terms of heterosexual coupling in accordance with their own interests and desires (The Evolution of Human Sexuality: p300).

Thus, as popular science writer Matt Ridley writes:

Donald Symons… has argued that the reason male homosexuals on average have more sexual partners than male heterosexuals, and many more than female homosexuals, is that male homosexuals are acting out male tendencies or instincts unfettered by those of women” (The Red Queen: p176).

This is, of course, virtually exactly the same argument that Hakim is making, using exactly the same evidence, but Symons is nowhere cited in her book.

Hakim again echoes the work of Donald Symons in noting the absence of a market for pornography among women to mirror the extensive market for pornography produced for male consumers.

Thus, before the internet age, magazines featuring primarily nude pictures of women commanded sizable circulations despite the stigma attached to their purchase. In contrast, Hakim reports:

The vast majority of male nude photography is produced by men for male viewers, often with a distinctly gay sensibility… Women should logically be the main audience for male nudes, but they display little interest. Most of the erotic magazines aimed at women in Europe have failed, and almost none of the photographers doing male nudes are women. The taste for erotica and pornography is typically a male interest, whether heterosexual or homosexual in character…The lack of female interest in male nudes (at least to the same level as men) demonstrates both lower female sexual interest and desire, and the higher erotic value of the female nude in almost all cultures —with a major exception being ancient Greece” (p71).

Yet here again Hakim directly echoes, but fails to cite, Donald Symons, who, in his seminal The Evolution of Human Sexuality, citing the Kinsey Reports, observed:

Enormous numbers of photographs of nude females and magazines exhibiting nude or nearly nude females are produced for heterosexual men; photographs and magazines depicting nude males are produced for homosexual men, not for women” (The Evolution of Human Sexuality: p174)

This Symons calls “the natural experiment of commercial periodical publishing” (The Evolution of Human Sexuality: p182).

Similarly, just as Hakim notes that “the vast majority of male nude photography is produced by men for male viewers, often with a distinctly gay sensibility” (p71), so Symons three decades earlier concluded:

That homosexual men are at least as likely as heterosexual men to be interested in pornography, cosmetic qualities and youth seems to me to imply that these interests are no more the result of advertising than adultery and alcohol consumption are the result of country and western music” (The Evolution of Human Sexuality: p304).

However, Symons’s pioneering book on the evolutionary psychology of human sexuality is not cited anywhere in Hakim’s book, and neither is it listed in her otherwise quite extensive bibliography.

Sex Surveys

Another odd omission from Hakim’s book is that, while she extensively cites the findings of numerous ‘sex surveys’ replicating the robust finding that men report more sexual partners over any given timespan than women do, Hakim never grapples with, and only once in passing alludes to, the obvious problem that (homosexual encounters aside) every sexual encounter must involve both a male and a female, such that, on average, given the approximately equal numbers of both males and females in the population as a whole (i.e. an equal sex ratio), men and women must have roughly the same average number of sex partners over their lifetimes.[6]
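The point is a simple matter of counting; as a sketch in LaTeX, assuming a closed population and counting only heterosexual partnerships: let P be the total number of distinct man–woman pairings, and N_m and N_w the numbers of men and women. Every pairing adds exactly one partner to one man’s tally and one to one woman’s, so the means are

    \bar{p}_m = \frac{P}{N_m}, \qquad \bar{p}_w = \frac{P}{N_w}, \qquad \therefore \; \frac{\bar{p}_m}{\bar{p}_w} = \frac{N_w}{N_m}.

With an approximately equal sex ratio (N_m ≈ N_w), the two means must be approximately equal, so surveys reporting substantially higher male means must reflect sampling or reporting error rather than real behaviour.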

Two explanations have been offered for this anomalous finding. Firstly, there may be a small number of highly promiscuous women – i.e. prostitutes – whom surveys generally fail to adequately sample (Brewer et al 2000).
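Purely illustrative numbers show how a small, undersampled group could reconcile equal true means with unequal reported ones. Suppose 0.5% of women average 500 partners each, while the remaining 99.5% average 5; the true female mean is then

    0.995 \times 5 + 0.005 \times 500 \approx 7.5,

but a survey that missed the small promiscuous group entirely would report a female mean of only 5, even though the male mean, which counts those same encounters from the other side, remains near 7.5.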

Alternatively, it is suggested, not unreasonably, that respondents may be dishonest even in ostensibly anonymous surveys, especially when they deal with sensitive subjects such as a person’s sexual experience and behaviours.

Popular stereotype has it that it is men who lie in sex surveys in order to portray themselves as more promiscuous and hence ‘successful with women’ than they really are.

However, while this claim seems to be mostly conjecture, there is actual data showing that women are also dishonest in sex surveys, lying about their number of sex partners for precisely the opposite reason – namely to appear more innocent and chaste, or at least less rampantly slutty, than they really are, given the widespread demonization of promiscuity among women.

Thus, one interesting study found that women report relatively more sexual partners when they believe their answers are anonymous than when they believe their answers may be viewed by the experimenter, and more still when they believe they are hooked up to a polygraph machine designed to detect dishonest answers. Indeed, in the fake lie-detector condition, female respondents actually reported more sexual partners than did male respondents (Alexander and Fisher 2003).

A further factor may be that men and women define ‘sex’ differently, at least for the purposes of answering sex surveys, perhaps exploiting the same sort of semantic ambiguities that Bill Clinton sought to exploit to evade perjury charges in relation to his claim not to have had ‘sexual relations’ with Monica Lewinsky.

Paternity Certainty, Mate Guarding and the Suppression of Female Sexuality

Hakim claims men have suppressed women’s exploitation of their erotic capital because they are jealous of the fact that women have more of it, and wish to stop women taking advantage of this superiority. Thus, she claims:

“Men have taken steps to prevent women exploiting their one major advantage over men, starting with the idea erotic capital is worthless anyway. Women who openly deploy their beauty or sex appeal are belittled as stupid, lacking in intellect and other ‘meaningful’ social attributes” (p75).

In particular, Hakim views so-called ‘sexual double-standards’ and the puritanical attitudes expressed by many religions (especially Christianity and Islam) as mechanisms by which men suppress female sexuality and thereby prevent women taking advantage of their greater levels of ‘erotic capital’ or sex appeal as compared to men.

Citing the work of female historian Gerda Lerner, Hakim claims that men established patriarchy and sought to control the sexuality of women so as to assure themselves of the paternity of their offspring:

“Patriarchal systems of control and authority were developed by men who wanted to be sure that their land and property, whatever they were, would be passed on to their own biological children” (p77).

However, she fails to explain the ultimate evolutionary reason why men would ever even be interested in, or care about, the paternity of the offspring who inherit their property.

Here, of course, evolutionary psychology provides a ready and compelling explanation.

Evolutionary psychologists contend that the human male’s interest in the paternity of his putative offspring ultimately reflects the sociobiological imperative of maximizing reproductive success by securing the passage of his genes into subsequent generations, and his concern that his parental investment not be maladaptively misdirected towards offspring fathered, not by himself, but by a rival male.

Yet Hakim is evidently unaware of, or at least does not cite, the substantial scientific literature in evolutionary psychology on male sexual jealousy and mate guarding (e.g. Wilson & Daly 1992; Buss et al 1992).

Had Hakim familiarized herself with this literature, and with the literature on mate guarding among non-human animals, she might have spared herself her next error. For on the very next page, citing another female historian, one Julia Stonehouse, Hakim purports to trace men’s efforts to control women’s sexuality back to the supposed discovery of the role of sex – and of men – in reproduction around 3000 BC (p78-9).

“At the beginning of civilization, from around 20000 BC to 8000 BC, there were no gods, only goddesses who had the magical power to give birth to new life quite independently… Men were seen to have no role at all in reproduction up to around 3000 BC… Theories of reproduction changed around 3000 BC – man was suddenly presented as sowing the ‘seed’ that was incubated by women to deliver the man’s child… Control of women’s sexuality started only when men believed they planted the unique seed that produces a baby” (p78-9).[7]

This would seem a very odd claim to anyone with a background in biology, especially in sociobiology, behavioural ecology and animal behaviour.

Hakim is apparently unaware that naturalists have long observed analogous patterns of what biologists call mate guarding among non-human species – species that are, of course, surely not consciously (or even subconsciously) aware of the relationship between sexual intercourse and reproduction, but that have nevertheless been programmed by natural selection to engage in mate-guarding behaviours, without any awareness of the ultimate evolutionary function of such behaviour, simply because such behaviours tend to maximize reproductive success.

For example, analogous behaviours are observed among our closest extant nonhuman relatives, namely chimpanzees. Thus, Jane Goodall, in her seminal study of chimpanzee behaviour in the wild, describes how the dominant ‘alpha male’ within a troop of chimpanzees will attempt to prevent any males other than himself from mating with a fertile estrus female, though she acknowledges:

The best that even a powerful alpha male can, realistically, hope to do is to ensure that most of the copulations around the time of ovulation are his” (The Chimpanzees of Gombe: p473).

In addition, she reports how even subordinate males sometimes successfully sequester fertile females into consortships, whereby they seclude a fertile female, often forcibly, leading her to a peripheral part of the group’s home range so as to monopolize sexual access to her until her period of maximum fertility and sexual receptivity has passed (The Chimpanzees of Gombe: p453-465).

Such chimpanzee consortships sometimes involve force and coercion, but at other times seem to be largely consensual. We might therefore characterize them as the rough chimpanzee equivalent of something in between:

  1. Taking your wife or girlfriend away for a romantic weekend in Paris; or
  2. Kidnapping a teenage girl and keeping her locked in the basement as a sex slave.

At any rate, although chimpanzees are almost certainly unaware of the role of sexual intercourse, and of males, in reproduction, they nevertheless engage in mate-guarding behaviours simply because such behaviours tended to maximize their reproductive success in ancestral environments.

Indeed, more controversially, Goodall herself even tentatively proposes an analogy with human sexual jealousy, noting that:

“[Some] aggressive interventions [among chimpanzees] appear to be caused by feelings of sexual and social competitiveness which, if we were describing human behavior, we should label jealousy” (The Chimpanzees of Gombe: p326).

Thus, if our closest relatives among extant primates, along with humans themselves, evince something akin to sexual jealousy and male sexual proprietariness, then it is a fair bet that our common ancestor with chimpanzees did too, and hence that mate-guarding was also practised by our prehuman ancestors, and certainly predates 3000 BC, the oddly specific date posited by Hakim and Stonehouse.

Certainly, mate-guarding does not require, or presuppose, any conscious (or indeed subconscious) awareness of the role of sexual intercourse – or even of males – in reproduction.[8]

Who Is Responsible for the Stigmatization of Promiscuity?

As for Hakim’s claim that men have suppressed women’s exploitation of their erotic capital because they are jealous that women have more of it and wish to stop women capitalizing on this superiority, this also seems very dubious.

Take, for example, the stigmatization of sex workers such as prostitutes, a topic to which Hakim herself devotes considerable attention. Hakim argues that this stigma results from men’s envy of women’s greater levels of erotic capital and their desire to prevent women from exploiting this advantage to the full.

Thus, she writes:

“The most powerful and effective weapon deployed by men to curtail women’s use of erotic capital is the stigmatization of women who sell sexual services” (p75).

Unfortunately, however, this theory is plainly contradicted by the observation that women are actually generally more censorious of promiscuity and prostitution than are men (Baumeister and Twenge 2002).

In contrast, men, for obvious reasons, rather enjoy the company of prostitutes and other promiscuous women – although it is true that, due to concerns regarding paternity certainty, they may not wish to marry them.

Hakim, for her part, acknowledges that:

“The stigma attached to selling sexual services in the Puritan Christian world… is so complete that women are just as likely as men to condemn prostitution and prostitutes. Sometimes women are even more hostile, and demand the eradication (or regulation) of the industry more fiercely than men, a pattern now encouraged by many feminists” (p76).

Going further, in an associated endnote, she even concedes:

“In Sweden, the 1996 sex survey showed women objected to prostitutes twice as often as men: two fifths of women versus one fifth of men thought that both buyers and sellers should be treated as criminals” (p282).

Yet this pattern is by no means limited to Sweden, but rather appears to be universal. Thus, Baumeister and Twenge report:

“Women seem consistently more opposed than men to prostitution and pornography. Klassen, Williams, and Levitt (1989) reported the results of a survey asking whether prostitution is ‘always wrong’. A majority (69%) of women, but only a minority (45%) of men, were willing to condemn prostitution in such categorical terms. At the opposite extreme, about three times as many men (17%) as women (6%) responded that prostitution is not wrong at all” (Baumeister and Twenge 2002).

Indeed, men appear to be more liberal, permissive and tolerant, and women more censorious, in respect of virtually all aspects of sexual morality. Thus, women are much more likely than men to disapprove of pornography, promiscuity, prostitution, premarital sex, sex with robots and household appliances and other such fun and healthy recreational activities (see Baumeister and Twenge 2002).[9]

Faced with this overwhelming evidence, Hakim is forced to acknowledge:

“If women in Northern Europe object to the commercial sex industry more strongly than men, this seems to destroy my argument that the stigmatization and criminalization of prostitution is promoted by patriarchal men” (p76).

However, Hakim has a ready, if not entirely convincing, response, maintaining that:

“Over time women have come to accept and actively support ideologies that constrain them” (p77).

And also that:

“Women have generally had the main responsibility for enforcing constraints but did not invent them” (p273, note 20).

However, this effectively reduces women to mindless puppets without agency of their own.

It also fails to explain why women are actually more puritanical than are men themselves.

Perhaps evil, devious, villainous, patriarchal men could somehow have manipulated women, against their own better interests, into being somewhat puritanical, or perhaps even as puritanical as men themselves. However, they are unlikely to have succeeded in manipulating women into becoming even more puritanical than the evil male geniuses supposedly doing the manipulating.

Hakim’s Mythical ‘Male Sex Right’

Hakim suggests that sexual morality reflects what she calls a “male sex right” (p82).

Thus, she argues that the moral opprobrium attaching to gold-diggers and prostitutes reflects the supposed patriarchal assumption that:

“Men should get what they want for free, especially sex” (p79).

“Men should not have to pay women for sexual favours or erotic entertainments [and] men should get what they want for free” (p98).

However, this theory is plainly contradicted by three incontestable facts.

First, promiscuous sex is stigmatized even where it does not involve payment. Thus, if prostitutes are indeed stigmatized, so are ‘sluts’ who engage in sex promiscuously but without any demand for payment.

Second, marriage is not condemned by moralists but rather held up as a moral ideal despite the fact that, as Hakim herself acknowledges, it usually involves a trade of sexual access in return for financial support – i.e. disguised (and overpriced) prostitution.

Third, far from advocating, as suggested by Hakim, that men should ‘get sex for free’, Christian moralists traditionally promoted abstinence and celibacy, especially before marriage, outside of marriage, and, for those held in highest regard by the church (i.e. nuns, monks and priests), permanently.[10]

In short, what is condemned by moralists seems to be the promiscuity itself, not the demand for payment.

After all, if there really were a “male sex right”, as contended by Hakim, then rape would presumably be, not a crime, but rather a basic, universal and inalienable human right!

Puritanism and Prudery as Price-fixing Among Prostitutes

A more plausible theory of the stigmatization of sex work might be sought, not in the absurd fallacies of feminism, but in the ‘dismal science’ of economics.

On this view, what is stigmatized is not the sale of sex itself, but rather its availability at too low a price.

Sex available at too low a price risks undercutting other women and driving down the prices that the latter can themselves hope to demand for sexual services.

On this view, if men can get bargain basement blowjobs outside of marriage or similar ‘committed’ relationships, then they will have no need to pursue such relationships and women will lose the economic security with which these relationships provide them.

Hakim claims that sexual morality reflects the assumption that:

Men should get what they want for free, especially sex” (p79).

My own view is almost the opposite. Sexual morality reflects the assumption, not that men should be able to get sex for free, but rather that they should be obliged to pay a hefty price (e.g. the ultimate price – marriage), and certainly a lot more than is typically demanded by prostitutes.

Aside from myself, this view has been most comprehensively developed by psychologist Roy Baumeister and colleagues. Thus, Baumeister and Vohs write:

“The so-called ‘cheap’ woman (the common use of this economic term does not strike us as accidental), who dispenses sexual favors more freely than the going rate, undermines the bargaining position of all other women in the community, and they become faced with the dilemma of either lowering their own expectations of what men will give them in exchange for sex or running the risk that their male suitors will abandon them in favor of other women who offer a better deal” (Baumeister and Vohs 2004: p358).

On this view, women’s efforts to prevent other women from capitalizing on their sex appeal are, as Baumeister and Vohs put it, analogous to:

“Other rational economic strategies, such as OPEC’s efforts to drive up the world price of oil by inducing member nations to restrict their production” (Baumeister and Vohs 2004: p357).
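
To make the cartel logic concrete, here is a minimal sketch of the reasoning, assuming a simple linear demand curve; the function and all parameter values below are hypothetical illustrations of mine, not figures from Baumeister and Vohs:

```python
# Sketch of cartel price-fixing on a downward-sloping demand curve.
# All numbers are illustrative assumptions, not empirical estimates.

def market_price(quantity: float, a: float = 100.0, b: float = 2.0) -> float:
    """Inverse linear demand: price = a - b * quantity."""
    return a - b * quantity

competitive_q = 40.0  # total output under open competition (hypothetical)
cartel_q = 25.0       # output after members agree to restrict supply

p_comp = market_price(competitive_q)   # 100 - 2*40 = 20
p_cartel = market_price(cartel_q)      # 100 - 2*25 = 50

print(f"competitive price: {p_comp:.0f}, cartel price: {p_cartel:.0f}")
# Total revenue rises from 40 * 20 = 800 to 25 * 50 = 1250, so every member
# gains from the restriction, provided no one defects by selling 'cheap'.
```

The final comment is the crux of the analogy: the ‘cheap’ seller of Baumeister and Vohs’s account plays precisely the role of the cartel member who exceeds her production quota.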

Interestingly, an identical analogy – between the supply of oil and of sex – had earlier been adopted by Warren Farrell in his excellent The Myth of Male Power (which I have reviewed here), where he wrote:

“In the Middle East, female sex and beauty are to Middle Eastern men what oil and gas are to Americans: the shorter the supply the higher the price. The more women ‘gave’ away sex for free, or for a small price, the more the value of every woman’s prize would be undermined, which is why anger toward prostitution, purdah violation (removing the veil), and pornography runs so deep, especially among women. It is also why parents told daughters, ‘Don’t be cheap.’ ‘Cheap’ sex floods the market” (The Myth of Male Power: p77).

This then explains why women are generally more puritanical and censorious of promiscuity, prostitution and pornography than are men.

It might also explain why feminism and puritanical anti-sex attitudes tend to go together.

Hakim herself insists that feminist campaigners against prostitution, pornography and other such fun and healthy recreational activities are the unwitting dupes of their patriarchal oppressors, having inadvertently internalized ‘patriarchal’ norms that demonize sex work and women’s legitimate exploitation of their erotic capital for financial gain.

In fact, however, the feminists are probably acting in their own selfish best interests by opposing such activities. As Donald Symons explains in his excellent The Evolution of Human Sexuality (which I have reviewed here):

“The gain in power to control heterosexual interaction that accompanies the reduction of sexual pleasure is probably one reason… that feminism and antisexuality often go together… As with more recent feminist movements the militant suffrage movement in England before World War I ‘never made sexual freedom a goal, and indeed the tone of its pronouncements was more likely to be puritanical and censorious on sexual matters than permissive: ‘Votes for women and chastity for men’ was one of Mrs Pankhurst’s slogans’… Much recent feminist writing about female sexuality… emphasize[s] masturbation and, not infrequently, lesbianism, which in some respects are politically equivalent to antisexuality” (The Evolution of Human Sexuality: p262).

However, if feminist prudery is rational in reflecting the interests of feminist prudes, it does not reflect the interests of women in general. Indeed, to represent the interests of women as a whole (as feminists typically purport to do) is almost impossible, because the interests of different women conflict, not least since women are in reproductive competition primarily with one another. Thus, Symons observes:

“Feminist prostitutes and many nonprostitute, heterosexual feminists are in direct competition, and it should be no surprise that they are often to be found at one another’s throats” (The Evolution of Human Sexuality: p260).

This, he explains, is because:

“To the extent that heterosexual men purchase the services of prostitutes and pornographic masturbation aids, the market for the sexual services of nonprostitute women is diminished and their bargaining position vis-à-vis men is weakened… The implicit belief of heterosexual feminists such as Brownmiller that, in the absence of prostitution and pornography, men will come to want the same kinds of heterosexual relationships that women want may be an attempt to underpin morally a political program whose primary goal is to improve the feminists’ own bargaining position” (The Evolution of Human Sexuality: p260).

Hakim does not really address this alternative and, in my view, far more plausible theory of the origins of, and rationale behind, sexual prudery and puritanism. Indeed, she does not even mention this alternative explanation for the stigmatization and criminalization of sex work anywhere in the main body of her text, instead only acknowledging its existence in two endnotes (p273 & p283).

In both endnotes, she gives the theory little consideration, rejecting it summarily and rather dismissively. On the first occasion, she gives no real reason for rejecting it, merely commenting that, in her opinion, Baumeister and Twenge (2002), who champion this theory:

“Confuse distal and proximate causes, policy-making and policy implementation. Women generally have the main responsibility for enforcing constraints but do not invent them” (p273, note 20).

On the second occasion, she simply claims, in a single throwaway sentence:

“The trouble with this argument is of course that marital relationships are not comparable with casual relationships” (p283, note 8).

However, although this sentence includes the words “of course”, its conclusion is by no means self-evident, and Hakim provides no evidence in support of this conclusion in the endnote.

Admittedly, she does briefly expand upon the same idea at a different point in her text, where she similarly contends:

“The dividing line between the two markets [i.e. mating markets involving short-term relationships and long-term relationships] is sufficiently important for there to be little or no competition between the two markets” (p235).

This, however, seems doubtful. From a male perspective, both long-term and short-term relationships may serve identical ends – namely access to regular sex.[11]

Therefore, paying a prostitute may represent an alternative (often cheaper) substitute for the time and expense of conventional courtship.

As Donald Symons puts it:

“The payment of money and the payment of commitment are not psychologically equivalent, but they may be economically equivalent in the heterosexual marketplace” (The Evolution of Human Sexuality: p260).

Indeed, conventional courtship almost invariably involves the payment of money by the male partner (e.g. for dates).

Thus, as I have written previously:

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

Thus, according to Baumeister and Twenge:

“Just as any monopoly tends to oppose the appearance of low-priced substitutes that could undermine its market control, women will oppose various alternative outlets for male sexual gratification” (Baumeister and Twenge 2002: p172).

As explained by R.B. Tobias and Mary Marcy in their forgotten early twentieth-century Marxist-masculist masterpiece, Women As Sex Vendors (which I have reviewed here and here), street prostitutes, especially those supporting a pimp, are stigmatized simply because:

“These women are selling below market or scabbing on the job” (Women As Sex Vendors: p29).

What’s That Got to Do with the Price of Prostitutes?

Particularly naïve, if not borderline economically illiterate, are Hakim’s conclusions regarding the likely effect of the decriminalization of prostitution on the prices prostitutes are able to demand for their services. Thus, she writes:

“The only realistic solution to the male sex deficit is the complete decriminalization of the sex industry. It should be allowed to flourish like other leisure industries. The imbalance in sexual interest would be resolved by the laws of supply and demand, as it is in other entertainments. Men would probably find they have to pay more than they are used to” (p98).

In fact, far from men “find[ing] they have to pay more than they are used to”, the usual consequence of the decriminalization of the sale of a commodity is a fall in the value of this commodity, not a rise.

This is because criminalization imposes additional costs on suppliers, not least the risk of prosecution. These costs are almost invariably more than enough to offset both the absence of regulation and taxes and the reduced demand attendant on criminalization, demand being suppressed less than supply because consumers generally face a lesser risk of prosecution than do suppliers.[12]

Thus, with the coming into force of the Volstead Act in 1920, which banned the manufacture and sale of alcoholic beverages throughout the USA, the price of alcohol is said to have roughly tripled or even quadrupled.

Similarly, the legalization of marijuana in many US states seems to have been associated with a drop in its price, albeit not as great a fall as some opponents (and no few advocates!) of legalization apparently anticipated.
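
The same point can be put as a toy supply-and-demand model. The sketch below assumes simple linear curves, with criminalization modelled as a shift in the supply curve reflecting suppliers’ risk costs and a smaller penalty on demand; all parameter values are hypothetical illustrations, not estimates drawn from Hakim or anyone else:

```python
# Toy linear model of how criminalization shifts the equilibrium price.
# All parameter values are hypothetical illustrations.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Demand q = a - b*p and supply q = c + d*p; returns (price, quantity)."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Legal market (baseline parameters).
p_legal, q_legal = equilibrium(a=100, b=2, c=10, d=3)    # price 18

# Criminalized market: suppliers bear a large risk cost (the supply intercept
# falls sharply), while demand falls only modestly, since buyers typically
# face a lesser risk of prosecution than sellers.
p_crim, q_crim = equilibrium(a=90, b=2, c=-20, d=3)      # price 22

print(f"legal price: {p_legal:.0f}, criminalized price: {p_crim:.0f}")
# The criminalized price is higher, so decriminalization implies a fall
# in price, the opposite of Hakim's prediction.
```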

Indeed, later in her book, rather contradicting herself, Hakim admits:

“In countries where the [sex] trade is criminalized, such as the United States and Sweden, the local price of sexual services can be pushed higher, due to higher risks” (p165).

And also that:

“In countries where prostitution is criminalized, fees can sometimes be higher than in countries where it is legal, due to scarcity and higher risks” (p87).

In short, all the evidence suggests that, if prostitution were entirely decriminalized, or, better still, destigmatized as well, then, far from men “find[ing] they have to pay more than they are used to”, in fact the price of prostitutes would drop considerably.

Hakim writes:

“Women offering sexual services can earn anywhere between double and fifty times more than they could earn in ordinary jobs, especially jobs at a comparable level of education. This world of greater opportunity is something that men would prefer women not know about. This is the principal reason why providing sexual services is stigmatized… to ensure women never learn anything about it” (p229).

In reality, however, far from this being something that “men would prefer women not know about”, men would benefit if more women were aware of, and took advantage of, the high earnings available to them in the sex industry – because then more women would presumably enter this line of work and hence prices would be driven down by increased competition.

In addition, if more women worked in the sex industry, fewer would be competing for jobs with men in other industries.

In contrast, the main losers would be existing sex workers, who would find that they have to drop their prices in order to cope with increased competition from other service providers – and perhaps also women in pursuit of husbands, who would find that, with bargain basement blowjobs available from prossies, more and more men have little need to subject themselves to the inequities and indignities of marriage and conventional courtship, which, of course, offer huge economic benefits to women precisely because they are, compared to purchasing the services of prostitutes, such a bad deal for men.

Sexual Double-Standards Cut Both Ways

Arguing that the stigmatization of sex work is “the most powerful and effective weapon deployed by men to curtail women’s use of erotic capital”, Hakim points to the fact that this “stigma… never affects men who sell sex quite so much” as evidence that this stigma was invented by, and hence serves the interests of, evil male oppressors.

Thus, she contends:

“The patriarchal nature of… [negative] stereotypes [about sex workers] is exposed by quite different perceptions of men who sell sex: attitudes here are ambivalent, conflicted, unsure” (p76).

I would contend that there is a more convincing economic explanation as to why males providing sexual services are relatively less stigmatized – namely, gigolos and rent-boys, in offering services to women and homosexual men, do not threaten to undercut the prices demanded by non-prostitute women on the hunt for husbands.

Indeed, the proof that there is nothing whatever patriarchal about these differing perceptions is provided by the fact that, in respect of long-term relationships, these ‘double-standards’ are reversed.

Thus, whereas ‘homemaker’ or ‘housewife’ is a respectable occupation for a woman, attitudes towards ‘househusbands’ who are financially dependent on their wives are – to adopt Hakim’s own phraseology – ‘ambivalent, conflicted, unsure’.

Meanwhile, men who are financially dependent on their partners and whose partners happen to work in the sex industry – i.e. pimps – are actually criminalized for their purportedly exploitative lifestyle.

However, the lifestyle of a pimp is actually directly analogous to that of a housewife/homemaker – both are economically dependent on their sexual partners and both are notorious for spending an exorbitant proportion of their sexual partner’s earnings on items such as clothing and jewellery.

Women’s Sexual Power – Innate or Justly Earned?

Hakim argues that exploitation of sex appeal for financial gain – e.g. working in the sex industry, marrying for money or flirting with the boss for promotions – ought to be regarded as a perfectly legitimate means of social, occupational and economic advancement.

In defending this proposition, she resorts to ad hominem, asserting (without citing data) that disapproval of the exploitation of erotic capitalalmost invariably comes from people who are remarkably unattractive and socially clumsy” (p246).

I will not stoop to respond to this schoolyard-tier substitution of personal abuse for rational debate (roughly, ‘if you disagree with me it’s only because you’re ugly!’), save to comment that the important question is not whether such people are ugly – but rather whether they are right.

Defending women’s exploitation of the male sexual drive, Hakim protests:

“Apparently it is fine for men to exploit any advantage they have in wealth or status, but rules are invented to prevent women exploiting their advantage in erotic capital” (p149).

However, this ignores the fact that, whereas men’s greater earnings are a consequence of the fact that they work longer hours, for a greater proportion of their adult lives, in more dangerous and unpleasant working conditions, women’s greater level of sex appeal merely reflects their good fortune in being born female.

Yet Hakim denies erotic capital is “entirely inherited”, instead insisting:

“All aspects of erotic capital can be developed, just like intelligence”.[13]

However, no amount of make-up, howsoever skillfully applied, can disguise excessively irregular features, and even expensive plastic surgery and silicone enhancements are recognized as inferior to the real thing.

Moreover, even Hakim would presumably be hard-pressed to deny that the huge advantages attendant on being born female are indeed “entirely inherited”. Indeed, even men who undergo costly gender reassignment surgery are rarely as attractive as even the average woman.

However, Hakim insists that:

“Women generally have higher erotic capital than men because they work harder at it” (p244).

Here, I suspect Hakim has her causation precisely backwards. In fact, women work harder at being attractive (e.g. applying makeup, spending copious amounts of money on clothes, jewelry etc.) precisely because they rightly realize that good looks have bigger pay-offs for women than for men.

Indeed, Hakim herself admits:

“Even if men and women had identical levels of erotic capital, the male sex deficit automatically gives women the upper-hand in private relationships” (p244).[14]

A Darwinian perspective suggests that both women’s greater erotic capital and the male sex deficit result ultimately from the fact that females biologically make a greater investment in offspring and therefore represent the limiting factor in mammalian reproduction.

In short, no amount of hard work will grant to men the sexual power conferred upon women simply by virtue of their fortune in being born as a member of the privileged sex.

Disadvantage, Discrimination and Double-Standards

Given that she believes erotic capital can be enhanced through the investment of time and effort, Hakim denies that the advantages accruing to attractive people are in any way unfair or discriminatory. Similarly, she does not regard the advantages accruing to women on account of their greater erotic capital – such as their greater ability to ‘marry up’ (‘hypergamy’) or earn lucrative salaries in the sex industry – as unfair.

However, oddly, Hakim is all too ready to invoke the malign spectre of ‘discrimination’ on those rare occasions where inequality of outcome seemingly benefits men over women.

Thus, Hakim gripes that:

“The entertainment industry… currently recognizes and rewards erotic capital more than any other industry. However, here too there is an unfair bias against women that leads to lower rewards for higher levels of erotic capital than are observed for men. In Hollywood, male stars earn more than female stars, even though female stars do the same work, but going ‘backwards and in high heels’” (p231).

Oddly, however, Hakim neglects to observe that, in Hollywood’s next-door neighbour, the pornographic industry, female performers earn more than men, and the disparity is much greater, affecting all performers, not just A-list stars.

This is despite the fact that, in the very same paragraph quoted above, she acknowledges in parentheses that the “entertainment industry… includes the commercial sex industry” (p231).

Neither does Hakim note that, as discussed by Warren Farrell in Why Men Earn More (reviewed here):

“Top women models earn about five times more, that is, about 400% more, than their male ‘equivalent’. Put another way, men models earn about 20% of the pay for the same work” (Why Men Earn More: p97-8).

Hakim rightly decries the fact that:

“The concept of discrimination is too readily applied in situations where there is differential treatment or outcomes. In many cases, there are simple explanations for such outcomes that do not involve unfair favoritism or intentional bias” (p131-2).

Yet, oddly, despite this wise counsel, Hakim fails to follow her own advice, being all too ready to invoke discrimination as an explanation, especially malign patriarchal discrimination, wherever she finds women at a seeming disadvantage.

For example, many studies find that more physically attractive people earn somewhat higher salaries, on average, than do relatively less attractive people (e.g. Scholz & Sicinski 2015).

However, perhaps surprisingly, the wage premium associated with good looks is generally found to be somewhat greater for males than for females (e.g. Frieze, Olson & Russell 1991).[15]

This is, for Hakim, a form of “hidden sex discrimination” (p194). Thus, she protests:

“Attractive men receive a larger beauty premium than do women. This is clear evidence of sex discrimination, especially as all studies show women score higher than men on attractiveness scales” (p246).

At first glance, it may indeed seem anomalous that the wage premium associated with physical attractiveness is rather greater for men than for women. However, rather than rushing to invoke the malign spectre of sexual discrimination, a simpler explanation is readily at hand.

Perhaps relatively more attractive women simply reduce their efforts in the workplace because other means of social advancement are opened up to them by virtue of their physical attractiveness – not least marriage.

After all, as Hakim herself emphasizes elsewhere in her book:

“The marriage market remains an avenue for upward social mobility long after the equal opportunities revolution opened up the labor market to women. All the evidence suggests that both routes can be equally important paths to social status and wealth for women in modern societies” (p142).

Therefore, rather than expending effort to advance herself through her career, a young woman, especially an attractive young woman, may instead focus her attention on marriage as a form of advancement. As the redoubtable HL Mencken put it in his book In Defense of Women:

“The time is too short and the incentive too feeble. Before the woman employee of twenty-one can master a tenth of the idiotic ‘knowledge’ in the head of the male clerk of thirty, or even convince herself that it is worth mastering, she has married the head of the establishment or maybe the clerk himself, and so abandons the business” (In Defense of Women: p70).

Or, as Matthew Fitzgerald puts it in his delightfully subtitled Sex-ploytation: How Women Use Their Bodies to Extort Money From Men:

“It takes far less effort to warm the bed of a millionaire than to earn a million dollars yourself” (Sex-ploytation: p10).

In short, why work for money when you have the easier option of marrying it instead?

Moreover, evidence suggests that relatively more physically attractive women are indeed able to marry men with higher levels of income and accumulated capital than are relatively less physically attractive women (Elder 1969; Hamermesh and Biddle 1994; Udry & Eckland 1984).

Indeed, some of the same studies that show the lesser benefits of attractiveness for women in terms of earnings and occupational advancement also show greater benefits for women in terms of marriage prospects (e.g. Elder 1969; Udry & Eckland 1984).

Thus, psychologist Nancy Etcoff writes, in her book Survival of the Prettiest (which I have reviewed here):

“The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (Survival of the Prettiest: p65).

Yet, in stark contrast, as even Hakim herself acknowledges, ‘marrying up’ is not an option for even the handsomest of males simply because:

“Even highly educated women with good salaries seek affluent and successful partners and refuse to contemplate marrying down to a lower-income man (unlike men)… Even today, most women admit that their goal was always to marry a higher-earning man, and most achieve their goal” (p141).[16]

In short, it seems that Hakim regards any advantage accruing to women on account of their greater erotic capital as natural and legitimate, not to mention fair game for women to exploit to the full and at the expense of men.

However, in those rare instances where sexual attractiveness seemingly benefits men more than it does women, this advantage is then necessarily attributed by Hakim to a “hidden sex discrimination” and hence viewed as inherently malign.

Are Women Wealthier Than Men?

Hakim claims that the importance of what she calls erotic capital has been ignored or overlooked due to what she claims is “the patriarchal bias in social science” (p75).

As anyone remotely familiar with the current state of the social sciences should be all too aware, there is little evidence of any “patriarchal bias in social science”. On the contrary, for over half a century at least, the social sciences have been heavily infested with feminism.

My own view is almost the opposite of Hakim’s – namely, it is not “patriarchal bias”, but rather feminist bias, that has led social scientists to ignore the importance of sexual attractiveness in social and economic relations – because feminists, in their efforts to portray women as a disadvantaged and oppressed group, have felt the need to ignore or downplay women’s sexual power over men.

In fact, although Hakim accuses them of being unwitting agents of patriarchy, feminists have probably been wise to play down women’s sexual power over men – because once this power is admitted, the fundamental underlying premise of feminism, namely that women represent an oppressed group, is exposed as fallacious.

Indeed, much of the data reviewed by Hakim herself inadvertently proves precisely this.

For example, Hakim observes that:

“The marriage market remains an avenue for upward social mobility long after the equal opportunities revolution opened up the labour market to women. All the evidence suggests that both routes can be equally important paths to wealth for women in modern societies” (p142).

As a consequence, she notes that:

“There are more female than male millionaires in a modern country such as Britain. Normally, men can only make their fortune through their jobs and businesses. Women achieve the same wealthy lifestyle and social advantages through marriage as well as through career success” (p24).

“There are more female than male millionaires in Britain. Some women get rich through their own efforts, while others are wealthy widows and divorcées who married well” (p142).

Here, though, I suspect Hakim actually downplays the extent of the gender differential. Certainly, she is right in observing that “normally, men can only make their fortune through their jobs and businesses” and hence that:

“Handsome men who marry into money are still rare compared to the numbers of beautiful women who do this” (p24).

However, while she is right that “some women get rich through their own efforts, while others are wealthy widows and divorcées who married well”, I suspect she is exaggerating when she claims “both routes can be equally important paths to wealth for women in modern societies”.

In fact, while many women become rich through marriage or inheritance, self-made millionaires seem to be overwhelmingly male.

Thus, most self-made millionaires make their fortunes through business and investment. However, as Warren Farrell observes in his excellent Why Men Earn More (reviewed here and here), whereas feminists blame the lower average earnings of women as compared to men on discrimination by employers, in fact, among the self-employed and business owners, where discrimination by employers is not a factor, the disparity in earnings between men and women is even greater than among employees.

Thus, Farrell reports:

“When there was no boss to ‘hold women back’, women who owned their own businesses netted, at the time (1970s through 1990s) between 29% and 35% of what men netted; today, women who own their own businesses net only 49% of their male counterparts’ net earnings” (Why Men Earn More: pxx).

On the other hand, focussing on the ultra-rich, in the latest 2023 Forbes 400 list of the richest Americans, there are only sixty women, just fifteen percent of the total, of whom only twelve (i.e. just twenty percent) are, Forbes magazine reports, ‘self-made’, in contrast to fully seventy percent of the men in the list.

None of the six richest women on the list seem to have played any part in accumulating their own wealth, each either inheriting it from a deceased father or husband, or expropriating it from their husbands in the divorce courts.[17]

As Ernest Belfort Bax wrote over a century ago, in collaboration with an anonymous Irish jurist, in The Legal Subjection of Men (which I have reviewed here):

“The bulk of women’s property, in 99 out of every 100 cases, is not earned by them at all. It arises from gift or inheritance from parents, relatives, or even the despised husband. Whenever there is any earning in the matter it is notoriously earning by some mere man or other. Nevertheless, under the operation of the law, property is steadily being concentrated into women’s hands” (The Legal Subjection of Men: p9).

This, of course, suggests that it is men rather than women who should be campaigning for ‘equal opportunity’, because, whereas most traditionally male careers are now open to both sexes, the opportunity to advance oneself through marriage remains almost the exclusive preserve of women, since, as Hakim herself acknowledges:

“Even highly educated women with good salaries seek affluent and successful partners and refuse to contemplate marrying down to a lower-income man (unlike men)” (p141).

Women also have other career opportunities available to them that are largely closed to men, or at least to heterosexual men – namely, careers in the sex industry.

Yet such careers can be highly lucrative. Thus, Hakim herself reports that:

“Women offering sexual services can earn anywhere between twice and fifty times what they could earn in ordinary jobs, especially jobs at a comparable level of education” (p229).

Yet men are not only denied these easy and lucrative means of financial enrichment but are also driven by what Hakim calls the ‘male sex deficit’ to spend a large portion of whatever wealth they can acquire attempting to buy the sexual services and affection of women, whether through paying sex workers or through conventional courtship.

Thus, as I have written previously:

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

As a consequence, despite working fewer hours, for a lesser proportion of their adult lives, in safer and more pleasant working environments, women are estimated by researchers in the marketing industry to control around 80% of consumer spending.

Yet Hakim goes even further, arguing that both what she calls the ‘male sex deficit’ and the greater levels of erotic capital possessed by women place women at an advantage over men in all their interactions with one another, on account of what she refers to as ‘the principle of least interest’.

In other words, since men want sex with women more than women want sex with men, all else being equal, women almost always have the upper-hand in their relationships with men.[18]

Indeed, Hakim goes so far as to claim that men are condemned to a:

“Semi-permanent state of sexual desire and frustration… Suppressed and unfulfilled desires permeate all of men’s interactions with women” (p228).

Yet, here, Hakim surely exaggerates.

Indeed, to take Hakim’s words literally, one would almost be led to believe that men walk around with permanent erections.

I doubt any man is ever really consumed with overwhelming “suppressed and unfulfilled desires” when conversing with, say, the average fat middle-aged woman in the contemporary west. Indeed, even when engaging in polite pleasantries, routine conversation, or even mild flirtation with genuinely attractive young women, most men are capable of maintaining their composure without visibly salivating or contemplating rape.

Yet, for all her absurd exaggeration, Hakim does have a point. Indeed, she calls to mind Camille Paglia’s memorable and characteristically insightful description of men as:

“Sexual exiles… [who] wander the earth seeking satisfaction, craving and despising, never content. There is nothing in that anguished motion for women to envy” (Sexual Personae: p19).

Therefore, Hakim is right to claim that, by virtue of ‘the principle of least interest’, women generally have the upper-hand in interactions with men.

Indeed, her conclusions are dramatic and – though she seemingly does not fully appreciate their implications – directly contradict and undercut the underlying premise of feminism, namely that women are disadvantaged as compared to men.[19]

Thus, she observes that:

“At the national level, men may have more power than women as a group – they run governments, international organizations, the biggest corporations and trade unions. However, this does not automatically translate into men having more power at the personal level. At this level, erotic capital and sexuality are just as important as education, earnings and social networks… Fertility… further enhances women’s power” (p245).

On the contrary, she concludes:

“In societies where men retain power at the national level, it is entirely feasible for women to have greater power… for private relationships” (p245).

Yet women’s power over their husbands, and women’s sexual power over men in general, also confers upon women both huge economic power and even indirect political power, especially given that men, including powerful men, have a disposition to behave chivalrously and protectively towards women.

Thus, one is reminded of Arthur Schopenhauer’s observation, in his brilliant, celebrated and infinitely insightful essay On Women, of how:

“Man strives in everything for a direct domination over things, either by comprehending or by subduing them. But woman is everywhere and always relegated to a merely indirect domination, which is achieved by means of man, who is consequently the only thing she has to dominate directly” (Schopenhauer, On Women).

Indeed, in this light, we could do no better than to contemplate, in relation to our own cultures, the question Aristotle posed of the ancient Spartans over two thousand years ago:

“What difference does it make whether women rule, or the rulers are ruled by women?” (Aristotle, Politics II).

References

Alexander & Fisher (2003) Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality, Journal of Sex Research 40(1): 27-35.
Bateman (1948), Intra-sexual selection in Drosophila, Heredity 2 (Pt. 3): 349-368.
Baumeister & Vohs (2004) Sexual Economics: Sex as Female Resource for Social Exchange in Heterosexual Interactions, Personality and Social Psychology Review 8(4): 339-363.
Baumeister & Twenge (2002) Cultural Suppression of Female Sexuality, Review of General Psychology 6(2): 166-203.
Brewer, Garrett, Muth & Kasprzyk (2000) Prostitution and the sex discrepancy in reported number of sexual partners, Proceedings of the National Academy of Sciences USA 97(22): 12385-12388.
Buss (1989) Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures, Behavioral and Brain Sciences 12(1): 1-14.
Buss, Larson, Westen & Semmelroth (1992) Sex Differences in Jealousy: Evolution, Physiology, and Psychology, Psychological Science 3(4):251-255.
Elder (1969) Appearance and education in marriage mobility. American Sociological Review, 34, 519-533.
Frieze, Olson & Russell (1991) Attractiveness and Income for Men and Women in Management, Journal of Applied Social Psychology 21(13): 1039-1057.
Hamermesh & Biddle (1994) Beauty and the labor market, American Economic Review 84: 1174-1194.
Kanazawa (2011) Intelligence and physical attractiveness. Intelligence 39(1): 7-14.
Kanazawa and Still (2018) Is there really a beauty premium or an ugliness penalty on earnings? Journal of Business and Psychology 33: 249-262.
Scholz & Sicinski (2015) Facial Attractiveness and Lifetime Earnings: Evidence from a Cohort Study, Review of Economics and Statistics (2015) 97 (1): 14–28.
Trivers (1972) Parental investment and sexual selection. In B. Campbell (Ed.) Sexual Selection and the Descent of Man, 1871-1971 (pp 136-179). Chicago: Aldine.
Udry and Eckland (1984) Benefits of being attractive: Differential payoffs for men and women. Psychological Reports, 54: 47-56.
Wilson & Daly (1992) The man who mistook his wife for a chattel. In: Barkow, Cosmides & Tooby, eds. The Adapted Mind, New York: Oxford University Press,1992: 289-322.


[1] Both editions appear to be largely identical in their contents, though I do recall noticing a few minor differences. Page numbers cited in the current review refer to the former edition, namely Honey Money: The Power of Erotic Capital, published in 2011 by Allen Lane, which is the edition of which this post is a review.

[2] One is inevitably reminded here of Richard Dawkins’s ‘First Law of the Conservation of Difficulty’, whereby, as Dawkins not inaccurately observes, ‘obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity’.

[3] In this context, it is interesting to note that Arnold Schwarzenegger and other bodybuilders with extremely muscular physiques do not seem to be generally regarded as especially handsome and attractive by women. Anecdotally, women seem to prefer men of a more lean and athletic physique, in preference to the almost comically exaggerated musculature of most modern bodybuilders. As Nancy Etcoff puts it in Survival of the Prettiest (reviewed here), women seem to prefer:

“Men [who] look masculine but not exaggeratedly masculine” (Survival of the Prettiest: p159).

In writing this, Etcoff seemed to have in mind primarily male facial attractiveness. However, it seems to apply equally to male musculature. For more detailed discussion on this topic, see here.

[4] Although I here attribute beautiful women’s unpopularity among other women to jealousy on the part of the latter, there are other possible explanations for this phenomenon. As I discuss in my review of Etcoff’s book (available here), another possibility is that beautiful women are indeed simply less likeable in terms of their personality. Perhaps, having grown accustomed to being fawned over and receiving special privileges on account of their looks, especially from men, they gradually become, over time, entitled and spoilt, something that is especially apparent to other women, who are immune to their physical charms.

[5] Hakim mentions evolutionary psychology as an approach, to my recollection, only once, in passing, in the main body of her text. Here, she associates the approach with ‘essentialism’, a scare-word and straw man employed by social scientists to refer to biological theories of sex and race differences, which Hakim herself defines as referring to “a specific outdated theory that there are important and unalterable biological differences between men and women”, as indeed there undoubtedly are (p88).
Evolutionary psychology as an approach is also mentioned, again in passing, in one of Hakim’s endnotes (p320, note 22). As mentioned above, Hakim also cites several studies conducted by evolutionary psychologists to test specifically evolutionary hypotheses (e.g. Kanazawa 2011; Buss 1989). Therefore, it cannot be that Hakim is simply unaware of this active research programme and theoretical approach.
Rather, it appears that she either does not understand how Bateman’s principle both anticipates, and provides a compelling explanation for, the phenomena she purports to uncover (namely, the ‘male sex deficit’ and the greater ‘erotic capital’ of women); or that she disingenuously decided not to discuss evolutionary psychology and sociobiology precisely because she recognizes the extent to which they deprive her own theory of its claims to originality.

[6] Due to greater male mortality and the longer average lifespan of women, there are actually somewhat more women than men in the adult population. However, this is not sufficient to account for the disparity in the number of sex partners reported in sex surveys, especially since the disparity becomes more pronounced only in older cohorts, who tend to be less sexually active. Indeed, since female fertility is more tightly constrained by age than is male fertility, the operational sex ratio may actually reveal a relative deficit of fertile females.

[7] Before the role of men in impregnating women was discovered, and in those premodern societies where “this idea never emerged”, there was, Hakim reports, ‘free love’ and rampant promiscuity, sexual jealousy presumably being unknown (p79). Of course, we have heard these sorts of ideas before, not least in the discredited Marxian concept of ‘primitive communism’ and in Margaret Mead’s famous study of adolescence in Samoa. Unfortunately, however, Mead’s claims have been thoroughly debunked, at least with regard to Samoan culture. Indeed, it is notable that, in the examples of such premodern cultures supposedly practising ‘free love’ that are cited by Hakim, Samoa is conspicuously absent.

[8] This error is analogous to the so-called ‘Sahlins fallacy’, so christened by Richard Dawkins in his paper ‘Twelve misunderstandings of kin selection’, whereby celebrated cultural anthropologist (and left-wing political activist) Marshall Sahlins, in his book The Use and Abuse of Biology (reviewed here), assumed that, for humans, or other animals, to direct altruism towards biological relatives proportionate to their degree of relatedness as envisaged by kin selection and inclusive fitness theory, they must necessarily understand the mathematical concept of fractions.

[9] Only in respect of homosexuality, especially male homosexuality, are these attitudes oddly reversed. Here, women are more accepting and tolerant, whereas men are much more likely to disapprove of and indeed be repulsed by the idea of male homosexuality in particular (though heterosexual men often find the idea of lesbian sex arousing, at least until they witness for themselves what most real lesbian women actually look like).

[10] Thus, Hakim herself observes that, under Christian morality:

“Celibacy was praised as admirable, then enforced on Catholic priests, monks, and nuns” (p80).

[11] If long-term and short-term sexual relationships both serve similar functions for men – namely, as a means of obtaining regular sexual intercourse – perhaps women do indeed conceive of such relationships as representing entirely separate marketplaces, since, unlike for heterosexual men, short-term commitment-free sex is much easier for women to obtain than is a long-term relationship. This might then explain Hakim’s assumption that the two markets are entirely separate, since, being herself a woman, this is how she has always perceived them.
However, I suspect that, even for women, the two spheres are not entirely conceptually separate. For example, women sometimes enter short-term commitment-free sexual relationships with men, especially high-status men, in the hope that such a relationship might later develop into a long-term romantic relationship.

[12] Besides the risk of criminal prosecution, the costs for suppliers associated with criminalization include the inability of suppliers to resort to legal mechanisms either for protection or to enforce contracts. This is among the reasons that, in many jurisdictions where prostitution is criminalized, both prostitutes and their clients are at considerable risk of violence, including extortion, blackmail, rape and robbery. It is also why suppliers often turn instead to other means of protection, providing an opening for organized crime elements.

[13] In fact, it is a fallacy to suggest that because something can be enhanced or improved by “time and effort”, this means it is not “entirely inherited”, since the tendency to successfully devote “time and effort” to self-improvement is at least partly a heritable aspect of personality, associated with the personality factor identified by psychometricians as conscientiousness. Behavioural dispositions are, in principle, no less heritable than morphology.

[14] This, of course, implies that the greater female level of ‘erotic capital’ is separable from the ‘male sex deficit’, when, in reality, as I have already discussed, the ‘male sex deficit’ provides an obvious explanation for why women have greater sex appeal, since, as Hakim herself acknowledges:

“It is impossible to separate women’s erotic capital, which provokes men’s desire… from male desire itself” (p97).

[15] Although there is a robust and well-established correlation between attractiveness and earnings, this does not necessarily prove that it is attractiveness itself that causes attractive people to earn more. In particular, Kanazawa and Still argue that more attractive people also tend to be more intelligent, and to have other personality traits that are themselves associated with higher earnings (Kanazawa and Still 2018).

[16] Indeed, more affluent women are actually even more selective regarding the socio-economic status that they demand in a prospective partner, preferring partners who are even higher in socioeconomic status than they are themselves (Wiederman & Allgeier 1992; Townsend 1989).
This, of course, contradicts the feminist claim that women only aspire to marry up because, due to supposed discrimination, ‘patriarchy’, male privilege and other feminist myths, women lack the means to advance in social status through occupational means.
In fact, the evidence implies that the feminists have their causation exactly backwards. Rather than women looking to marriage for social advancement because they lack the means to achieve wealth through their careers due to discrimination, instead the better view is that women do not expend great effort in seeking to advance themselves through their careers precisely because they have the easier option of achieving wealth and privilege by simply marrying into it.
Unfortunately, the fact that even women with high salaries and of high socioeconomic status insist on marrying men of similarly high, or preferably even higher, socioeconomic status than themselves means that feminist efforts to increase the number of women in high-status occupations, including by methods such as affirmative action and other forms of overt and covert discrimination against men, also have the secondary effect of reducing rates of marriage and hence of fertility. The higher the socioeconomic status and earnings of women, the fewer men there are of the same or higher socioeconomic status for them to marry, particularly as other high-status, high-income occupations are themselves increasingly occupied by women. This may be one major causal factor underlying one of the leading problems facing developed economies today, namely their failure to reproduce at replacement levels, and is one of many reasons we must stridently oppose such feminist policies.

[17] Of course, being ‘self-made’ is a matter of degree. Among the six richest women in America listed by Forbes, the only ambiguous case, who might have some claim (albeit very weak) to having herself earned some small part of her own fortune, rather than merely inherited it, is the sixth richest woman in America, Abigail Johnson, who is currently CEO of the company established by her grandfather and formerly run by her father. Although she certainly did not build her own fortune, but rather very much inherited it, she nevertheless has been involved in running the family business that she inherited. The five richest women in America, in contrast, have no claim whatsoever to having earned their own fortunes. On the contrary, all seemingly inherited their wealth from male relatives (e.g. husbands, fathers), except for the former wife of Jeff Bezos, who instead expropriated the monies of her husband through divorce. According to Forbes, the richest ‘self-made’ woman on the list is the seventh richest woman in America, and thirty-eighth richest person overall, Diane Hendricks. However, since she founded the company upon which her fortune is built with her then-husband, it is reasonable to suppose, given the rarity of ‘self-made’ female millionaires, that he in fact played the decisive role in establishing the family’s wealth.

[18] Actually, however, the situation is more complex. While men certainly want sex more than women do, especially promiscuous sex outside a committed relationship, women surely have a greater desire for long-term, committed, romantic relationships than men do. This complicates the calculus with respect to who has the least interest in a given relationship.
On the other hand, however, the reason why women have a strong desire for long-term committed romantic relationships is, at least in part, the financial benefits and security with which such relationships typically provide them. These one-sided benefits are, of course, further evidence that women do indeed have the upper-hand in their relationships with men, even, perhaps especially, in long-term committed relationships.
Yet men can also obtain sex outside of committed relationships, not least through prostitutes. But the very fact that heterosexual prostitution almost invariably involves the man paying the woman for sex, rather than vice versa, is, of course, further proof that women do indeed have the upper-hand, on account of ‘the principle of least interest’.

[19] A full understanding of the extent to which women’s sexual power over men confers upon them an economically privileged position is provided by several works pre-dating Hakim’s own, namely Esther Vilar’s The Manipulated Man (which I have reviewed here), Matthew Fitzgerald’s delightfully subtitled Sex-Ploytation: How Women Use Their Bodies to Extort Money from Men, R.B. Tobias and Mary Marcy’s forgotten early twentieth-century Marxist-masculist masterpiece Women As Sex Vendors (which I have reviewed here) and Warren Farrell’s The Myth of Male Power (which I have reviewed here and here).

A Rational Realist Review of Matt Ridley’s ‘The Rational Optimist’

Matt Ridley, The Rational Optimist (London: Fourth Estate, 2011)

Evolutionary psychology and sociobiology are fields usually associated with cynicism about human nature and skepticism regarding our capacity to change this fundamental nature in order to produce the utopian societies envisaged by Marxists, feminists and other such hopeless idealists.

It is therefore perhaps surprising that several popular science writers formerly known for writing books about evolutionary psychology have recently turned their pens to a very different topic – namely, that of human progress, and, in the process, concluded that, not only is societal progress real, but also that it is likely to continue in the foreseeable future.

Robert Wright, author of The Moral Animal, was the trailblazer back in 1999, with his ambitiously titled Nonzero: The Logic of Human Destiny, which argued that human history (and indeed evolutionary history as well) is characterized by progressive increases in the levels of non-zero-sum interactions, resulting in increased cooperation and prosperity.

Meanwhile, the latest aboard this particular bandwagon is the redoubtable Steven Pinker, whose books The Better Angels of Our Nature, published in 2011, and Enlightenment Now, published seven years later in 2018, both deal with societal progress, the former focusing on supposed declines in levels of violence, while the latter is more general in its themes.

Ridley’s ‘The Rational Optimist’, first published just a year before Pinker’s The Better Angels of Our Nature, is likewise general in its theme, but focuses primarily on improvements in living standards.

Ridley argues that, not only is human progress real, but that it has, a few temporary blips and hiccups apart, occurred throughout virtually the entirety of human history and is in no danger of stalling or slowing down, let alone going into reverse any time soon.

From Futurology to History

For a book whose ostensible theme is optimism regarding the future, Ridley spends an awful lot of his time talking about the past. Thus, most of his book is not about the probability of progress in the future, but rather the certainty of its occurrence during much of our past.

We have a tendency to look back on the past with nostalgia, as a ‘Golden Age’ or ‘Lost Eden’. In reality, however, the life of the vast majority of people in all periods prior to the present was, to adopt the phraseology of Thomas Hobbes, ‘nasty, brutish and short’ compared to our lives today.

As Ridley bluntly observes:

“It is easy to wax elegiac for the life of a peasant when you do not have to use a long-drop toilet” (p12).

Although we all habitually moan about rising prices, in fact, he argues, almost everything worth having has become cheaper, at least when one measures prices, not in dollars, cents or euros (which is, of course, misleading because it fails to take into account inflation and other factors), but rather in what Ridley regards as their true cost – namely the hours of human labour required to fund the purchase.
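To make the arithmetic explicit (the formalization and figures that follow are my own illustration, not Ridley’s), the ‘true cost’ of a good on this measure is simply its money price divided by the prevailing hourly wage:

\[ \text{cost in labour time} = \frac{\text{money price}}{\text{hourly wage}}, \qquad \text{e.g.} \quad \frac{\$60}{\$15/\text{hour}} = 4 \text{ hours}. \]

On this measure, a good becomes cheaper whenever wages rise faster than its money price, even if the money price never falls in nominal terms – which is precisely why comparisons of nominal prices across decades or centuries are so misleading.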

Indeed, Ridley claims:

“Even housing has probably gotten cheaper too… The average family house probably costs slightly less today than it did in 1900 or even 1700, despite including far more modern conveniences like electricity, telephone and plumbing” (p20).

Moreover, he insists:

“Housing… is itching to get cheaper, but for confused reasons governments go to great lengths to prevent it” (p25).

In Britain, he protests, the main problem is “planning and zoning laws” – the laws and regulations that prevent developers from simply buying up land and putting up housing estates and tower blocks in much of the countryside and green belt (p25).

Unfortunately, however, Britain is a small island, and, in the precise places where there is greatest demand for new housing (i.e. the South-East), it is already quite densely populated.[1]

Giving developers a free hand to put up new housing estates on what little remains of Britain’s countryside is a strange proposed solution to rising housing prices for someone who, elsewhere in his book, claims to “like wilderness” (p239). It is certainly a policy unlikely to find support among environmentalists, or indeed anyone concerned about protecting what remains of our once ‘green and pleasant land’.

Ridley is certainly right that there is a shortage of available housing in the UK, owing to both:

  1. The greater number of people divorcing, separating, or never marrying or cohabiting in the first place, and hence requiring separate accommodation; and
  2. A rising population.

Yet, with fertility rates in Britain having been at well below replacement levels since the 1970s, the increase in population that is occurring is entirely a product of inward migration from overseas.

However, rather than destroying what remains of Britain’s countryside in order to provide additional housing for ever increasing numbers of immigrants, perhaps the more sustainable solution is not more housing, but rather fewer people (see below).

Pollution

Ridley is on firmer ground in claiming, again contrary to popular opinion and environmentalist dogma, that, at least in developed western economies, pollution has actually diminished over the course of the twentieth century.

Thus, smog was formerly quite common in many British cities such as London until as recently as the Sixties, but is now all but unknown in the UK.

Indeed, Ridley reports how, in a typical case of media scaremongering:

“In 1970, Life magazine promised its readers that scientists had ‘solid experimental and theoretical evidence’ that ‘within a decade, urban dwellers will have to wear gas masks to survive air pollution … by 1985 air pollution will have reduced the amount of sunlight reaching earth by one half.’ Urban smog and other forms of air pollution refused to follow the script, as technology and regulation rapidly improved air quality” (p304).

On the other hand, however, while air quality may indeed have greatly improved in advanced Western economies such as Western Europe and North America, the direction of change in much of the so-called ‘Developing World’ has been very different, precisely because much of the Developing World has indeed so rapidly economically developed.

Moreover, a case can be made that improvements in air quality in the west have been possible only because developed western economies have outsourced much of their industrial production, and, with it, much of their pollution, to developing economies overseas, where labour is cheaper and environmental protection regulations much laxer, and where many of the goods consumed in western economies are now manufactured.

This means that, while Britain may have reduced its carbon emissions, albeit at the cost of abandoning its manufacturing base and thereby crippling its economy with unnecessary environmental regulations, this will have had no effect whatsoever in reducing the pace of climate change, let alone reversing it, since any decrease in carbon emissions emanating from the west is more than offset by increases in carbon emissions emanating from the Developing World.

It also suggests that, while parts of the Developing World have indeed imitated the West in industrializing, and hence experiencing declining levels of air quality, they will not be successful at imitating the West in ‘deindustrializing’, and hence improving air quality, unless they too are able to outsource their industrial production to other parts of the ‘Developing World’ that have yet to ‘develop’. But, in the end, we will run out of places to ‘develop’.

Thus, when I was a child we were taught in school (or perhaps politically propagandized at) about how wonderfully environmentally friendly the communist Chinese were because, instead of driving cars to work, they all rode bicycles, and we were shown remarkable photographs from Chinese cities with hundreds of Chinese people cycling to work during rush-hour.

Now, however, with increasing levels of wealth, industrialization and development, the Chinese have largely abandoned bikes for cars, and Chinese cities seemingly have as big a problem with smog and air quality as Britain did in the early twentieth century. There are similar problems regarding air pollution in many other cities across the Developing World, especially in Southeast Asia.

Yet a case can be made that even cars themselves represented an environmental improvement. Thus, before the spread of the much-maligned motor car, a major source of pollution was the emissions emitted by the form of transport that preceded the motor car – namely, horses.

Thus, in the late nineteenth and early twentieth centuries, the streets of major cities were said to be fast disappearing under rising mountains of horse dung, and the motor car was initially hailed as an “environmental savior” (SuperFreakonomics: p15).

Indeed, automobiles have themselves become less polluting over time.

The removal of lead from fuel is well-known, and may even have contributed to declining levels of violent crime, but Ridley goes further, also claiming, rather remarkably, that:

“Today, a car emits less pollution travelling at full speed than a parked car did in 1970 from leaks” (p17).

However, Ridley’s sources for this claim are rather obscure and difficult to verify.

Elaborating on his source for this claim in a blog post on his website, he cites a book by Johan Norberg, När Människan Skapade Världen, written in Swedish and apparently unavailable in English translation, together with a blog post by Henry Payne, published at National Review, which, in turn, cites an article from the motoring magazine, Autoweek, that does not currently seem to be accessible online.

Moreover, investigating his sources more closely, it appears that Ridley’s reference to “a car” from today and “a parked car” from 1970 means just that – one particular model from each era (namely, the 1970 and 2010 Ford Mustangs).

Whether this claim generalizes to other models is unclear (see Payne 2010; Ridley 2010).

Blips in History?

Ridley argues that progress has been long-standing, and that even the worst catastrophes in history were at most temporary setbacks.

Thus, during the Great Depression, Ridley readily concedes, living standards did indeed decline precipitously. However, he is at pains to emphasize, the Depression itself lasted barely a decade, and, once it was over, living standards soon recovered and thereafter surpassed even those enjoyed during the boom of the Roaring Twenties that immediately preceded it.

Ridley also argues against the view, fashionable among anthropologists, that hunter-gatherer cultures represented, in anthropologist Marshall Sahlins’s famous phrase, ‘the original affluent society’, and that the transition to agriculture actually paradoxically lowered living standards and reduced available leisure-time.

Indeed, not just the agricultural revolution but also the industrial revolution was, according to Ridley, associated with improved living standards.

The immediate aftermath of the industrial revolution is popularly associated with Dickensian conditions of poverty and child labour. However, according to Ridley, the industrial revolution was actually associated with improvements in living standards, not just for wealthy industrialists, but for society as a whole – indeed, even for what became the urban proletariat.

After all, he explains, the Victorian-era urban proletariat were, for the most part, the descendants of what had formerly been the rural peasantry, and, while the Dickensian conditions under which they lived and laboured in nineteenth century cities may seem spartan to us, compared to the conditions under which people laboured a generation or two before, they represented a marked improvement. This is why so many so gladly left their rural villages behind for the towns and cities.

On the other hand, however, the conventional view has it that, far from happily leaving rural villages behind because of superior living conditions offered in industrial cities, people were actually forced to leave because jobs were destroyed in the countryside by factors such as enclosure, the mechanization of agriculture and traditional cottage industries being outcompeted and destroyed by more efficient factory production in the cities.

On this view, while living conditions may indeed have been better in the cities than in the countryside at this time, this was only because job opportunities and living standards had declined so steeply in rural areas.

Yet, according to Ridley, the industrial revolution came to be associated with poverty and squalor, not because living standards declined, but simply because this was the first time that activists, campaigners, politicians and authors drew attention to the plight of the poor.

The reason for this change in attitudes was that this was the first time that society was sufficiently wealthy that it could afford to start doing something about the plight of the poor. This rising concern for the poor was therefore itself paradoxically a product of the increasing prosperity that the industrial revolution ushered in (p220).

Past Progress and the Problem of Induction

In a book ostensibly promoting optimism regarding the future, why then does Ridley spend so much time talking about the past?

The essence of his argument seems to be this: given all this improvement in the past, what reason is there to believe that the pattern will suddenly cease tomorrow?

Thus, he quotes the Whig historian Macaulay as demanding back in 1830:

“On what principle is it, that when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?” (p11).

Thus, Macaulay concluded:

“We cannot absolutely prove… that those are in error who tell us that society has reached a turning point, that we have seen our best days. But so said all who came before us, and with just as much apparent reason” (p287).

Unfortunately, this argument seems to be vulnerable to what philosophers call ‘the problem of induction’.

In short, the fact that something has long been occurring throughout the past is no reason to believe that it will continue occurring in the future, any more than, to quote a famous example, the fact that all the swans I have previously seen have proven to be white proves that I will not run into a black swan tomorrow.[2]

In other words, the fact that previous generations have always invented new technologies that improved standards of living, or discovered new energy sources before previously discovered ones were depleted, does not necessarily mean that future generations will be so fortunate.

In the end there might simply be no new technologies to invent or no new energy sources left to be discovered.

Self-Sufficiency vs Exchange

The only threat to continuing improvements in human living conditions across the world, in Ridley’s telling, is misguided governmental interference.

He attacks, in particular, several misguided but fashionable policy proposals.

First in Ridley’s firing line is what we might term the cult of self-sufficiency.

Following Adam Smith, Ridley believes that increasing prosperity is in large part a product of the twin processes of specialization and exchange.

These two processes go hand in hand.

On the one hand, it is only through exchange that we are able to specialize. After all, if we were unable to exchange the product of our own specialist labour for food, clothes and housing, then we would have to farm our own food, knit our own clothes and construct our own housing.

On the other hand, it is only because of specialization and the increased efficiency of specialists that exchange is so profitable.

Thus, Ridley is much taken with Ricardo’s law of comparative advantage, which he writes has been described as “the only proposition in the whole of the social sciences that is both true and surprising” (p75).
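A worked example, with numbers of my own invention rather than Ridley’s, may clarify why the proposition is surprising. Suppose Ann can produce either four loaves or two shirts in a day, while Bob can produce only one loaf or one shirt. Ann is better at both tasks, yet it still pays her to trade with Bob, because what matters is each party’s opportunity cost:

\[ \text{opportunity cost of one shirt} = \begin{cases} 2 \text{ loaves} & \text{(Ann)} \\ 1 \text{ loaf} & \text{(Bob)} \end{cases} \]

Bob therefore holds the comparative advantage in shirt-making. If he specializes in shirts and sells them to Ann at, say, one and a half loaves per shirt, Ann obtains shirts for 1.5 loaves that would have cost her 2 loaves of forgone baking, while Bob receives 1.5 loaves for a shirt that cost him only 1 loaf of forgone baking. Both are better off, even though Ann is the superior producer of everything – which is precisely the ‘true and surprising’ part.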

In contrast, self-sufficiency, whether at the individual or familial level (e.g. living off the land, growing your own food, building your own home, making your own clothes), or at the national level (autarky, protectionism, embargoes, tariffs on imports), is a sure recipe for perpetual poverty.[3]

Thus, making your own clothes now costs more than buying them in a store. Likewise, DIY may (or may not) be a fun and relaxing hobby, but for well-qualified people with high salaries, it may be a more efficient use of time and money to hire a specialist.

Indeed, even the recent, much-maligned trend towards eating out and buying takeaways instead of cooking for oneself may reflect the same process of increasing specialization first identified by Adam Smith.

Thus, Ridley, himself a large landowner and the heir to a peerage, observes that:

“You may have no chefs, but you can decide on a whim to choose between scores of nearby bistros, or Italian, Chinese, Japanese or Indian restaurants, in each of which a team of skilled chefs is waiting to serve your family at less than an hour’s notice. Think of this: never before this generation has the average person been able to afford to have somebody else prepare his meals” (p36-7).[4]

Environmentally-Unfriendly ‘Environmentalism’

Other misguided policies skewered by Ridley’s mighty pen include various fashionable environmentalist causes – or, rather, causes which masquerade as environmentally-friendly but are, in practice, as Ridley shows, anything but.

One fad that falls into the latter category is organic farming.

Organic farming is less efficient and more land-intensive than modern farming techniques. It requires more land to be converted to agricultural use, and hence the destruction of yet more rainforest and wilderness, while still producing much less food per acre.

Yet organic farming is not only bad for the environment; it is also especially bad for the poor, since it makes food more expensive, and it is the poor who, having less income to spend on luxuries, already spend a greater proportion of their income on food, and who will therefore suffer most.

Ridley applies much the same argument to biofuels. Again, these would require the use of more land for farming, depleting the amount of land that can be devoted either to the production of food, or to wildlife, resulting in increasing food prices and decreasing food production, with the global poor suffering the most.

In contrast, genetically modified foods promise to make the production of food cheaper, more efficient and less land intensive. Yet many self-styled environmentalists oppose them.

Why Fossil Fuels are Good for the Environment – and Renewables Bad

Perhaps most controversially, Ridley also argues that renewable energies are, paradoxically, bad for the environment. Again, this is because they are less efficient, and more land-hungry than fossil fuels.

Thus, he reports that to supply the USA alone with its current energy consumption would require:

“Solar panels the size of Spain; or wind farms the size of Kazakhstan; or woodland the size of India and Pakistan; or hayfields for horses the size of Russia and Canada combined; or hydroelectric dams with catchments one third larger than all the continents put together” (p239).

Meanwhile, to provide Britain with its current energy needs without fossil fuels would necessitate:

“Sixty nuclear power stations around the coasts, wind farms… cover[ing] 10 per cent of the entire land (or a big part of the sea)… solar panels covering an area the size of Lincolnshire, eighteen Greater Londons growing bio-fuels, forty-seven New Forests growing fast-rotation harvested timber, hundreds of miles of wave machines off the coast, huge tidal barrages in the Severn estuary and Strangford Lough, and twenty-five times as many hydro dams on rivers as there are today” (p343).

The prospect would hardly appeal to most environmentalists, certainly not to conservationists, since the result would be that:

“The entire country would look like a power station” (p343).

Yet, despite this, “power cuts would be frequent”, since tidal, wind and solar power are all sporadic in the energy they supply, being dependent on weather conditions. Ridley therefore concludes:

“Powering the world with such renewables now is the surest way to spoil the environment” (p343).

In contrast, fossil fuels are much less land hungry relative to the amount of energy they provide.

Therefore, he concludes that, contrary to popular opinion, “fossil fuels have spared much of the landscape from industrialization” and have hence proven an environmental boon (p238).

Only in respect of solar power does Ridley actually have rather higher hopes (p345). The sun’s power is indeed immense. We are limited only in our current ability to extract it.

Indeed, besides nuclear power, geothermal power and tidal energy, virtually all of our energy sources derive ultimately from the power of the sun.

The Industrial Revolution, Ridley proposes, was enabled by “shifting from current solar power to stored solar power” – and, since then, progress has involved the extraction of ever older stores of the sun’s power – i.e. timber, peat, coal and lastly oil and gas (p216).

Each development was an improvement on the energy source that preceded it, both in terms of efficiency and environmental impact. To turn once again to relying on more primitive sources of energy would, Ridley argues, be a step backwards in every sense.

How Fossil Fuels Freed the Slaves

Fossil fuels are not only better for the environment, Ridley argues; they are also better for mankind, and not merely in the sense that humans benefit from living in a better environment. There are other, more direct benefits to mankind as well. Indeed, according to Ridley, it was fossil fuels that ultimately freed the slaves.

Thus, Ridley’s chapter entitled ‘The Release of Slaves’ says refreshingly little about the familiar historical narrative of how puritanical Christian fundamentalist do-gooders and busybodies like William Wilberforce successfully campaigned for the abolition of slavery and thereby spoiled everybody’s fun.

Instead, Ridley shows that it was the adoption of fossil fuels that ultimately made freeing slaves possible by enabling technology to replace human labour – and indeed animal labour as well.

Thus, he reports:

“It would take 150 slaves, working eight-hour shifts each, to pedal you to your current lifestyle. Americans would need 660 slaves… For every family of four… there should be 600 unpaid slaves back home, living in abject poverty: if they had any better lifestyle they would need their own slaves” (p236).

Thus, Ridley concludes:

“It was fossil fuels that eventually made slavery – along with animal power, and wood, wind and water – uneconomic. Wilberforce’s ambition would have been harder to achieve without fossil fuels” (p214).[5]
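Ridley’s figure is at least of the right order of magnitude. As a back-of-envelope check (the assumptions here are mine, not Ridley’s), suppose a human can sustain roughly 75 watts of useful output at the pedals. With 150 slaves working in three eight-hour shifts, fifty are pedalling at any given moment, yielding a continuous supply of:

\[ 50 \times 75\,\text{W} \approx 3.75\,\text{kW}, \]

which is broadly comparable to per-capita primary energy consumption in a developed economy such as Britain (on the order of four to five kilowatts per person); the larger American figure of 660 slaves simply reflects America’s correspondingly higher per-capita consumption.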

Will the Oil Run Out?

As for the perennial fear that our demand for fossil fuels will ultimately exceed the supply, Ridley is unconcerned.

Fossil fuels may be non-renewable, he admits, but the potential supplies are still massive. Our only current problem is accessing them, buried ever deeper underground and in ever more inaccessible regions.

However, Ridley maintains that, one way or another, human ingenuity and technological innovation will find a way.

By the time they do run short, if they ever do, which they probably won’t, Ridley is confident we will have long since already discovered, or invented, a replacement.

In contrast, so-called renewable energy sources, such as wind and water power, while they may indeed be renewable, are nevertheless very limited in the power they supply, or at least in our capacity to extract it. There may indeed be great power in the wind, the waves and the sun, but it is very difficult, and costly, for us to extract anything more than a very small proportion of it.

This is, of course, one reason such technologies as windmills and watermills were largely abandoned in favour of fossil fuels over a century ago.

Many species, Ridley observes, have gone extinct, or are in danger of going extinct. Yet, since species are capable of reproduction, they are, Ridley argues, ‘renewable resources’.

In contrast, he observes:

“There is not a single non-renewable resource that has yet run out: not coal, oil, gas, copper, iron, uranium, silicon or stone… The stone age did not come to an end for lack of stone” (p302).

The Back to Nature Cult and the Naturalistic Fallacy

What then do these misguided fads – self-sufficiency, living off the land, organic food, renewable energies, opposition to GM crops etc. – all have in common?

Although Ridley does not address this, it seems to me that all the misguided policy proposals that Ridley excoriates have one or both of two things in common:

  1. They restrict the free operation of markets; and/or
  2. They seek to restrict new technologies that are somehow perceived as ‘unnatural’.

Thus, many of these misguided fads can be traced to a version of what philosophers call the naturalistic fallacy or, more specifically, the appeal to nature fallacy – namely the belief that, if something is ‘natural’, that necessarily means it is good.

Yet the lives of humans in our natural state, i.e. as nomadic foragers, were, as Hobbes rightly surmised, ‘nasty, brutish and short’, at least as compared to our lives today.[6]

Thus, renewable energy sources, biofuels and organic farming all somehow seem more ‘natural’ than burning, mining and drilling for coal, oil and gas.

Likewise, genetically modified crops (aka ‘Frankenstein foods’) seem quintessentially ‘unnatural’, with connotations of eugenics and ‘playing god’.

In fact, however, we have been genetically modifying domesticated species ever since we began domesticating them. Indeed, this is the very definition of domestication.

Moreover, organic farming and so-called renewable energies are not a return to what is ‘natural’ (whatever that means), but simply a return to technologies that were surpassed and rendered obsolete hundreds of years ago.

If anything, returning to what is natural would involve a return to subsisting by hunting and gathering, but not many environmentalists this side of the Unabomber are willing to go that far. Instead, they only want to turn back the clock so far.[7]

Similarly, nuclear power is rejected by most environmentalists primarily because it seems quintessentially ‘unnatural’, and because the very word ‘nuclear’ is, I suspect, associated in the public mind with nuclear weapons, invariably conjuring images of Hiroshima, Nagasaki and the prospect of nuclear apocalypse.[8]

Yet nuclear power is actually much less costly in terms of human lives than, say, coal mines or offshore oil rigs, both of which are extremely dangerous places to work.

Likewise, being self-sufficient and ‘living off the land’ may seem intuitively ‘natural’, in that it is the way our ancestors presumably lived in the stone age.

However, Ridley argues that this is not true, and that humans have been, for the entirety of their existence as a species, voracious traders.

Indeed, he even argues that it is humankind’s appetite for and ability to trade, rather than language or culture, that distinguishes us from the remainder of the animal kingdom (p54-60).[9]

Global Warming

Necessarily, Ridley also addresses perhaps the most popular, and certainly the most politically correct, source of pessimism regarding the future, namely the threat of global warming or climate change.

Identifying climate change as both “by far the most fashionable reason for pessimism” and, together with the prospects (or alleged lack of prospects) for economic development in Africa, as one of the “two great pessimisms of today”, Ridley begins his discussion of the topic by acknowledging that these problems “confront the rational optimist with a challenge, to say the least”, indeed that they represent “acute challenges” (p315).

Having made this acknowledgement, however, Ridley suggests in the remainder of his discussion that the threat posed by global warming is in fact vastly exaggerated.

Like most so-called global warming skeptics (e.g. Bjørn Lomborg), or at least the more intelligent, knowledgeable ones who are actually worth reading, Ridley is no ‘denier’, in that he denies neither that global warming is occurring nor that it is caused, at least in part, by human activity.

Instead, he simply questions whether the threat posed is as great as it is portrayed as being by some scientists, politicians and activists.

Thus, he begins his discussion of the topic by pointing out that climate has always changed throughout human history, and indeed prehistory – not to suggest that the changes currently occurring are of the same cause (i.e. not man-made), but rather to emphasize that we are more than capable of adapting, and that changes of similar magnitude will not mean the end of the world.

“There were warmer periods in earth’s history in medieval times and about 6,000 years ago… and… humanity and nature survived much faster warming lurches in climate during the ice ages than anything predicted for this century” (p329).[10]

“People move happily from London to Hong Kong or Boston to Miami and do not die from heat, so why should they die if their home city gradually warms by a few degrees?” (p336).

Indeed, far from denying the reality of climate change, Ridley follows former British Chancellor of the Exchequer Nigel Lawson, in his interesting book An Appeal to Reason, in actually, at least hypothetically and for the sake of argument, accepting the projections of the mainstream Intergovernmental Panel on Climate Change (IPCC) regarding future temperature increases.

Yet the key point emphasized by both Lawson and Ridley is that, under all the IPCC’s various projections, increased global warming results from increased carbon emissions, which themselves result from economic growth, particularly in what is today the Developing World.

This means that those projections which anticipate the greatest temperature increases also anticipate the greatest economic growth. This, in turn, means not only that the future descendants on whose behalf we are today asked to make sacrifices will be vastly wealthier than the people asked to make them, but also that they will have far greater resources – and, of course, more advanced technology – with which to deal with the problems posed by rising temperatures.

Thus, with regard to rising sea levels for example, one of the most often cited threats said to result from global warming, it is notable that much of the Netherlands would be underwater at high tide were it not for land reclamation (e.g. the building of dykes and pumps).

Much of this successful land reclamation in the Netherlands occurred in previous centuries, when the technologies and resources available were much more limited. In the future, with increased prosperity and advances in technology, our ability to cope with rising sea levels will be even greater.

Ridley also points out that there are likely to be benefits associated with global warming as well as problems.

For example, he cites data showing that, all around the world, more people actually die from extreme cold than from extreme heat (Zhao et al 2021).

“Globally the number of excess deaths during cold weather continues to exceed the number of excess deaths during heat waves by a large margin – by about five to one in most of Europe” (p335).

This suggests that global warming will actually save lives overall, especially since global warming is anticipated to reduce the severity of conditions of extreme cold to a greater extent than it increases the temperature in warm conditions.

Thus, Lomborg reports:

“Global warming increases cold temperatures much more than warm temperatures, thus it increases night and winter temperatures much more than day and summer temperatures… Likewise, global warming increases temperatures in temperate and Arctic regions much more than in tropical areas” (Cool It: p12).

Indeed, with regard to food supply and farm yields, Ridley concludes:

“The global food supply will probably increase if temperature rises by up to 3°C. Not only will the warmth improve yields from cold lands and the rainfall improve yields from some dry lands, but the increased carbon dioxide will itself enhance yields, especially in dry areas… Less habitat will probably be lost to farming in a warmer world” (p337).[11]

Finally, Ridley concludes by reporting:

“Economists estimate that a dollar spent on mitigating climate change brings ninety cents of benefits compared with $20 benefits per dollar spent on healthcare and $16 per dollar spent on hunger” (p388).

Actually, however, judging by Ridley’s own associated endnote, this is not the conclusion of “economists” in general at all, but rather of one particular writer who is not an economist either by training or profession – namely, leading climate change skeptic Bjørn Lomborg.

Overpopulation?

Though conveniently left off the agenda of most modern mainstream environmentalists, a strong case can be made that overpopulation represents the ultimate and most fundamental environmental issue. Other environmental problems are strictly secondary, because the reason we wreak environmental damage in the first place is precisely to provide for the increasing demands of a growing population.

Thus, concerned do-gooders who seek to lower their carbon footprints by cycling to work every day would arguably do better to simply forgo reproduction, since, by having children, they do not so much increase their own carbon footprint, as create another whole person complete with a carbon footprint all of their own.

However, in recent decades, talk of overpopulation has become politically-incorrect and taboo, because restricting reproductive rights seems redolent of eugenics and forced sterilizations, which are now, for entirely wrongheaded reasons, regarded as a bad thing.

Moreover, since population growth is now occurring largely among non-whites, especially black Africans, while whites (and many other groups, not least East Asians) reproduce at well below replacement levels and are fast being demographically displaced, even in their own indigenous ethnic homelands, talk of overpopulation also carries the faint whiff of racism and eugenics, making it especially politically incorrect.

Overpopulation has thus become ‘the environmental issue that dare not speak its name’.

Ridley concludes, however, that overpopulation is not a major concern because it handily solves itself through a curious though well-documented phenomenon known to demographers as the demographic transition, whereby increasing economic development is seemingly invariably accompanied by a decline in fertility.

There are, however, several problems with this rather convenient conclusion.

For one thing, while fertility rates have indeed fallen precipitously in developed economies in recent decades in concert with economic growth, no one really fully understands why this is happening.

Indeed, Ridley himself admits that it is “mysterious”, “an unexplained phenomenon” and that “demographic transition theory is a splendidly confused field” (p207).

Indeed, from an evolutionary psychological, sociobiological or Darwinian perspective, the so-called demographic transition is especially paradoxical, since it is Darwinism 101 that organisms should respond to resource abundance by channeling the additional resources into increased rates of reproduction so as to maximize their Darwinian fitness.

Although Ridley admits that the reasons behind this phenomenon are not fully understood, he identifies factors such as increased urbanization, female education and reduced infant mortality as the likely causal factors.[12]

However, uncertainty as to its causes does not dampen Ridley’s conviction that the phenomenon is universal and will soon be replicated in the so-called ‘Developing World’ just as surely as it occurred in ‘developed economies’.

Yet, with the stakes potentially so high, can we really place such confidence in the continuation, and global spread, of a process whose causes remain so little understood?

The second problem with seeing the demographic transition as a simple, hands-off, laissez-faire solution to overpopulation is that the observed association between economic development and population growth or stagnation is much more complex than Ridley makes out.

Thus, as we have seen, according to Ridley, living standards have been rising throughout pretty much the entirety of recorded history, and indeed prehistory. However, the below replacement level fertility rates now observed in most developed economies date only to the latter half of the twentieth century. Indeed, even as recently as the immediate post-war decades in the middle of the twentieth century, there was a famous baby boom.

Until then, fertility rates had indeed already been in decline for some time. However, this decline was more than offset by massive reductions in levels of infant mortality owing to improved health, nutrition and sanitation, such that industrialization and improved living standards were actually, until very recently, accompanied by massive increases, not decreases, in population size.

Given that much of the so-called ‘Developing World’, especially in Africa, is obviously still at a much earlier stage of development than is the contemporary west, we may still expect many decades more of population growth in Africa before any reductions eventually set in, if indeed they ever do.

Finally, this assumption that decreased fertility will inevitably accompany economic growth in the ‘Developing World’ itself presupposes that the entirety of the so-called ‘Developing World’ will indeed experience economic growth and development.

This is by no means a foregone conclusion.

Indeed, the very term ‘Developing World’, presupposing as it does that these parts of the world will indeed economically ‘develop’, may turn out to be a case of wishful thinking.

A case in point is South Africa, which was, as recently as the 1960s, widely regarded as the only ‘developed economy’ in Africa. Today, however, South Africa is usually classed as a ‘developing economy’.

This suggests that South Africa is indeed, in some sense, ‘developing’, but, unfortunately, that it just happens to be ‘developing’ in altogether the wrong direction.

Africa, Aid and Development

This leads to a related issue: if Ridley’s conclusions regarding overpopulation strike me as overly optimistic, then his prognosis for Africa seems similarly naïve, if not borderline utopian.

Critiquing international aid programmes as having failed to bring about economic development, and even as representing part of the problem, Ridley instead implicates various other factors as responsible for Africa’s perceived ‘underdevelopment’. Chief among these is a lack of recognition of property rights, which, he observes, deters both investment and the saving of resources necessary for economic growth.

Yet, Ridley insists, entrepreneurialism is rife in Africa, just waiting to be provided with the economic infrastructure (e.g. property rights) necessary to encourage it and harness it to the general welfare.

Certainly Ridley is right that there is nothing intrinsic to the African soil or air that prevents economic development as has occurred elsewhere in the world.

However, Ridley fails to explain why the factors that he implicates as holding Africa back (e.g. corrupt government, lack of property rights) are seemingly so endemic throughout much of Africa but not elsewhere in the world.

Neither does he explain why similar problems (e.g. high rates of violent crime, poverty) also beset, not just Africa itself, but also other parts of the world populated by people of predominantly sub-Saharan African ancestry, from Haiti and Jamaica to Baltimore and Detroit.

This, of course, suggests the politically-incorrect possibility that the perceived ‘underdevelopment’ of much of sub-Saharan Africa simply reflects something innate in the psychology of the indigenous people of that continent.

Immigration and Overpopulation

Yet, if Africa does not develop, then it presumably will not undergo the demographic transition either, since the latter, whatever its proximate explanation, seems to be dependent on economic growth and modernization.

This would mean that population in Africa would continue to grow, and, as population growth stalls, or even goes into reverse, in the developed world, people of sub-Saharan African descent will come to constitute an ever-increasing proportion of the world population.

Of course, population growth in a ‘Developing World’ that fails to ‘develop’ is, from a purely environmental perspective, less worrisome, since living standards are lower and hence the environmental impact, and carbon footprint, of each additional person is lower.

However, permissive immigration policies throughout much of the West have resulted in African populations, and populations from elsewhere in the Developing World, migrating into Europe, North America and other First World economies at ever increasing rates. These populations rapidly become acclimatized to western living standards, yet, in addition to being younger on average, and hence having more of their reproductive careers ahead of them, they often retain higher fertility levels than the indigenous population for several generations after migrating.

Thus, open-door immigration policies are transforming a Third World overpopulation problem into a First World overpopulation problem and into a global environmental issue as well.

The result is that white Europeans will soon find themselves as minorities even in their own indigenous European homelands. As a result, European peoples will effectively become stateless nations without a country to call their own or whose destiny they can control through the electoral system.

Of course, we are repeatedly reassured that this is not a problem, and that anyone who suggests it might be a problem is a loathsome racist, since immigrant communities and their descendants will, of course, undoubtedly successfully integrate into western culture and become westerners.

History, however, suggests that this is unlikely to be the case.

On the contrary, the assimilation of culturally, religiously and racially distinct immigrants has proven, at best, a difficult and fraught process.

Thus, in America, successive waves of European-descended immigrants (Irish, Poles, Italians, Jews) have indeed successfully assimilated into mainstream American society and lost most of their cultural uniqueness. However, African-Americans remain very much a separate community, with their own neighbourhoods, dialect and culture, despite their ancestors having been resident in the USA longer than any of these European descended newcomers, and longer even than many of the so-called ‘Anglos’.

This cannot be attributed to the unique historical experience of the African diaspora population in America (i.e. slavery, segregation etc.), since the experience of European polities in assimilating, or attempting to assimilate, nonwhite immigrant communities in the post-war period has proved similarly fraught.

Thus, quite apart from the environmental impact of a rising population with First World living standards and carbon footprints to match, to which I have already alluded, various problems are likely to result from the demographic transformation of the west, which may threaten the very survival of western civilization, at least in the form in which we have hitherto known it.

After all, civilizations and cultures are ultimately the product of the people who compose them. A Europe composed increasingly of Muslims will no longer be a western civilization but rather, in all likelihood, a Muslim one.

Meanwhile, other peoples have arguably failed to independently found civilizations of any type sufficient to warrant the designation ‘civilization’, or even to maintain advanced civilizations bequeathed to them, as the post-colonial experience in much of sub-Saharan Africa well illustrates.

Yet it is, as we have seen, these peoples who will, on current projections, come to constitute an increasing proportion of the world population, and hence presumably of immigrants to the west as well, over the course of the coming century.

This suggests that western civilization may not survive the replacement of its founding stock.[13]

Moreover, increasing ethnic diversity will also likely bring other problems, in particular the sort of ethnic conflict that seemingly inevitably besets multiethnic polities.

Thus, multiethnic states – from Lebanon and the former Yugoslavia to Rwanda and Northern Ireland – have frequently been beset by interethnic conflict and violence, and even those multiethnic polities whose divisions have yet to result in outright violence and civil war (e.g. Belgium) remain very much divided states.[14]

In transforming what were formerly monoracial, if not monoethnic, states into multiracial ones, European elites are seemingly voluntarily importing the very same sorts of ethnic conflict into their own societies.

On this view, the Muslim terrorist attacks, and various race riots, which various European countries have experienced in recent decades may prove an early foretaste and harbinger of things to come.

In addition, if western populations are currently undergoing a radical transformation in their racial and ethnic composition, these problems are only exacerbated by dysgenic fertility patterns among white westerners ourselves, whereby it is those women with the traits least conducive to maintaining an advanced technological civilization (e.g. low intelligence, low conscientiousness, poor work ethic) who are, on average, the most fecund, and who hence disproportionately bequeath their genes to the next generation, while improved medical care increasingly facilitates the survival and reproduction of the sick and ill who would otherwise have been weeded out by selection.[15]

However, besides a few paragraphs dismissing and deriding the apocalyptic prognoses of early twentieth century eugenicists (p288), these are rational, if politically incorrect, reasons for pessimism that Ridley, the self-styled rational optimist, evidently does not deign – or perhaps dare – to discuss.

The Perverse Appeal of the Apocalypse

Ridley is right to observe that tales of imminent apocalypse have proven paradoxically popular throughout history.

Indeed, despite being barely an Xennial and having lived most of my life in Britain, I have nevertheless already been fortunate enough to have survived several widely-prophesied apocalypses, from a Cold War nuclear apocalypse, to widely anticipated epidemics of BSE, HIV, SARS, bird flu, swine flu, the coronavirus and the ‘millennium bug’, all of which proved damp squibs.

Yet prophesying imminent apocalypse is, on reflection, a rather odd prediction to make. It is rather like making a bet you cannot win: if you are right, then everyone dies, and nobody is around to congratulate you on your far-seeing prescience – and neither, in all probability, are you.

It is rather like betting on your own death (i.e. paying for life insurance). If you win (if you could call it ‘winning’), then, by definition, you will not be around to collect your winnings.

Why then are stories about the coming apocalypse so paradoxically popular? After all, no one surely relishes the prospect of imminent Armageddon.

One reason is that catastrophism sells. Scare-story headlines about imminent disaster sell more newspapers to anxious readers (or, in the contemporary equivalent, attract more clicks) than do headlines berating us for how good we have it.

Activist groups also have an incentive to exaggerate the scale of problems in order to attract funding. The same is true even of scientists, who likewise have every incentive to exaggerate the scale of the problems they are investigating (e.g. climate change), or at least neglect to correct the inflated claims of activists, in order to attract research funding.

Yet I suspect the paradoxical human appetite for pessimism is rooted ultimately in what psychiatrist Randolph Nesse refers to, in a medical context, as ‘the Smoke Detector Principle’ – namely, the observation that, when it comes to potential apocalypses, since false positives are less costly than false negatives, it is wise to err on the side of caution and prepare for the worst, just in case.
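The principle can be stated in simple expected-cost terms (this formalization is my own gloss, not Nesse’s exact presentation). If a potential danger occurs with probability \(p\), a false alarm costs \(C_{\text{false}}\), and a missed danger costs \(C_{\text{miss}}\), then it pays to sound the alarm whenever:

\[ p \cdot C_{\text{miss}} > C_{\text{false}}, \quad \text{i.e.} \quad p > \frac{C_{\text{false}}}{C_{\text{miss}}}. \]

If a miss is, say, ten thousand times costlier than a false alarm, the optimal detector responds to any cue with a probability above 0.01 per cent – guaranteeing that the overwhelming majority of its alarms are false, exactly as with a household smoke detector.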

Our penchant for apocalypses may even have religious roots.

Belief in the imminence of the end time is a pervasive religious belief.

Thus, the early Christians, including in all probability Jesus himself (so historians speculate), believed that Judgement Day would occur within their own lifetimes.

Later on, Jehovah’s Witnesses believed the same thing, and actually set a date, or rather a succession of dates, rescheduling the apocalypse each time the much-heralded end time, like a British Rail train in the 1980s, invariably and inconsiderately failed to arrive on due schedule.

The same is true of countless other apocalyptic millenarian religious cults scattered across history.

Interestingly, former British Chancellor of the Exchequer Nigel Lawson suggests that the scare over global warming reflects an ancient religious belief translated into the language of ostensibly secular modern science.

Thus, he observes, throughout history, God’s vengeance on the people for their sins has been conceived of as occurring through the medium of the weather (e.g. storms, floods, lightning bolts):

“Throughout the ages… the weather has been an important part of the religious narrative. In primitive societies it was customary for extreme weather events to be explained as punishment from the gods for the sins of the people; and there is no shortage of examples of this theme in the Bible, either, particularly, but not exclusively, in the Old Testament” (An Appeal to Reason: A Cool Look at Global Warming: p102-3).

Thus, Lawson concludes that, with the decline of traditional religion:

“It is the quasi-religion of green alarmism and what has been well described as global salvationism… which has filled the vacuum, with reasoned questioning of its mantras regarded as little short of sacrilege” (An Appeal to Reason: p102).

In doing so, climate change alarmism has also replaced another substitute religion of the ostensibly secular – one that, like Christianity itself, now appears to be in its death throes, and that brought only suffering and destruction in its wake – namely, Marxism.

Thus, Lawson observes:

“With the collapse of Marxism, and to all intents and purposes of other forms of socialism too, those who dislike capitalism, not least on the global scale, and its foremost exemplar, the United States, with equal passion, have been obliged to find a new creed. For many of them, green is the new red” (An Appeal to Reason: p101).

Global warming alarmism thus provides an ostensibly secular and scientific substitute for eschatology for the resolutely irreligious.

The Cult of Progress

On the other hand, Ridley surely exaggerates the ubiquity of pessimism.

While there is indeed a market for gloom-mongering prophets of doom, belief in the reality, and the inevitability, of social, economic, political and moral progress is also pervasive, especially (but not exclusively) on the political left.

Thus, Marxists have long held that the coming of communist utopia is not just desirable but wholly inevitable, if not just around the corner, as a necessary resolution of the contradictions allegedly inherent in capitalism, as Marx himself purported to have proven scientifically.

This belief too may have religious roots. The Marxist belief that we pass into communist utopia (i.e. heaven on earth) after the revolution may reflect a perversion of the Christian belief that we pass into heaven after death and the Apocalypse. Thus, Marxism is, as Edmund Wilson first put it, “the opiate of the intellectuals”.

Nowadays, though Marx has belatedly fallen from favour, leftists retain their belief in the inevitability of social and political progress. Indeed, they have even taken to referring to themselves as ‘progressives’ and to dismissing anyone who disagrees with them as being ‘on the wrong side of history’.

On this view, the process of liberation began with the abolition of slavery, continued with the defeat of Nazi Germany and the granting of independence to former European colonies, proceeded onwards with the so-called civil rights movement in the USA in the 1950s and 60s, then successively degenerated into so-called women’s liberation, feminism, gay rights, gay pride, disabled rights, animal rights, transsexual rights etc.

Quite where this process will lead next, no one, least of all leftists themselves, seems very sure. Indeed, one suspects they dare not speculate.

Yesterday’s reductio ad absurdum of what was once dismissed in Britain as ‘loony leftism’ – the prospect of which everyone, just a few decades earlier, would have dismissed as preposterous scaremongering – is today’s reality, tomorrow’s mainstream, and the day after tomorrow’s relentlessly policed dogma and new orthodoxy. Of this, the recent furores over, first, gay marriage, and now transsexual bathroom rights, are very much cases in point.

Yet the pervasive faith in progress is by no means restricted to the left. On the contrary, as the disastrous invasions and occupations of Iraq and Afghanistan proved all too well, neoconservatives believe that Islamic tribal societies, and former Soviet republics, can be transformed into capitalist liberal democracies just as surely as unreconstructed Marxists once believed (and, in some cases, still believe) that Islamic tribal societies and capitalist liberal democracies would themselves inevitably give way to communism.

Indeed, neoconservative political scientist Francis Fukuyama arguably went even further than Marx: the latter merely prophesied the coming end of history; the former insisted it had already occurred, and, in so doing, became instantly famous for being proven almost instantly wrong.

Meanwhile, free market libertarians like Ridley himself believe that Western-style economic development, industrialization and prosperity can come to Africa just as surely as it came to Europe and East Asia.

Indeed, even Hitler was a believer in progress and utopia, his own envisaged Thousand Year Reich being almost as hopelessly utopian and unrealistic as the communist society envisaged by the Marxists.

Marx thought progress involved taking the means of production into common ownership; Thatcher thought progress involved privatizing public utilities; Hitler thought progress involved eliminating allegedly inferior races.

In short, left and right agree on the inevitability of progress. Each is, in this sense, ‘progressive’. They differ only on what they believe ‘progress’ entails.

Scientific and Political Progress

In conclusion, I agree with Ridley that scientific and technological advances will continue inexorably.

Scientific and technological progress is indeed inevitable and unstoppable. Any state or person that unilaterally renounces modern technologies will be outcompeted, both economically and militarily, and ultimately displaced, by those who wisely opt to do otherwise.

However, although technology improves, the uses to which technologies are put will remain the same, since human nature itself remains stubbornly intransigent.

Thus, as philosopher John Gray writes in Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here):

“Though human knowledge will very likely continue to grow and with it human power, the human animal will stay the same: a highly inventive animal that is also one of the most predatory and destructive” (Straw Dogs: p28, p4).

The inevitable result, Gray concludes, is that:

“Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war” (Straw Dogs: p123).

References

Payne H (2010) Environmental Progress: The Parked Mustang Test, at Planet Gore: The Hot Blog, National Review, 23 April 2010.

Ridley M (2010) The Mustang Test, RationalOptimist.com, 25 May 2010.


[1] Of course, the reason that there is a demand for additional housing in the UK is that the population of the country is rising, and it is rising entirely as a consequence of immigration, since the settled population of Britain actually reproduces at well below replacement levels. The topic of immigration is one to which I return later in this review (see ‘Immigration and Overpopulation’, above). Another factor is the increasing proportion of people living alone, owing to reduced levels of marriage and cohabitation, and increased rates of divorce and separation.

[2] This example is said to be actually historical, and not purely hypothetical. Thus, the belief that black swans did not exist was widely held in Europe for centuries, supposedly originating with the ancient Roman poet Juvenal’s Satire VI in the late first or early second century AD. Yet this conventional wisdom was supposedly overturned when the Dutch explorer Willem de Vlamingh sighted a black swan in Australia, to which continent the species is indigenous, in January 1697.

[3] Given his trenchant opposition to autarky, protectionism and tariffs, and support for free trade, it is interesting to note that Ridley was nevertheless a supporter of Brexit, despite the fact that promoting trade, competition and the free movement of goods, services and workers across international borders was a fundamental objective of European integration.
Presumably, like many Eurosceptics, Ridley believed that integration in the EU had now gone well beyond this sort of purely economic integration (i.e. a common market), as indeed it has, and that the benefits of continued membership of the EU, in terms of the free movement of goods, services, labour and capital, were outweighed by the negatives.
However, it ought to be pointed out that European integration was, from its post-war inception, never purely economic. Indeed, economic integration necessarily entails some loss of political sovereignty, since economic policy is itself an aspect of politics.

[4] I agree with Ridley that free trade is indeed beneficial, and tariffs and protectionism counterproductive, at least in purely economic terms. However, I believe that there is a case for retaining some degree of self-sufficiency at the national level (i.e. autarky), so that, in the event that international trade breaks down, for example during wartime, the population is nevertheless able to subsist and maintain itself. Today, we in the west tend to see the prospect of a war that would affect us in this way as remote. This, however, may prove to be naïve.
Perhaps, analogously, a similar case can be made for maintaining some ability to ‘live off the land’ and, if necessary, become self-sufficient at the individual level (e.g. by hunting, fishing, and growing your own crops), so as to prepare for the unlikely circumstance of the domestic economic system breaking down, whether due to natural disaster, civil war or foreign invasion. This is, of course, the objective of so-called survivalists.

[5] Although slavery may indeed eventually have become “uneconomic”, as claimed by Ridley, thanks to fossil fuels, this is not, contrary to the implication of the quoted passage, the reason slavery was abolished in the nineteenth century, although it is indeed true that, at the time, many economists claimed that it would be cheaper simply to pay wages than to incur the expense of forcibly enslaving and effectively imprisoning workers, with all the costs this entailed. In fact, however, on the abolition of slavery in the British Empire, former slaves were unwilling to work in the horrendous conditions on the sugar plantations of the Caribbean, preferring to eke out an existence through subsistence farming, and the plantations themselves became “uneconomic” until indentured labourers (slaves in all but name) were imported from Asia to take the place of the freed slaves.

[6] Despite pervasive myths of ‘noble savages’ existing in benign harmony with both nature and one another, the ‘nastiness’ and ‘brutishness’ of primitive premodern humans is beyond dispute. Indeed, even the !Kung San bushmen of the Kalahari Desert in Southern Africa, long extolled by anthropologists as ‘the gentle people’ and ‘the harmless people’, actually “have a murder rate higher than that of American inner cities” (The Blank Slate: p56). Thus, Steven Pinker reports:

“The !Kung San of the Kalahari Desert are often held out as a relatively peaceful people, and so they are, compared with other foragers: their murder rate is only as high as Detroit’s” (How the Mind Works: p51).

However, if the life of early man was indeed ‘nasty and brutish’, the ‘shortness’ of the lives of premodern peoples is sometimes exaggerated. Thus, it is often claimed that, prior to recent times, the average lifespan was only about thirty years of age. In fact, however, this is misleading, since the figure is an average dragged down by much higher rates of infant mortality; those who survived childhood could often expect to live far beyond thirty.

[7] In fact, a return to a foraging lifestyle would not be ‘natural’ for most humans, since most humans are now to some extent evolutionarily adapted to agriculture, and some may even have become adapted to the industrial and post-industrial societies in which many of us now live. The prospect of returning to what is ‘natural’ is, then, simply impossible, because there is no such thing in the first place. Though evolutionary psychologists like to talk about the environment of evolutionary adaptedness, this is, in truth, a composite of environments, not a single time or place any researcher could identify and visit with the aid merely of a compass, a research grant and a time machine.

[8] The comically villainous Mr Burns in the hugely popular animated cartoon ‘The Simpsons’ both illustrates and reinforces the general perception of nuclear power in the western world. Of course, no doubt many wealthy businessmen and investors do indeed make large amounts of money out of nuclear energy. But many wealthy businessmen and investors also make large amounts of money investing in renewable energies.

[9] In fact, of course, there is not one single factor that distinguishes us from other animals – there are many such things, albeit mostly differences of degree rather than of kind.

[10] While Ridley may be right that “nature” as a whole “survived much faster warming lurches in climate during the ice ages than anything predicted for this century”, many individual species did not. On the contrary, many species are thought to have gone extinct during these historical shifts between ice ages and interglacials.
Humans have indeed proven resilient in surviving in many different climates around the world. However, this is largely on account of our cultural inventiveness. Thus, on migrating to colder climates, we are able to keep warm by making fire, wearing clothes and building shelter, rather than having to gradually evolve thicker fur and other physiological adaptations to cold as other animals must do. Other animals lack this adaptability.
Therefore, if our concern extends beyond our own species, perhaps we should indeed worry about such fluctuations in temperature. On the other hand, it is almost certainly the destiny of all species, humans very much included, ultimately to go extinct, or at least to evolve into something new.

[11] More specifically, at least according to Bjørn Lomborg in his book Cool It, global warming will reduce farm yields and agricultural output in Africa and other tropical regions, but increase farm yields in Europe and other temperate zones, and the increases in the latter will be more than sufficient to offset the reduced agricultural output in Africa and the tropics.

[12] Many of the explanations frequently offered for the decline in fertility rates in the West do not hold up to analysis. For example, many authorities credit (or sometimes blame) feminism, or the increase in female labour force participation, for the development. However, this theory seems to be falsified by the fact that fertility rates are even lower in countries such as Japan and South Korea, where rates of female labour force participation, and the influence of feminist ideology, seem to have been much lower, at least until very recently.
My own favoured theory for the demographic transition, not mentioned by Ridley, implicates the greater availability of effective contraception technologies. Effective and widely available contraceptive technologies represent a recent invention, and hence an ‘evolutionary novelty’ that our species has not yet had sufficient time to evolve psychological mechanisms to deal with effectively.
The problem with testing this theory is that many forms of contraception were, until recently, illegal in many jurisdictions, as well as taboo, and their use therefore often covert and surreptitious, such that it is difficult to gauge just how widely available and widely used the various contraceptive technologies actually were.
However, some evidence in support of this theory is provided by the decline in fertility rates in countries such as the US and UK. Thus, in the US, the baby boom reached its peak, and a steep decline began, in 1960, exactly the same time that the contraceptive pill first came on the market. In Britain, the availability of the pill was initially quite restricted and, perhaps partly as a consequence, fertility rates peaked, and the downward trend began, somewhat later.
However, looking at the overall trends in fertility rates over time, the availability of contraception certainly cannot be the sole explanation for the changes observed.

[13] In fact, the survival of western civilization, and the form it may come to take, may depend, in part, upon which peoples and ethnicities western populations come to be predominantly replaced by.
Thus, it is often claimed by immigration restrictionists, especially those of a racialist bent, that immigrants from developing economies invariably recreate in the host nation to which they migrate the same problems that beset the country they left behind, often, ironically, the very factors (e.g. poverty, corruption) that motivated them to leave this previous homeland behind.
In fact, however, this is not always true. For example, though heirs to among the oldest and greatest civilizations of the ancient world, both India and China are, today, despite recent economic growth, still relatively poor countries, at least as compared to countries in the West. Yet, perhaps paradoxically, people of Indian and Chinese ancestry resident in the West (and indeed in other parts of the world as well) tend to be disproportionately wealthy – substantially wealthier, on average, than the white Western populations among whom they live.
However, Chinese and Indian populations resident in the West also seem to have low birth rates, as does China itself, while the fertility rate in India, though still just around replacement level in the latest available data, seems to be in free fall. In short, for better or worse, it appears that the future is African, or, as increasing numbers of Africans migrate abroad, at least of African descent.

[14] For example, much is made, and rightly so, of the success of the peace process, and subsequent settlement, in bringing (relative) peace to Northern Ireland. Yet Northern Ireland nevertheless remains, today, very much a divided society, in which ethnic tensions simmer below the surface, and no one would hold it up as a good example of a united, cohesive, functional polity, let alone as one that any but the most conflict-ridden and divided of polities should ever seek to emulate.

[15] Of course, concerns regarding overpopulation, which I have discussed earlier in this piece, will only exacerbate dysgenic fertility patterns, since it is only those with high levels of altruism who even care about the problems posed for future generations by overpopulation, and only those with high levels of self-control who are able to act on these concerns by restricting their fertility – yet both of these are socially desirable, and partly heritable, personality traits that we would wish to impart to future generations.

Mental Illness, Medicine, Malingering and Morality: The Myth of Mental Illness vs The Myth of Free Will

Thomas Szasz, Psychiatry: The Science of Lies (New York: Syracuse University Press, 2008)

The notion that psychiatric conditions, including schizophrenia, ADHD, depression, alcoholism and gambling addiction, are all illnesses ‘just like any other disease’ (i.e. just like smallpox, malaria or the flu) is obvious nonsense. 

Just as political pressure led to the reclassification of homosexuality as, not a mental illness, but a normal variation of human sexuality, so a similar campaign is currently underway in respect of gender dysphoria. Today, if someone is under the delusion that they are a member of the opposite sex, we pander to the delusion and provide them with hormone therapy, hormone blockers and sex reassignment surgery. It is as if, where a patient suffers from the delusion that they are Napoleon, then, instead of treating them for this delusion, we instead provide them with legions of troops with which to invade Prussia.

If indeed these conditions are to be called ‘diseases’, which, of course, depends on how we define ‘disease’, they are clearly diseases very much unlike the infections by pathogens with which we usually associate the word ‘disease’.

For this reason, I had long meant to read the work of Thomas Szasz, a psychiatrist whose famous (or perhaps infamous) paper, The Myth of Mental Illness (Szasz 1960), and book of the same title, questioned the concept of mental illness and, in the process, rocked the very foundations of psychiatry when first published in the 1960s. I was moreover, as the preceding two paragraphs would suggest, in principle open, even sympathetic, to what I understood to be its central thesis. 

Eventually, I got around to reading, instead, Psychiatry: The Science of Lies, a more recent, and hence, I not unreasonably imagined, more up-to-date, work of Szasz’s on the same topic.[1]

I found that Szasz does indeed marshal many powerful arguments against what is sometimes called the ‘disease model’ of mental health.

Unfortunately, however, the paradigm with which he proposes to replace this model, namely a moralistic one based on the notion of ‘malingering’ and the concept of free will, is even more problematic, and less scientific, than the disease model that he proposes to do away with.  

Physiological Basis of Illness 

For Szasz, mental illness is simply a metaphor that has come to be taken altogether too literally. 

“Mental illness is a metaphorical disease; that, in other words, bodily illness stands in the same relation to mental illness as a defective television stands to an objectionable television programme. To be sure, the word ‘sick’ is often used metaphorically… but only when we call minds ‘sick’ do we systematically mistake metaphor for fact; and send a doctor to ‘cure’ the ‘illness’. It’s as if a television viewer were to send for a TV repairman because he disapproves of the programme he is watching” (Myth of Mental Illness: p11).

But what is a disease? What we habitually refer to as diseases are actually quite diverse in aetiology. 

Perhaps the paradigmatic disease is an infection. Thus, modern medicine began with, and much of modern medicine is still based on, the so-called ‘germ theory of disease’, which assumes that what we refer to as disease is caused by the effects of germs or ‘pathogens’ – i.e. microscopic parasites (e.g. bacteria, viruses), which inhabit and pass between human and animal hosts, causing the symptoms by which disease is diagnosed as part of their own life-cycle and evolutionary strategy.[2]

However, this model seemingly has little to offer psychiatry. 

Perhaps some mental illnesses are indeed caused by infections. 

Indeed, physicist-turned-anthropologist Gregory Cochran even controversially contends that homosexuality (which is not now considered by psychiatrists to be a mental illness, despite its obviously biologically maladaptive effects – see below) may be caused by a virus.

However, this is surely not true of the vast majority of what we term ‘mental illnesses’. 

Moreover, not all physical diseases are caused by pathogens either.

For example, developmental disorders and inherited conditions are also sometimes referred to as diseases, but these are caused by genes rather than germs.

Likewise, cancer is sometimes called a disease, yet, while some cancers are indeed sometimes caused by an infection (for example, cervical cancer is usually caused by HPV, a sexually transmitted virus), many are not. 

What then do all these examples of ‘disease’ have in common, and how, according to Szasz, do so-called mental illnesses differ from conventional, bodily ailments?

For Szasz, the key distinguishing factor is an identified underlying physiological cause for, or at least correlate of, the symptoms observed. Thus, Szasz writes: 

“The traditional medical criterion for distinguishing the genuine from the facsimile – that is, real illness from malingering – was the presence of demonstrable change in bodily structure as revealed by means of clinical examination of the patient, laboratory tests on bodily fluids, or post-mortem study of the cadaver” (Myth of Mental Illness: p27).

Thus, in all cases of what Szasz regards as ‘real’ disease, a real physiological correlate of some sort has been discovered, whether a microbe, a gene or a cancerous growth. 

In contrast, so-called mental illnesses were first identified, and named, purely on the basis of their symptomology, without any understanding of their underlying physiological cause. 

Of course, many diseases, including physical diseases, are, in practice, diagnosed by the symptoms they produce. A GP, for example, will typically diagnose flu without actually observing and identifying the flu virus itself inside the patient under a microscope. 

However, the existence of the virus, and its causal role in producing the symptoms observed, has indeed been demonstrated scientifically in other individuals afflicted with the same or similar symptoms. We therefore recognise the underlying cause of these symptoms (i.e. the virus) independently of the symptoms they produce.

This is not true, however, for mental illnesses. The latter were named, identified and diagnosed long before there was any understanding of their underlying physiological basis. 

Rather than diseases, we might then more accurately call them syndromes, a word deriving from the Greek ‘σύνδρομον’, meaning ‘concurrence’, which is usually employed in medicine to refer simply to a cluster of signs and symptoms that seem to correlate together, whether or not the underlying cause is understood.[3]

Causes and Correlates 

The main problem for Szasz’s position is that our understanding of the underlying physiological causes of psychiatric conditions – neurological, genetic and hormonal – has progressed enormously since he first authored The Myth of Mental Illness, the paper and the book, at the beginning of the 1960s. 

Yet reading ‘Psychiatry: The Science of Lies’, published in 2008, it seems that Szasz’s own position has advanced but little.[4]

Psychiatry and psychology, meanwhile, have come a long way in the intervening half-century.

Thus, in 1960, American psychiatry was still largely dominated by Freudian psychoanalysis, a pseudoscience roughly on a par with phrenology, and one of which Szasz is rightly dismissive.[5]

Of particular relevance to Szasz’s thesis, the study of the underlying physiological basis for psychiatric disorders has also progressed massively.  

Every month, in a wide array of scientific journals, studies are published identifying neurological, genetic, hormonal and other physiological correlates for psychiatric conditions. 

In contrast, Szasz, although he never spells this out, seems to subscribe to an implicit Cartesian dualism, whereby human emotions, psychological states and behaviour are a priori assumed, in principle, to be irreducible to mere physiological processes.[6]

Szasz claims in Psychiatry: The Science of Lies that, once an underlying neurological basis for a mental illness has been identified, it ceases to be classified as a mental illness, and is instead classed as a neurological disorder. His paradigmatic example of this is Alzheimer’s disease (p2).[7]

Yet, today, the neurological correlates of many mental illnesses are increasingly understood. 

Nevertheless, despite the progress that has been made in identifying physiological correlates for mental disorders, there remain at least two differences between these correlates (neurological, genetic, hormonal etc.) and the recognised causes of both physical and neurological diseases.

First, in the case of mental illnesses, the neurological, genetic, hormonal and other physiological correlates remain just that, i.e. mere correlates.

Here, I am not merely reiterating the familiar caution that correlation does not imply causation, but also emphasizing that the correlations in question tend to be far from perfect, and do not form the basis for a diagnosis, even in principle. 

In other words, as a rule, few such identified correlates are present in every single person diagnosed with the condition in question. The correlation is established only at the aggregate statistical level. 

Moreover, those persons who present the symptoms of a mental illness but who do not share the physiological correlate that has been shown to be associated with this mental illness are not henceforth identified as not truly suffering from the mental illness in question. 

In other words, not only is diagnosis determined, as a matter of convenience and practicality, by reference to symptoms (as is also often true for many physical illnesses), but mental illnesses remain, in the last instance, defined by the symptoms they produce, not any underlying physiological cause. 

Any physiological correlates for the condition are ultimately incidental and have not caused physicians to alter their basic definition of the condition itself. 

Second, the identified correlates are, again as a general rule, multiple, complex and cumulative in their effects. In other words, there is not one single identified physiological correlate of a given mental illness, but rather multiple identified correlates, often each having small cumulative effects on the probability of a person presenting symptoms.

This second point might be taken as vindicating Szasz’s position that mental illnesses are not really illnesses. 

Thus, recent research on the genetic correlates of mental illnesses, as summarized by Robert Plomin in his book Blueprint: How DNA Makes Us Who We Are, has found that the genetic variants that cause psychiatric disorders are the exact same genetic variants which, when present to a lesser degree, also cause normal, non-pathological variation in personality and temperament.

This suggests that, at least at the genetic level (and thus presumably at the phenotypic level too), what we call mental illness is just an extreme presentation of what is normal variation in personality and behaviour. 

In other words, so-called mental illness simply represents the extreme tail-end of the normal bell curve distribution in personality attributes. 
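To make this concrete, here is a minimal sketch (my own illustration, not anything drawn from Plomin or Szasz; the trait and the cutoffs are hypothetical): if a ‘disorder’ is defined simply as a diagnostic cutoff imposed on a normally distributed trait, then its ‘prevalence’ is just the area of the tail beyond that cutoff.

```python
# Illustrative sketch only: a 'disorder' modelled as a diagnostic cutoff
# imposed on a normally distributed personality trait. The trait and the
# cutoffs are hypothetical, chosen purely for illustration.
from statistics import NormalDist

trait = NormalDist(mu=0, sigma=1)  # e.g. conscientiousness, standardized

for cutoff_sd in (1.5, 2.0, 2.5):
    # Share of the population falling in the lower tail beyond the cutoff
    prevalence = trait.cdf(-cutoff_sd)
    print(f"cutoff {cutoff_sd} SD below the mean -> "
          f"{prevalence:.1%} of the population 'diagnosable'")
```

On this picture, where the cutoff is placed, and hence who counts as ‘ill’, is a decision rather than a discovery, which is precisely the point at issue.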

This is most obviously true of the so-called personality disorders. Thus, a person extremely low in empathy, or the factor of personality referred to by psychometricians as agreeableness, might be diagnosed with anti-social personality disorder (or psychopathy). 

However, it is also true for other so-called mental disorders. For example, ADHD (attention deficit hyperactivity disorder) seems to be mere medical jargon for someone who is very impulsive, with a short attention span, and lacking self-discipline (i.e. low in the factor of personality that psychometricians call conscientiousness) – all traits which vary on a spectrum across the whole population.

On the other hand, clinical depression, unlike personality, is a temporary condition from which most people recover. Nevertheless, it is so strongly predicted by the factor of personality known to psychometricians as neuroticism that psychologist Daniel Nettle writes: 

“Neuroticism is not just a risk factor for depression. It is so closely associated with it that it is hard to see them as completely distinct” (Personality: p114).

Yet calling someone ‘ill’ because they are at the extreme of a given facet of personality or temperament is not very helpful. It is roughly equivalent to calling a basketballer ‘ill’ because he is exceptionally tall, a jockey ‘ill’ because he is exceptionally small, or Albert Einstein ‘ill’ because he was exceptionally intelligent.

Mental Illness or Malingering?

While Szasz has therefore correctly identified problems with the conventional disease model of mental health, the model which he proposes in its place is, in my view, even more problematic, and less scientific, than the disease model that he has rightly rejected as misleading.

Most unhelpful is the central place given in his theory to the notion of malingering, i.e. the deliberate faking of symptoms by the patient. 

This analysis may be a useful way to understand the nineteenth century outbreak of so-called hysteria, to which Szasz devotes considerable attention, or indeed the modern diagnosis of Munchausen syndrome, which again involves complaining of imagined or exaggerated physical symptoms. 

It may also be a useful way to understand the recently developed diagnosis of chronic fatigue syndrome (CFS, formerly ME), which, like hysteria, involves the patient complaining of physical symptoms for which no physical cause has yet been identified. 

Interestingly from a psychological perspective, all three of these conditions are overwhelmingly diagnosed among women and girls rather than men and boys. 

However, malingering may also be a useful way to understand another psychiatric condition that was primarily reported by men, albeit for obvious historical reasons – namely, so-called ‘shell shock’ (now classed as PTSD) among soldiers during World War One.[8]

Here, unlike with hysteria and CFS, the patient’s motive and rationale for faking the symptoms in question (if this is indeed what they were doing) is altogether more rational and comprehensible – namely, to avoid the horrors of trench warfare (from which women were, of course, exempt). 

However, this model of ‘malingering’ is clearly much less readily applicable to sufferers of, say, schizophrenia.

Here, far from malingering or faking illness, those afflicted will often vehemently protest that they are not ill and that there is nothing wrong with them. However, their delusions are often such that, by any ordinary criteria, they are undoubtedly, in the colloquial if not the strict medical sense, completely fucking bonkers. 

The model of malingering can, therefore, only be taken so far. 

Defining Mental Illness? 

The fundamental fallacy at the heart of psychiatry is, according to Szasz, the mistaking of moral problems for medical ones. Thus, he opines: 

“Psychiatrists cannot expect to solve moral problems by medical methods” (Myth of Mental Illness: p24).

Szasz has a point. Despite employing the language of science, there is undoubtedly a moral dimension to defining what constitutes mental illness. 

Whether a given cluster of associated behaviours represents just a cluster of associated behaviours or a mental illness is not determined on the basis of objective scientific criteria. 

Rather, most American psychiatrists simply regard as a mental illness whatever the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association classifies as a mental disorder. 

This manual is treated as gospel by psychiatrists, yet there are no systematic or agreed criteria for inclusion within this supposedly authoritative work.

Popular cliché has it that mental illnesses are caused by a ‘chemical imbalance’ in the brain.  

Certainly, if we are materialists, we must accept that it is ultimately the chemical composition of the brain that causes behaviour, pathological or otherwise. 

But on what criteria are we to say that a certain chemical composition of the brain is an ‘imbalance’ and another is ‘balanced’, one behaviour ‘pathological’ and one ‘normal’? 

The criterion on which we make this judgement is, as I see it, primarily a moral one.[9]

More specifically, mental illnesses are defined as such, at least in part, because the behavioral symptoms that they produce tend to cause suffering or distress either to the person defined as suffering from the illness, or to those around them. 

Thus, a person diagnosed with depression is themselves the victim of suffering or distress resulting from the condition; a person diagnosed with psychopathy, on the other hand, is likely to cause psychological distress to those around them with whom they come into contact. 

This is a moral, not a scientific, criterion, depending as it does on the notion of suffering or harm.

Indeed, it is not only a moral question, but it is also one that has, in recent years, been heavily politicized. 

Thus, gay rights activists actively and aggressively campaigned for many years to have homosexuality withdrawn from the DSM and reclassified as non-pathological, and, in 1974, they were successful.[10]

This campaign may have had laudable motives, namely to reduce the stigma associated with homosexuality and prejudice against homosexuals. Yet it clearly had nothing to do with science and everything to do with politics and morality. 

Indeed, homosexuality satisfies many criteria for illness.[11]

First, it is, despite some ingenious and some not so ingenious attempts to show otherwise, obviously biologically maladaptive. 

Whereas the politically correct view is that homosexuality is an entirely natural, normal and non-pathological variation of human sexuality, from a Darwinian perspective this view is obviously untenable.

Homosexual sex cannot produce offspring. Homosexuality therefore involves a maladaptive misdirection of mating effort, which would surely be strongly selected against by natural selection.[12]

Homosexuality is therefore best viewed as a malfunctioning of normal sexuality, just as cancer is a kind of malfunctioning of cell growth and division. In this sense, then, homosexuality is indeed best viewed as something akin to an illness. 

Second, homosexuality shows some degree of comorbidity with other forms of mental illness, such as depression.[13]

Finally, homosexuality is associated with other undesirable life outcomes, such as reduced longevity and, at least for male homosexuals, a greater lifetime susceptibility to various STDs.[14]

Yet, just as homosexuals successfully campaigned for the removal of homosexuality from the DSM, so ‘trans rights’ campaigners are currently embarking on a similar campaign in respect of gender dysphoria

The politically correct consensus today holds that an adult or child who claims to identify as a member of the opposite ‘gender’ to their biological sex should be encouraged and supported in their ‘transition’, and provided with hormone therapy, hormone blockers and sex reassignment surgery, as requested.

This is, as I remarked at the outset of this review, roughly equivalent to responding to a patient’s delusion that he is Napoleon, not by treating the delusion, but by providing him with legions of troops with which to invade Prussia.

Moving beyond the sphere of sexuality, some self-styled ‘neurodiversity’ activists have sought to reclassify autism as a normal variation of mental functioning, a claim that may appear superficially plausible in respect of certain forms of so-called ‘high functioning autism’, but is clearly untenable in respect of ‘low functioning autism’.[15]

Yet, on the other hand, there is oddly no similar, high-profile campaign to reclassify, say, anti-social personality disorder (ASPD) or psychopathy as a normal, non-pathological variant of human psychology. 

Indeed, psychopathy may well be biologically adaptive, at least under some conditions (Mealey 1995).

Nevertheless, no one proposes treating psychopathy as normal or natural variation in personality, even though it is likely just that.

The reason that there is no campaign to remove psychopathy from the DSM is, of course, because, unlike homosexuals, transsexuals and autistic people, psychopaths are hugely disproportionately likely to cause harm to innocent non-consenting third parties.

This is indeed a good reason to treat psychopathy and anti-social personality disorder as a problem for society at large. However, this is a moral not a scientific reason for regarding it as problematic. 

To return to the question of disorders of sexuality, another useful point of comparison is provided by paedophilia.

From a purely biological perspective, paedophilia is analogous to homosexuality. Both are biologically maladaptive because they involve sexual attraction to a partner with whom reproduction is, for biological reasons, impossible.[16]

Yet, unlike in the case of homosexuality, there has been no mainstream political push for paedophilia to be reclassified as non-pathological or removed from the Diagnostic and Statistical Manual of Mental Disorders of the APA.[17]

The reason for this is again, of course, obvious and entirely reasonable, yet it equally obviously has nothing to do with science and everything to do with morality – namely, whereas homosexual behaviour as between consenting adults is largely harmless, the same cannot be said for child sexual abuse.[18]

Perhaps an even better analogy would be between homosexuality and, say, necrophilia

Necrophilic sexual activity, like homosexual sexual activity, but quite unlike paedophilic sexual activity, represents something of a victimless crime. A corpse, being dead, cannot suffer by virtue of being violated.[19]

Yet no one would argue that necrophilia is a healthy and natural variation on normal human sexuality. 

Of course, although numbers are hard to come by due to the attendant stigma, necrophilia is presumably much less common, and hence much less ‘normal’, than is homosexuality. However, if this is a legitimate reason for regarding homosexuality as more ‘normal’ than necrophilia, then it is also a legitimate reason for regarding homosexuality itself as ‘abnormal’, because homosexuality is, of course, much less common than heterosexuality.

Necrophile rights is, therefore, the reductio ad absurdum of gay rights.[20]

Medicine or Morality? 

The encroachment of medicine upon morality continues apace, as part of what Szasz calls ‘the medicalization of everyday life’. Thus, there is seemingly no moral failing or character defect that is not capable of being redefined as a mental disorder.

Selfish people are now psychopaths; people lacking in willpower and with short attention spans now have ADHD.

But if these are simply variations of personality, does it make much sense to call them diseases? 

Yet the distinction between ‘mad’ and ‘bad’ also has practical application in the operation of the criminal justice system.

The assumption is that mentally ill offenders should not be punished for their wrongdoing, but rather treated for their illness, because they are not responsible for their actions. 

But, if we accept a materialist conception of mind, then all behaviour must have a basis in the brain. On what basis, then, do we determine that one person is mentally ill while another is in control of his faculties?

As Robert Wright observes: 

“[Since] in both British and American courts, women have used premenstrual syndrome to partly insulate themselves from criminal responsibility… can a ‘high-testosterone’ defense of male murderers be far behind?… If defense lawyers get their way and we persist in removing biochemically mediated actions from the realm of free will, then within decades [as science progresses] the realm will be infinitesimal” (The Moral Animal: p352-3).[21]

Yet a man claiming that, say, high testosterone caused his criminal behaviour is unlikely to be let off on this account, because, if high testosterone does indeed cause crime, then we have good reason to lock up high testosterone men precisely because they are likely to commit crimes.[22]

Szasz wants to resurrect the concept of free will and hold everyone, even those with mental illnesses, responsible for their actions. 

My view is the opposite: No one has free will. All behaviour, normal or pathological, is determined by the physical composition of the brain, which is, in turn, determined by some combination of heredity and environment. 

Indeed, determinism is not so much a finding of science as its basic underlying assumption and premise.[23]

In short, science rests on the assumption that all events have causes and that, by understanding the causes, we can predict behaviour. If this were not true, then there would be no point in doing science, and science would not be able to make any successful predictions. 

It follows that criminal punishment must be based on consequentialist utilitarian considerations such as deterrence, incapacitation and rehabilitation, rather than on such unscientific moralistic notions as free will, just deserts and blame.[24]

A Moral Component to All Medicine? 

Szasz is right, then, to claim that there is a moral dimension to psychiatric diagnoses. 

This is why psychopathy is still regarded as a mental disorder even though it is likely an adaptive behavioural strategy and life history in certain circumstances (Mealey 1995). 

It is also why homosexuality is no longer regarded as a mental illness, despite its obviously biologically maladaptive consequences, yet there is no similar campaign to remove paedophilia from the DSM. 

Yet what Szasz fails to recognise is that there is a moral element to the identification and diagnosis of physical illnesses too.

Thus, physical illnesses, like psychiatric illnesses, are called illnesses, at least in part, because they cause pain, suffering and impairment in normal functioning to the person diagnosed as suffering from the illness. 

If, on the other hand, an infection did not produce any unpleasant symptoms, then the patient would surely never bother to seek medical treatment and thus the infection would probably never come to the attention of the medical profession in the first place. 

If it did come to their attention, would they still call it a disease? Would they expend time and resources attempting to ‘cure’ it? Hopefully not, as to do so would be wasteful.

Extending this thought experiment, what if the infection in question not only caused no negative symptoms, but actually had positive effects on the person infected?

What if the infection in question caused people to be fitter, smarter, happier, kinder and more successful at their jobs? 

Would doctors still call the infection a ‘disease’, and the microscopic organism underlying it a ‘germ’? 

Actually, this hypothetical thought experiment may not be entirely hypothetical. 

After all, there are indeed surely many microorganisms that infect humans which have few or negligible effects, positive or negative, and with which neither patients nor doctors are especially concerned. 

On the other hand, some infections may be positively beneficial to their hosts. 

Take, for example, gastrointestinal microbiota (also known as gut microbiota). 

These are microorganisms that inhabit our digestive tracts, and those of other organisms, and are thought to have a beneficial effect on the health and functioning of the host organism. They have even been marketed as ‘probiotics’ and ‘good bacteria’ in the advertising campaigns for certain yoghurt-like drinks.

Another less obvious example is perhaps provided by mitochondrial DNA

In our ancient evolutionary history, this began as the DNA of a separate organism, a bacterium, that infected host organisms, but ultimately formed a symbiotic and mutualistic relationship with its hosts, and it now plays a key role in the functioning of those organisms whose distant ancestors it first infected.

In short, all medicine has a moral dimension.  

This is because medicine is an applied, not a pure, science. 

In other words, medicine aims not merely to understand disease in the abstract, but to treat it. 

We treat diseases to minimize human suffering, and the minimization of human suffering is ultimately a moral (or perhaps economic, since doctors are paid, and provide a service to their patients), rather than a purely scientific, endeavour. 

Endnotes

[1] Although this post is a review of Thomas Szasz’s Psychiatry: The Science of Lies, readers may note that many of the quotations from Szasz in the review are actually taken from his earlier, more famous book, The Myth of Mental Illness, published several decades previously. By way of explanation, while this essay is a review of Szasz’s Psychiatry: The Science of Lies, I listened to an audiobook version of this book, and do not have access to a print copy. It was therefore difficult to source quotations from this book. In contrast, I own a copy of The Myth of Mental Illness, but have yet to read it in full. I thought it more useful to read a more recent statement of Szasz’s views, so as to find out how he has dealt with recent findings in biological psychiatry and behavioural genetics. Unfortunately, as I discuss above, it seems that Szasz has reacted to such findings hardly at all, and includes few if any references to these developments in his more recent book.

[2] Thus, proponents of Darwinian medicine contend that many infections produce symptoms such as coughing, sneezing and diarrhea precisely because these symptoms facilitate the spread of the disease through contact with the bodily fluids expelled, hence promoting the pathogens’ own Darwinian fitness or reproductive success.

[3] For example, the underlying physical cause of chronic fatigue syndrome (CFS) is not fully understood. On the other hand, the underlying cause of acquired immunodeficiency syndrome (AIDS) is now understood, namely HIV infection, but, presumably because it involves increased susceptibility to many different infections, it is still referred to as a syndrome rather than a disease in and of itself.

[4] Indeed, according to Szasz himself, in an autobiographical interlude in ‘Psychiatry: The Science of Lies’, he had arrived at his opinion regarding the scientific status of psychiatry even earlier, when first making the decision to train to become a psychiatrist. Indeed, he claims to have made the decision to study psychiatry and qualify as a psychiatrist precisely in order to attack the field from within, with the authority which this professional qualification would confer upon him. This, it hardly needs to be said, is a very odd reason for a career choice.

[5] Attacking modern psychiatry by a critique of Freud is a bit like attacking neuroscience by critiquing nineteenth century phrenology. It involves constructing a straw man version of modern psychiatry. I am reminded in particular of Arthur Jensen’s review of infamous charlatan Stephen Jay Gould’s discredited The Mismeasure of Man, which Jensen titled The debunking of scientific fossils and straw persons, where he described Gould’s method of trying to discredit the modern science of IQ testing and intelligence research by citing the errors of nineteenth-century craniologists as roughly akin to “trying to condemn the modern automobile by merely pointing out the faults of the Model T”.

[6] In The Myth of Mental Illness, Szasz writes:

“There remains a wide circle of physicians and allied scientists whose basic position concerning the problem of mental illness is essentially that expressed in Carl Wernicke’s famous dictum: ‘Mental diseases are brain diseases’. Because, in one sense, this is true of such conditions as paresis and the psychoses associated with systemic intoxications, it is argued that it is also true for all other things called mental diseases. It follows that it is only a matter of time until the correct physicochemical, including genetic, bases or ‘causes’ of these disorders will be discovered. It is conceivable, of course, that significant physicochemical disturbances will be found in some ‘mental patients’ and in some ‘conditions’ now labeled ‘mental illnesses’. But this does not mean that all so-called mental diseases have biological ‘causes’, for the simple reason that it has become customary to use the term ‘mental illness’ to stigmatize, and thus control, those persons whose behavior offends society—or the psychiatrist making the ‘diagnosis’” (The Myth of Mental Illness: p103).

Yet, if we accept a materialist conception of mind, then all behaviours, including those diagnostic of mental illness, must have a cause in the brain, though it is true that the same behaviours may result from quite different neuroanatomical causes.
It is certainly true that the concept of mental illness has been used to “stigmatize, and thus control, those persons whose behavior offends society”. So-called drapetomania provides an obvious example, albeit one that was never widely recognised by physicians, at least outside the American South. Another example would be the diagnosis of sluggish schizophrenia used to institutionalize political dissidents in the Soviet Union. Likewise, psychopathy (aka sociopathy or anti-social personality disorder) may, as I argue later in this post, have been classified as a mental disorder primarily because the behaviour of people diagnosed with this condition does indeed “offend society” and arguably demand the “control”, and sometimes detention, of such people.
However, this does not mean that the behaviours complained of (e.g. political dissidence, or anti-social behaviours) will not have neural or other physiological correlates. On the contrary, they undoubtedly do, and psychologists have also investigated the neural and other physiological correlates of all behaviours, not just those labelled as pathological and as ‘mental illnesses’.
However, Szasz does not quite go so far as to deny that behaviours have physical causes. On the contrary, in The Myth of Mental Illness, hedging his bets against future scientific advances, Szasz acknowledges:

“I do not contend that human relations, or mental events, take place in a neurophysiological vacuum. It is more than likely that if a person, say an Englishman, decides to study French, certain chemical (or other) changes will occur in his brain as he learns the language. Nevertheless, I think it would be a mistake to infer from this assumption that the most significant or useful statements about this learning process must be expressed in the language of physics. This, however, is exactly what the organicist claims” (The Myth of Mental Illness: p102-3).

Here, Szasz makes a good point – but only up to a point. Whether we are what Szasz calls ‘organicists’ or not, I’m sure we can all agree that, for most purposes, it is not useful to explain the decision to learn French in terms of neurophysiology. To do so would be an example of what philosopher Daniel Dennett, in Darwin’s Dangerous Idea, calls ‘greedy reductionism’, as distinguished from the ‘good reductionism’ that is central to science.
However, it is not clear that the same is true of what we call mental illnesses. Often it may indeed be useful to understand mental illnesses in terms of their underlying physiological causes, including for therapeutic reasons, since understanding the physiological basis for behaviour that we deem undesirable may provide a means of changing these behaviours by altering the physical composition of the brain. For example, if the neurotransmitter serotonin is involved in regulating mood, then manipulating levels of serotonin in the brain, or its reuptake, may be a way of treating depression, anxiety and other mood disorders. Thus, SSRIs and SNRIs, which are thought to work in just this way, have indeed been found to be effective in treating such conditions.
However, for other purposes, it may be useful to look at a different level of causation. For example, as I discuss in a later endnote, although it may be scientifically a nonsense, it may nevertheless be useful to inculcate a belief in free will among some psychiatric patients, since it may encourage them to overcome their problems rather than adopting the fatalistic view that they are ill and there is hence nothing they can do to improve their predicament. Szasz sometimes seems to be arguing for something along these lines.

[7] In The Myth of Mental Illness, as quoted in the preceding endnote, Szasz also gives as examples of behavioural conditions with well-established physiological causes “paresis and the psychoses associated with systemic intoxications(The Myth of Mental Illness: p103).

[8] I hasten to emphasize in this context, lest I am misunderstood, that I am not saying that Szasz’s model of ‘malingering’ is indeed the appropriate way to understand conditions such as hysteria, Munchausen syndrome, chronic fatigue syndrome or shell shock – only that a reasonable case can be made to this effect. Personally, I do not regard myself as having sufficient expertise on the topic to be willing to venture an opinion either way.

[9] Of course, we could determine whether a certain composition and structure of the brain is ‘balanced’ or ‘imbalanced’ on non-moralistic, Darwinian criteria. In other words, if a certain composition/structure and the behaviour it produces is adaptive (i.e. contributes to the reproductive success or fitness of the organism), then we could call it ‘balanced’; if, on the other hand, it produces maladaptive behaviour, we could call it ‘imbalanced’. However, this would produce a quite different inventory and classification of mental illnesses than that provided by the DSM of the APA and other similar publications, since, as we will see, homosexuality, being obviously biologically maladaptive, would presumably be classified as an ‘imbalance’ and hence a mental illness, whereas psychopathy, since it may well, under certain conditions, be adaptive, would be classed as non-pathological and hence ‘balanced’. This analysis, however, has little to do with mental illness as the concept is currently conceived.

[10] Oddly, Szasz himself is sometimes lauded by politically correct types as being among the first psychiatrists to deny that homosexuality was a mental illness. Yet, since he also denied that schizophrenia was a mental illness, and indeed rejected the whole concept of ‘mental illness’ as it is currently conceived, this is not necessarily as ‘progressive’ and ‘enlightened’ a view as it is sometimes credited as having been.

[11] Here, a few caveats are in order. Describing homosexuality as a mental illness no more indicates hatred towards homosexuals than describing schizophrenia as a mental illness indicates hatred towards people suffering from schizophrenia, or describing cancer as an illness indicates hatred towards people afflicted with cancer. In fact, regarding a person as suffering from an illness is generally more likely to elicit sympathy for the person so described than it is hatred.
Of course, being diagnosed with a disease may involve some stigma. But this is not the same as hatred.
Moreover, as should be clear from my conclusion, I am not, in fact, arguing that homosexuality should indeed be classified as a mental illness. Rather, I am simply pointing out that it is difficult to frame a useful definition of what constitutes a ‘mental disorder’ unless that definition includes moral criteria, which are necessarily extra-scientific. However, in the final section of this piece, I argue that there is indeed a moral component to all medicine, psychiatry included.
Of course, as I also discuss above, there are indeed some moral reasons for regarding homosexuality as undesirable, for example its association with reduced longevity, which is generally regarded as an undesirable outcome. However, whether homosexuality should indeed be classed as a ‘mental disorder’ strikes me as debatable and also dependent on the exact definition of ‘mental disorder’ adopted.

[12] If homosexuality is therefore maladaptive, this, of course, raises the question as to why homosexuality has not indeed been eliminated by natural selection. The first point to make here is that homosexuality is in fact quite rare. Although Kinsey famously originated the since-popularized claim that as many as 10% of the population are homosexual, reputable estimates using representative samples generally suggest less than 5% of the population identifies as exclusively or preferentially homosexual (though a larger proportion of people may have had homosexual experiences at some time, and the ‘closet factor’ makes it possible to argue that, even in an age of unprecedented tolerance and indeed celebration of homosexuality, and even in anonymous surveys, this may represent an underestimate due to underreporting).
Admittedly, there has recently been a massive increase in the numbers of teenage girls identifying as non-heterosexual, with numbers among this age group now slightly exceeding 10%. However, I suspect that this is more a matter of fashion than of sexuality. Thus, it is notable that the largest increase has been for identification as ‘bisexual’, which provides a convenient cover by which teenage girls can identify with the so-called ‘LGBT+ community’ while still pursuing normal, healthy relationships with opposite-sex boys or men. The vast majority of these girls will, I suspect, grow up to have sexual and romantic relationships primarily with members of the opposite sex.
Yet even these low figures are perhaps higher than one might expect, given that homosexuality would be strongly selected against by evolution. (However, it is important to remember that, when homosexuals were persecuted and hence mostly remained in the ‘closet’, homosexuality would have been less selected against, precisely because so many gay men and women would have married members of the opposite sex and reproduced if only to evade accusations of homosexuality. With greater tolerance, however, they no longer have any need to do so. The liberation of homosexuals may therefore, paradoxically, lead to their gradual disappearance through selection.)
A second point to emphasize is that, contrary to popular perception, homosexuality is not especially heritable. Indeed, it is rather less heritable than other behavioural traits whose heritability it is far less politically correct to speculate about (e.g. criminality, intelligence).
If homosexuality is primarily caused by environmental factors, not genetics, then it would be more difficult for natural selection to weed it out. However, given that exclusive or preferential homosexuality would be strongly selected against by natural selection, humans should have evolved to be resistant to developing exclusive or preferential homosexuality under all environmental conditions that were encountered during evolutionary history. It is possible, however, that environmental novelties, atypical of the environments in which our psychological adaptations evolved, are responsible for causing homosexuality.
For what it’s worth, my own favourite theory (although not necessarily the best supported theory) for the evolution of male homosexuality proposes that genes located on the X chromosome predispose a person to be sexually attracted to males. This attraction is adaptive for females, but maladaptive for males. However, since females have two X chromosomes and males only one, any X chromosome genes will find themselves in females twice as often as they find themselves in males. Therefore, any increase in fitness for females bearing these X chromosome genes only has to be half as great as the reproductive cost to males for the genes in question to be positively selected for.
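To spell out the arithmetic behind this claim (my own back-of-envelope rendering of the standard sexually antagonistic selection argument, not a calculation given by any of the authors cited here): since two of every three X chromosomes in the population reside in females, an X-linked allele that confers a fitness benefit $b$ on female carriers at a fitness cost $c$ to male carriers will, to a first approximation, be favoured whenever

$$\frac{2}{3}\,b - \frac{1}{3}\,c > 0 \quad\Longleftrightarrow\quad b > \frac{c}{2}$$

that is, whenever the benefit to females exceeds half the cost to males, which is the sense in which the increase in female fitness ‘only has to be half as great’ as the reproductive cost to males.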
This is sometimes called the ‘balancing selection theory’ of male homosexuality. However, perhaps more descriptive and memorable is Satoshi Kanazawa’s coinage, ‘the horny sister hypothesis’.
This theory also has some support, in that there is some evidence the female relatives of male homosexuals have a greater number of offspring than average and also that gay men report having more gay uncles on their mother’s than their father’s side, consistent with an X chromosome-linked trait (Hamer et al 1993; Camperio-Ciani et al 2004). Some genes on the X chromosome have also been linked to homosexuality (Hamer et al 1993; Hamer 1999).
On the other hand, other studies find no support for the hypothesis. For example, Bailey et al (1999) found that rates of reported homosexuality were no higher among maternal than among paternal male relatives, as did McKnight & Malcolm (1996). At any rate, as explained by Wilson and Rahman in their excellent book Born Gay: The Psychobiology of Sexual Orientation:

“Increased rates of gay maternal relatives might also appear because of decreased rates of reproduction among gay men. A gay gene is unlikely to be inherited from a gay father because a gay man is unlikely to have children” (Born Gay: p51; see also Risch et al 1993).

[13] Gay rights activists assert that the only reason homosexuality is associated with other forms of mental illness is the stigma to which homosexuals are subject on account of their sexuality. This has sometimes been termed the ‘social stress hypothesis’, ‘social stress model’ or ‘minority stress model’. There is indeed statistical support for this theory, insofar as social stigma is associated with higher rates of depression and other mental illnesses.
It is also notable that, while homosexuality is indeed consistently associated with higher levels of depression and suicide, outcomes that can obviously be viewed as a direct response to social stigma, I am not aware of any evidence suggesting higher rates of, say, schizophrenia among homosexuals, which would less obviously, or at least less directly, result from social stress. However, I tend to agree with the conclusions of Mayer and McHugh, in their excellent review of the literature on this subject, that, while social stress may indeed explain some of the increased rate of mental illness among homosexuals, it is unlikely to account for the totality of it (Mayer & McHugh 2016).

[14] Yet, in describing the life outcomes associated with homosexuality as undesirable, I am, of course, making an extra-scientific value judgement. Of course, the value judgement in question – namely, that dying earlier and being disproportionately likely to contract STDs is a bad thing – is not especially controversial. However, it still illustrates the extent to which, as I discuss later in this post, definitions of mental illnesses, and indeed physical illnesses, always include a moral dimension – i.e. diseases are defined, in part, by the fact that they cause suffering, either to the person afflicted or, in the case of some mental illnesses, to the people in contact with them.

[15] That autism is indeed maladaptive and pathological is also suggested by the well-established correlation between paternal age and autism in offspring, since this has been interpreted as reflecting the build-up of deleterious mutations in the sperm of older males.

[16] Indeed, from a purely biological perspective, homosexuality is arguably even more biologically maladaptive than is paedophilia, since even very young children can, in some exceptional cases, become pregnant and even successfully birth offspring, yet same-sex partners are obviously completely incapable of producing offspring with one another.

[17] Indeed, far from there being any political pressure to remove paedophilia from the American Psychiatric Association’s DSM, as occurred with homosexuality, there is instead increasing pressure to add hebephilia (i.e. attraction to pubescent and early-post-pubescent adolescents) to the DSM. If successful, this would probably lead to pressure to also add ‘ephebophilia’ (i.e. the biologically adaptive and normal male attraction to mid- to late-adolescents) to the DSM, and thereby effectively pathologize, medicalize and further stigmatize normal male sexuality.

[18] Of course, homosexual sex does have some dangers, such as STDs. However, the same is also true of heterosexual sex, although, for gay male sex, the risks are vastly elevated. Yet other perceived dangers result only from heterosexual sex (e.g. unwanted pregnancies, marriage). Meanwhile, the other negative life outcomes associated with homosexuality (e.g. elevated risk of depression and suicide) probably result from a homosexual orientation rather than from gay sex as such. Thus, a celibate gay man is, I suspect, just as likely, if not more likely, to suffer depression as is a highly promiscuous gay man.
Yet, while gay sex may be mostly harmless, the same cannot, of course, be said for child sexual abuse. It may indeed be true that the long-term psychological effects of child sexual abuse are exaggerated. This was, of course, the infamous conclusion of the Rind et al meta-analysis, which resulted in much moral panic in the late 1990s (Rind et al 1998). This is especially likely to be the case when the sexual activity in question is consensual and involves post-pubertal, sexually mature (but still legally underage) teenagers. However, in such cases the sexual activity in question should not really be defined as ‘child sexual abuse’ in the first place, since it neither involves immature children in the biological sense, nor is it necessarily abusive. Yet, it must be emphasized, even if child sexual abuse does not cause long-term psychological harm, it may still cause immediate harm, namely the distress experienced by the victim at the time of the abuse.

[19] Of course, one might argue that the relatives of the deceased may suffer as a result of the idea of their dead relatives’ bodies being violated by necrophiles. However, much the same is also true of homosexuality. So-called ‘homophobes’, for example, may dislike the idea of their adult homosexual sons having consensual homosexual sex. Indeed, they may even dislike the idea of unrelated adult strangers being allowed to have consensual homosexual sex. This was indeed presumably the reason why homosexuality was criminalized and prohibited in so many cultures across history in the first place, i.e. because other people were disgusted by the thought of it. However, we no longer regard this sort of puritanical disapproval of other people’s private lives as a sufficient reason to justify the criminalization of homosexual behaviour. Why then should it be a reason for criminalizing necrophilia?

[20] Other similar thought experiments involve the prohibitions on other sexual behaviours such as zoophilia and incest. In both these cases, however, the matter is morally more complex – in the case of zoophilia, on account of questions as to whether the animal participant suffers harm or can consent, and, in the case of incest, because of eugenic considerations, namely the higher rate of expression of deleterious mutations among the offspring of incestuous unions.

[21] Indeed, the courts, in both Britain and America, have been all too willing to invent bogus pseudo-psychiatric diagnoses in order to excuse women, in particular, of culpability for their crimes, especially murder. For example, in Britain, the Infanticide Acts of 1922 and 1938 provide a defence against murder for women who kill their helpless new-born infants where “at the time of the act… the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child”. In terms of biology, physiology and psychology, this is, of course, a nonsense, and no equivalent defence is available for fathers, though, in practice, the treatment of mothers guilty of infanticide is more lenient still (Wilczynski and Morris 1993).
Similarly, in both Britain and America, women guilty of killing their husbands, often while the latter were asleep or otherwise incapacitated, have been able to avoid a murder conviction by claiming to have been suffering from so-called ‘battered woman syndrome’. There is, of course, no equivalent defence for men, despite the consistent finding that men are somewhat more likely to be the victims of violence from their female intimate partners than women are to be victimized by their male intimate partners (Fiebert 2014). This may partly explain why men who kill their wives receive, on average, sentences three times as long as women who kill their husbands (Langan & Dawson 1995).

[22] Of course, another possibility might be some form of hormone therapy to reduce the offender’s testosterone. Also, it must be acknowledged that this discussion is hypothetical. Whether testosterone is indeed correlated with criminal or violent behaviour is actually the subject of some dispute. Thus, Allan Mazur, a leading researcher in this area, argues that testosterone is not associated with aggression or violence as such, but rather only with dominance behaviours, which can also be manifested in non-violent ways. For example, a high-powered business tycoon is likely to be high in social dominance behaviours, but relatively unlikely to commit violent crimes. On the other hand, a prisoner, being of low status, may be able to exercise dominance only through violence. I am therefore giving the example of high testosterone only as a simplified hypothetical thought experiment.

[23] Of course, one finding of science, namely quantum indeterminism, complicates this assumption. Ironically, while determinism is the underlying premise of all scientific enquiry, nevertheless one finding of such enquiry is that, at the most fundamental level, determinism does not hold.

[24] Nevertheless, I am persuaded that there may be some value in the concept of free will, after all. Although it is a nonsense, it may, like some forms of religious belief, nevertheless be a useful nonsense, at least in some circumstances.
Thus, if a person is told that there is no free will, and that their behaviours are inevitable, this may encourage a certain fatalism and the belief that people cannot change their behaviours for the better. In fact, this is a fallacy. Actually, determinism does not suggest that people cannot change their behaviours. It merely concludes that whether people do indeed change their behaviours is itself determined. However, this philosophical distinction may be beyond many people’s understanding.
Thus, if people are led to believe that they cannot alter their own behaviour, then this may become something of a self-fulfilling prophecy, and thereby prevent self-improvement.
Therefore, just as religious beliefs may be untrue, but nevertheless serve a useful function in giving people a reason to live and to behave prosocially and for the benefit of society as a whole, so it may be beneficial to inculcate and encourage a belief in free will in order to encourage self-improvement, including among the mentally ill.

References

Bailey et al (1999) A Family History Study of Male Sexual Orientation Using Three Independent Samples, Behavior Genetics 29(2): 79–86.
Camperio-Ciani et al (2004) Evidence for maternally inherited factors favouring male homosexuality and promoting female fecundity, Proceedings of the Royal Society B: Biological Sciences 271(1554): 2217–2221.
Fiebert (2014) References Examining Assaults by Women on Their Spouses or Male Partners: An Updated Annotated Bibliography, Sexuality & Culture 18(2): 405–467.
Hamer et al (1993) A linkage between DNA markers on the X chromosome and male sexual orientation, Science 261(5119): 321–327.
Hamer (1999) Genetics and Male Sexual Orientation, Science 285(5429): 803.
Langan & Dawson (1995) Spouse Murder Defendants in Large Urban Counties, U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Statistics: Executive Summary (NCJ-156831), September 1995.
Mayer & McHugh (2016) Sexuality and Gender: Findings from the Biological, Psychological, and Social Sciences, New Atlantis 50: Fall 2016.
McKnight & Malcolm (2000) Is male homosexuality maternally linked? Psychology, Evolution & Gender 2(3): 229–252.
Mealey (1995) The sociobiology of sociopathy: An integrated evolutionary model, Behavioral and Brain Sciences 18(3): 523–599.
Rind et al (1998) A Meta-Analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples, Psychological Bulletin 124(1): 22–53.
Risch et al (1993) Male Sexual Orientation and Genetic Evidence, Science 262(5142): 2063–2065.
Szasz (1960) The Myth of Mental Illness, American Psychologist 15: 113–118.
Wilczynski & Morris (1993) Parents Who Kill Their Children, Criminal Law Review, 31–36.

Hitler, Hicks, Nietzsche and Nazism

Nietzsche and the Nazis: A Personal View by Stephen Hicks (Ockham’s Razor Publishing 2010) 

Scholarly (and not so scholarly) interpretations of Nietzsche always remind me of biblical interpretation.

In both cases, the interpretations always seem to say at least as much about the philosophy, worldview and politics of the person doing the interpretation as they do about the content of the work ostensibly being interpreted. 

Thus, just as Christians can, depending on preference, choose between, say, Exodus 21:23–25 (an eye for an eye) or Matthew 5:39 (turn the other cheek), so authors of diametrically opposed political and philosophical worldviews can, it seems, always find some passage buried somewhere deep within Nietzsche’s corpus of writing that seems, at least when quoted in isolation, to agree with their own views. 

Thus, in HL Mencken’s The Philosophy of Friedrich Nietzsche, Nietzsche is portrayed as an aristocratic elitist, opposed to Christianity, Christian ethics and egalitarianism, but also as a scientific materialist – much like… well, HL Mencken himself.

Yet, among leftist postmodernists, Nietzsche’s moral philosophy is largely ignored, and he is cited instead as an opponent of scientific materialism who rejects the very concept of objective truth, including scientific truth – just like the postmodernists.

Similarly, just as German National Socialists in the 1930s selectively quoted passages from Nietzsche that appear highly critical of Jews, so contemporary Nietzscheans, keen to absolve their idol of any association with Nazism, cite other passages where he seemingly professes great admiration for Jewish people, and others where he is undoubtedly highly critical of both Germans and anti-Semites.

There are indeed passages in Nietzsche’s work that, at least when quoted in isolation, can be interpreted as supporting any of these mutually contradictory perspectives. Yet, in each of these divergent and selective readings, many elements of Nietzsche’s philosophy are downplayed or conveniently omitted altogether.

In his short book Nietzsche and the Nazis, professor of philosophy Stephen Hicks discusses the association between the thought of Friedrich Nietzsche and the most controversial of the many twentieth-century movements, political and philosophical, to claim Nietzsche as their philosophical and intellectual precursor – namely, the National Socialist movement and regime in early- to mid-twentieth-century Germany.

Since he is a professor of philosophy rather than a historian, it is perhaps unsurprising that Hicks demonstrates a rather better understanding of the philosophy of Nietzsche than he does of the ideology of Hitler and the German National Socialist movement. 

Thus, if the Nazis stand accused of misinterpreting, misappropriating or misrepresenting the philosophy of Nietzsche, then Hicks can claim to have outdone even them – for he has managed to misrepresent, not only the philosophy of Nietzsche, but also that of the Nazis as well. 

Philosophy as a Driving Force in History 

Hicks begins his book by making a powerful case for the importance of philosophy as both a major force in history and a factor in the rise of German National Socialism in particular. 

Thus, he argues: 

“The primary cause of Nazism lies in philosophy… The legacy of World War I, persistent economic troubles, modern communication technologies, and the personal psychologies of the Nazi leadership did play a role. But the most significant factor was the power of a set of abstract, philosophical ideas. National Socialism was a philosophy-intensive movement” (p10-1).

This claim – namely, that “National Socialism was a philosophy-intensive movement” – may seem an odd one, especially since German National Socialism is usually regarded, not entirely unjustifiably, as a profoundly anti-intellectual movement.

Moreover, to achieve any degree of success and longevity, all political movements, and political regimes, must inevitably make ideological compromises in the face of practical necessity, such that their actual policies are dictated at least as much by pragmatic considerations of circumstance, opportunity and realpolitik as by pure ideological dictate.[1]

Yet, up to a point, Hicks is right. 

Indeed, Hitler even saw himself as being, in some sense, a philosopher in his own right – something akin to Plato’s notion of a philosopher king or, in Yvonne Sherratt’s turn of phrase, a Philosopher Führer.

Thus, historian Ian Kershaw, in his celebrated biography of the German Führer, Hitler, 1889-1936: Hubris, observes:

“In Mein Kampf, Hitler pictured himself as a rare genius who combined the qualities of the ‘programmatist’ and the ‘politician’. The ‘programmatist’ of a movement was the theoretician who did not concern himself with practical realities, but with ‘eternal truth’, as the great religious leaders had done. The ‘greatness’ of the ‘politician’ lay in the successful practical implementation of the ‘idea’ advanced by the ‘programmatist’. ‘Over long periods of humanity,’ he wrote, ‘it can once happen that the politician is wedded to the programmatist.’ His work did not concern short-term demands that any petty bourgeois could grasp, but looked to the future, with ‘aims which only the fewest grasp’… Seldom was it the case, in his view, that ‘a great theoretician’ was also ‘a great leader’… He concluded: ‘the combination of theoretician, organizer, and leader in one person is the rarest thing that can be found on this earth; this combination makes the great man.’ Unmistakably, Hitler meant himself” (Hitler, 1889-1936: Hubris: p251–2). 

Moreover, philosophical ideas have undoubtedly had a major impact on history in other times and places. 

For example, the French revolution, American revolution and Bolshevik Revolution may have been triggered and made possible by social and economic conditions then prevailing – but the regimes established in their aftermath were, at least in theory, based on the ideas of philosophers and political theorists.  

Thus, if the French revolution was modelled on the ideas of thinkers such as Locke, Rousseau and Voltaire, the American revolution on those of Locke, Montesquieu, Benjamin Franklin, Thomas Jefferson and Thomas Paine, and the Bolshevik Revolution on those of Marx, Lenin and Trotsky, who then were the key thinkers, if any, behind the National Socialist movement in Germany?

Hicks, for his part, tentatively ventures several prospective candidates: 

“Georg Hegel, Johann Fichte, even elements from Karl Marx” (p49).[2]

In an earlier chapter, as part of his attempt to argue against the notion that German National Socialism had no intellectual credibility, he also mentions several contemporaneous thinkers who, he claims, “supported the Nazis long before they came to power” and who could perhaps themselves be considered intellectual forerunners of National Socialism, including Oswald Spengler, Martin Heidegger, and the legal theorist Carl Schmitt (p9).[3]

Besides Hitler himself and Rosenberg, each of whom considered himself, howsoever deludedly, a serious philosophical thinker in his own right, other candidates who might merit honourable (or perhaps dishonourable) mention in this context include Hitler’s own early mentor Dietrich Eckart, the racial theorists Arthur de Gobineau and Houston Stewart Chamberlain, the American Madison Grant, biologist Ernst Haeckel, geopolitical theorist Karl Haushofer, and, of course, the composer Richard Wagner – though most of these are not, of course, philosophers in the narrow sense.

Yet, at least according to Hicks, the best known and most controversial name atop any such list is almost inevitably going to be Friedrich Nietzsche (p49). 

Nietzsche’s Philosophy 

Although the association of Nietzsche with the Nazis continues to loom large in the popular imagination, mainstream Nietzsche scholarship in the years since World War II, especially the work of the influential Jewish philosopher and poet Walter Kaufmann, has done much to rehabilitate the reputation of Nietzsche in academic circles, sanitize his philosophy and absolve him of any association with, let alone responsibility for, Fascism or National Socialism.

Hicks’s own treatment is rather more balanced.

Before directly comparing and contrasting the various commonalities and differences between Nietzsche’s philosophy and that of the National Socialist movement and regime, Hicks devotes one chapter to discussing the political philosophy and ideology of the Nazis, another to discussing their policies once in power, and a third to discussion of Nietzsche’s own philosophy, especially his views on morality and religion.

As I have already mentioned, although Nietzsche’s philosophy is the subject of many divergent interpretations, Hicks, in my view, mostly gets Nietzsche’s philosophy right. There are, however, a few minor points upon which he and I differ that I will address here.

Some are relatively trivial, perhaps even purely semantic. For example, Hicks equates Nietzsche’s Übermensch with Zarathustra himself, writing:

“Nietzsche gives a name to his anticipated overman: He calls him Zarathustra, and he names his greatest literary and philosophical work in his honor” (p74).

Actually, as I understood Nietzsche’s Thus Spake Zarathustra (which is to say, not very well at all, since it is a notoriously incomprehensible work, and, in my view, far from Nietzsche’s “greatest literary and philosophical work”, as Hicks describes it), Nietzsche envisaged his fictional Zarathustra, not as himself the Übermensch, but rather only as his herald and prophet.

Indeed, to my recollection, not only does Zarathustra never claim to embody the Übermensch himself, but he also repeatedly asserts that the most that contemporary man, Zarathustra himself presumably included, can ever aspire to be is a ‘bridge’ to the Übermensch, rather than an Übermensch himself.

A perhaps more substantial problem relates to Hicks’s understanding of Nietzsche’s contrasting ‘master’ and ‘slave’ moralities. Hicks associates the former with various traits, including:

“Pride, Self-esteem; Wealth; Ambition, boldness; Vengeance; Justice… Pleasure, Sensuality… Indulgence” (p60).

Most of these traits are indeed unproblematically associated with Nietzsche’s ‘master morality’, but a few require further elaboration.

For example, it may be true that Nietzsche’s ‘master morality’ is associated with the idea of “vengeance” as a virtue. However, associating the related, but distinct, concept of “justice” exclusively with Nietzsche’s ‘master morality’, as Hicks does (p60; p62), strikes me as altogether more questionable.

After all, the ‘slave morality’ of Christianity also concerns itself a great deal with “justice”. It just has a different conception of what constitutes justice, and also sometimes defers the achievement of “justice” to the afterlife, or to the Last Judgement and coming Kingdom of God (or, in pseudo-secular modern leftist versions, the coming communist utopia). 

Similarly problematic is Hicks’s characterization of Nietzsche’s ‘master morality’ as championing “indulgence”, as well as “pleasure [and] sensuality”, over “self-restraint” (p62; p60). 

This strikes me as, at best, an oversimplification of Nietzsche’s philosophy.

On the one hand, it is true that Nietzsche disparages and associates with ‘slave morality’ what Hume termed ‘the monkish virtues’, namely ideals of self-denial and asceticism. He sees them as both a sign of weakness and a denial of life itself, writing in Twilight of the Idols:

“To attack the passions at their roots, means attacking life itself at its source: the method of the Church is hostile to life… The same means, castration and extirpation, are instinctively chosen for waging war against a passion, by those who are too weak of will, too degenerate, to impose some sort of moderation upon it” (Twilight of the Idols: iv, 2).

“The saint in whom God is well pleased, is the ideal eunuch. Life terminates where the ‘Kingdom of God’ begins” (Twilight of the Idols: ii, 4).

Yet it is clear that Nietzsche does not advocate complete surrender to indulgence, pleasure and sensuality either. 

Thus, in the first of the two passages quoted above, he envisages the strong as also imposing “some sort of moderation” without the need for complete abstinence. 

Indeed, in The Antichrist, Nietzsche goes further still, extolling: 

“The most intelligent men, like the strongest [who] find their happiness where others would find only disaster: in the labyrinth, in being hard with themselves and with others, in effort; their delight is in self-mastery; in them asceticism becomes second nature, a necessity, an instinct” (The Antichrist: 57).

Indeed, advocating complete and unrestrained surrender to indulgence, sensuality and pleasure is an obviously self-defeating philosophy. If someone really completely surrendered himself to indulgence, he would presumably do nothing all day except masturbate, shoot up heroin and eat cake. He would therefore achieve nothing of value.

Thus, throughout his corpus of writing, Nietzsche repeatedly champions what he calls ‘self-overcoming’, which, though it goes well beyond this, clearly entails self-control.

In short, to be effectively put into practice, the Nietzschean Will to Power necessarily requires willpower.

Individualism vs Collectivism (and Authoritarianism) 

Another matter upon which Hicks arguably misreads Nietzsche is the question of the extent to which Nietzsche’s philosophy is to be regarded as individualist or collectivist in ethos and orientation.

This topic is, Hicks acknowledges, a controversial one upon which Nietzsche scholars disagree. It is, however, a topic of direct relevance to the relationship between Nietzsche’s philosophy and the ideology of the Nazis, since the Nazis themselves were indisputably extremely collectivist in ethos, the collective to which they subordinated all other concerns, including individual rights and wants, being that of the nation, Volk or race.

Hicks himself concludes that Nietzsche was much more of a collectivist than an individualist:

“[Although] Nietzsche has a reputation for being an individualist [and] there certainly are individualist elements in Nietzsche’s philosophy… in my judgment his reputation for individualism is often much overstated” (p87).

Yet, elsewhere, Hicks comes close to contradicting himself, for, among the qualities that he associates with Nietzsche’s ‘master morality’, which Nietzsche himself clearly favours over the ‘slave morality’ of Christianity, are “Independence”, “Autonomy” and indeed “Individualism” (p60; p62). Yet these are all clearly individualist virtues.[4]

In reaching his conclusion that Nietzsche is primarily to be considered a collectivist rather than a true individualist, Hicks distinguishes three separate questions and, in the process, three different forms of individualism, namely: 

  1. “Do individuals shape their own identities—or are their identities created by forces beyond their control?”;
  2. “Are individuals ends in themselves, with their own lives and purposes to pursue—or do individuals exist for the sake of something beyond themselves to which they are expected to subordinate their interests?”; and
  3. “Do the decisive events in human life and history occur because individuals, generally exceptional individuals, make them happen—or are the decisive events of history a matter of collective action or larger forces at work?” (p88).

With regard to the first of these questions, Nietzsche, according to Hicks, denies that men are masters of their own fate. Instead, Hicks contends that Nietzsche believes: 

“Individuals are a product of their biological heritage” (p88).

This may be correct, and certainly there is much in Nietzsche’s writing to support this conclusion.

Thus, for example, in Twilight of the Idols Nietzsche declares:

“The individual… is nothing in himself, no atom, no ‘link in the chain,’ no mere heritage from the past,—he represents the whole direct line of mankind up to his own life” (Twilight of the Idols: viii: 33).

And in Beyond Good and Evil, he pronounces:

“It is quite impossible for a man not to have the qualities and predilections of his parents and ancestors in his constitution, whatever appearances may suggest to the contrary. This is the problem of race” (Beyond Good and Evil: 264).

This, of course, reflects a crude and distinctly pre-Mendelian understanding of human heredity, but a recognisably hereditarian understanding of human psychology and behaviour nonetheless.

However, even if human behaviour, and human decisions, are indeed a product of heredity, this does not, strictly speaking, deny that individuals are nevertheless the authors of their own destiny. It merely asserts that the way in which we shape our own destiny is itself a product of our heredity.

In other words, our actions and decisions may indeed be predetermined by biological and hereditary factors, but they are still our decisions, simply because we ourselves are a product of these same biological forces.

However, it is not at all clear that Nietzsche believes that all men determine their own fate. Rather, the great mass of mankind, whom he dismisses as ‘herd animals’, are, for Nietzsche, quite incapable of true individualism of this kind, and it is only men of a superior type who are truly free, membership of this superior caste itself being largely determined by heredity.

Indeed, for Nietzsche, the superior type of man determines not only his own fate, but also often that of the society in which he lives and of mankind as a whole. 

This leads to the third of Hicks’s three types of individualism, namely the question of whether the “decisive events in human life and history occur because individuals, generally exceptional individuals, make them happen”, or whether they are the consequence of factors outside of individual control such as economic factors, or perhaps the unfolding of some divine plan. 

On this topic, I suspect Nietzsche would side with Thomas Carlyle, and perhaps Hegel, in holding that history is indeed shaped, in large part, by the actions of so-called ‘great men’ or, in Hegelian terms, ‘world-historical figures’. This is among the reasons he places such importance on the emerging Übermensch.

Admittedly, Nietzsche repeatedly disparages Carlyle in many of his writings, and, in Ecce Homo, repudiates any notion of equating his Übermensch with what he dismisses as Carlyle’s “hero cult” (Ecce Homo: iii, 1).

However, as Will Durant writes in The Story of Philosophy, Nietzsche often reserved his greatest scorn for those contemporaries, or near-contemporaries (e.g. the Darwinians and Social Darwinists), who had independently developed ideas that, in some respects, paralleled or anticipated his own, if only as a means of emphasizing his own originality and claim to priority, or, as Durant puts it, of “covering up his debts” (The Story of Philosophy: p373).

Indeed, we might even see in this tendency of Nietzsche to disparage those whose ideas had anticipated his own a form of what he himself would characterize as ‘ressentiment’.

Hitler, of course, would also surely have agreed with Carlyle regarding the importance of great men, and indeed saw himself as just such a ‘world historical figure’.

Indeed, for better or worse, given Hitler’s gargantuan impact on world history from his coming to power in Germany in the 1930s arguably right up to the present day, we might even find ourselves reluctantly forced to agree with him.[5]

As Isaiah Berlin is said to have first observed, the much-maligned ‘Great Man Theory of History’, as famously espoused by Thomas Carlyle, became perennially unfashionable among historians at about the same time that, in the persons of first Lenin and later Hitler, it was proven so terribly and tragically true.

Thus, just as the October revolution would surely never have occurred without Lenin as its driving force and instigator, so the Nazis, though they may have existed, would surely never have come to power, let alone achieved the early diplomatic and military successes that briefly conferred upon them mastery over Europe, without Hitler as Führer and chief political and military tactician.

Yet, for Nietzsche, individual freedom is restricted, or at least should be restricted, only to such ‘great men’, or at least to a wider, but still narrow, class of superior types, and not extended at all to the great mass of humanity.

Thus, I believe that we can reconcile Nietzsche’s apparently conflicting statements regarding the merits of, on the one hand, individualism, and, on the other, collectivism, by recognizing that he endorsed individualism only for a small elite cadre of superior men. 

Indeed, for Nietzsche, the vast majority of mankind, namely those whom he disparages as ‘herd animals’, are simply incapable of such individualism and should hence be subject to strict authoritarian control in the service of the superior caste of man. They are certainly not ‘ends in themselves’, as contended by Kant.

Indeed, Nietzsche’s prescription for the majority of mankind is not so much collectivist, as it is authoritarian, since Nietzsche regards the lives of such people, even as a collective, as essentially worthless. 

The mass of men must be controlled and denied freedom, not for the benefit of such men themselves even as a collective, but rather for the benefit of the superior type of man.[6]

Yet if the authoritarianism to be imposed upon the mass of mankind ultimately serves the individualism of the superior type of man, so the individualism of this superior type of man itself also serves a higher purpose, namely the higher evolution of mankind, which, in Nietzsche’s view, necessarily depends on the superior type of man.

Therefore, Hicks himself concludes that, rather than the lives of the mass of mankind serving the interests of the higher man, even the individualism accorded the higher type of man, and even that accorded the Übermensch himself, ultimately serves the interest of the collective – namely, the human species as a whole.

Thus, in Beyond Good and Evil, Nietzsche ridicules individualism as a moral law, proclaiming, “What does nature care for the individual!”, and insisting instead:

“The moral imperative of nature [does not] address itself to the individual… but to nations, races, ages, and ranks; above all, however, to the animal ‘man’ generally, to mankind” (Beyond Good and Evil: v, 188).

National Socialist Ideology 

As I have already said, however, Hicks’s understanding of Nietzsche’s philosophy is rather better than his understanding of the ideology of German National Socialism. 

This is not altogether surprising. Hicks is, after all, a professor of philosophy by background, not a historian.

Hicks’s lack of training in historical research is especially apparent in his handling of sources, which leaves a great deal to be desired.

For example, several quotations attributed to Hitler by Hicks are sourced, in their associated footnotes, to one of two works – namely, The Voice of Destruction (aka Hitler Speaks) by Hermann Rauschning and Unmasked: Two Confidential Interviews with Hitler in 1931 – that are both now widely considered by historians to have been fraudulent, and to contain no authentic or reliable quotations from the Führer whatsoever.[7]

Other quotations are sourced to secondary sources, such as websites and biographies of Hitler, which makes it difficult to determine both the primary source from which the quotation is drawn, and in what context and to whom the remark was originally said or written.

This is an especially important point, not only because some sources (e.g. Rauschning) are very untrustworthy, but also because Hitler often carefully tailored his message to the specific audience he was addressing, and was certainly not above concealing or misrepresenting his real views and long-term objectives, especially when addressing the general public, specific interest groups, foreign statesmen and political rivals.

Perhaps for this reason, Hicks seemingly misunderstands the true nature of National Socialist ideology, and of Hitler’s own Weltanschauung in particular, especially in relation to his views on Christianity (see below).

However, in Hicks’s defence, the core tenets of Nazism are almost as difficult to pin down as those of Nietzsche.

Unlike in the case of Nietzsche, this is not so much because of either the inherent complexity of the ideas, or the impenetrability of its presentation – though admittedly, while Nazi propaganda, and Hitler’s speeches, tend to be very straightforward, even crude, Hitler’s Mein Kampf and Rosenberg’s The Myth of the Twentieth Century both make for a difficult read. 

Rather the problem is that German National Socialist thinking, or what passed for thinking among National Socialists, never really constituted a coherent ideology in the first place. 

After all, like any political party that achieves even a modicum of electoral success, let alone actually seriously aspires to win power, the Nazis necessarily represented a broad church.  

Members and supporters included people of many divergent and mutually contradictory opinions on various political, economic and social matters, not to mention ethical, philosophical and religious views and affiliations. 

If they had not done so, then the Party could never have attracted enough votes in order to win power in the first place. 

Indeed, the NSDAP was especially successful in presenting itself as ‘all things to all people’ and in adapting its message to whatever audience was being addressed at a given time. 

Therefore, it is quite difficult to pin down what exactly were the core tenets of German National Socialism, if indeed it had any.

However, we can simplify our task somewhat by restricting ourselves to an altogether simpler question: namely what were the key tenets of Hitler’s own political philosophy? 

After all, one key tenet of German National Socialism that can surely be agreed upon is the so-called ‘Führerprinzip’, whereby Hitler himself was to be the ultimate authority for all political decisions and policy.

Therefore, rather than concerning ourselves with the political and philosophical views of the entire Nazi leadership, let alone the whole party or everyone who voted for them, we can instead restrict ourselves to a much simpler task – namely, determining the views of a single individual, namely the infamous Führer himself. 

This, of course, makes our task substantially easier.

However, we now encounter yet another problem: namely, it is often quite difficult to determine what Hitler’s real views actually were. 

Thus, as I have already noted, like all the best politicians, Hitler tailored and adapted his message to the audience that he was addressing at any given time. 

Thus, for example, when he delivered speeches before assembled business leaders and industrialists, his message was quite different from the one he would deliver before audiences composed predominantly of working-class socialists, and his message to foreign dignitaries, statesmen and the international community was quite different to the hawkish and militaristic one presented in Mein Kampf, to his leading generals and before audiences of fanatical German nationalists.

In short, like all successful politicians, Hitler was an adept liar, and what he said in public and actually believed in private were often two very different things. 

Public and Private Positions on the Church

Perhaps the area of greatest contrast between Hitler’s public pronouncements and his private views, as well as Hicks’s own most egregious misunderstanding of Nazi ideology, concerns religion. 

According to Hicks, Hitler and the Nazis were believing Christians. Thus, he reports: 

“[Hitler] himself sounded Christian themes explicitly in public pronouncements” (p84). 

However, the key words here are “in public pronouncements”. Hitler’s real views, as expressed in private conversations among confidants, seem to have been very different.

As discussed in greater depth below, in private Hitler denounced Christianity as, among other things, “the heaviest blow that ever struck humanity”, and he is described by Goebbels in the latter’s diary as “completely anti-Christian” (Table Talk: p7; The Goebbels diaries, 1939-1941: p77).

Yet Hitler was well aware that publicly attacking Christianity would prove an unpopular stance with large sections of the public in what was then still a predominantly Christian country, and would not only alienate much of his erstwhile support but also provoke opposition from powerful figures in the churches whom he could ill afford to alienate. 

He therefore postponed his eagerly envisaged Kirchenkampf, or settling of accounts with the churches, until after the war, if only because he wished to avoid fighting a war on multiple fronts.

Thus, Speer, in his post-war memoirs, noting that “in Berlin, surrounded by male cohorts, [Hitler] spoke more coarsely and bluntly than he ever did elsewhere”, quotes Hitler as declaring in such company: 

“Once I have settled my other problems… I’ll have my reckoning with the church. I’ll have it reeling on the ropes” (Inside the Third Reich: p123).

Neither were such sentiments restricted to the Führer himself. On the contrary, many leading figures in the Nazi regime, such as Goebbels, Bormann and Rosenberg, were also known to be anti-Christian in their sentiments, while one, Himmler, even went so far as to flirt with, and incorporate into SS ideology and rituals, an eccentric form of Germanic neopaganism.

Yet, as with his own pronouncements, Hitler ordered other leading figures within his government, wherever possible, to keep their opposition to Christianity out of the public domain. Thus, Speer recalls in his memoirs that, despite his own opposition to Christianity, and the ongoing conflict between church and Party:

“He [Hitler] nevertheless ordered his chief associates, above all Goering and Goebbels, to remain members of the church” (Inside the Third Reich: p95-6).

This claim, and its cynical motivation, is corroborated by an entry from Goebbels’ diary, where the latter records:

“The Führer is a fierce opponent of all that humbug [i.e. Christianity], but he forbids me to leave the church. For tactical reasons” (The Goebbels diaries, 1939-1941: p340).

Christianity and Judaism

In stark contrast to Nietzsche, who saw Christianity as kindred to, and an outgrowth of, Judaism, Hicks also asserts that: 

“The Nazis took great pains to distinguish the Jews and the Christians, condemning Judaism and embracing a generic type of Christianity” (p83).

In fact, the form of Christianity that was, at least in public, espoused by the Nazis, namely what they called ‘Positive Christianity’, was far from “a generic type of Christianity”, but rather a very idiosyncratic, indeed quite heretical, take on the Christian faith, which attempted to divest Christianity of its Jewish influences and portray Jesus as an ‘Aryan’ hero fighting against Jewish power, while even incorporating elements of Gnosticism and Germanic paganism.

Moreover, far from attempting to deny the connection between Christianity and Judaism, there is some evidence that Hitler actually followed Nietzsche, if not directly drew upon his writing, in linking Christianity to Jewish influence.

Thus, in his diary, Goebbels quotes Hitler directly linking Christianity and Judaism:  

“[Hitler] views Christianity as a symptom of decay. Rightly so. It is a branch of the Jewish race. This can be seen in the similarity of religious rites. Both (Judaism and Christianity) have no point of contact to the animal element” (The Goebbels Diaries, 1939-1941: p77). 

Likewise, in his Table Talk, carefully recorded by Bormann and others, Hitler declares on the night of the 11th July: 

“The heaviest blow that ever struck humanity was the coming of Christianity. Bolshevism is Christianity’s illegitimate child. Both are inventions of the Jew” (Table Talk: p7).

Here, in linking Christianity and Judaism, and attributing Jewish origins to Christianity, Hitler is, of course, following Nietzsche, since a central theme of the latter’s The Antichrist is that Christianity is indeed very much a Jewish invention. 

Indeed, the whole thrust of this quotation will immediately be familiar to anyone who has read Nietzsche’s The Antichrist. Thus, just as Hitler describes Christianity as “the heaviest blow that ever struck humanity”, so Nietzsche himself declared: 

“Christianity remains to this day the greatest misfortune of humanity” (The Antichrist: 51).

Similarly, just as Hitler describes “Bolshevism” as “Christianity’s illegitimate child”, so Nietzsche anticipates him in detecting this familial resemblance, the latter declaring in The Antichrist:

“The anarchist and the Christian have the same ancestry” (The Antichrist: 57).

Thus, in this quoted passage, Hitler aptly summarizes the central themes of The Antichrist in a single paragraph, the only difference being that, in Hitler’s rendering, the implicit anti-Semitic subtext of Nietzsche’s work is made explicit.

Elsewhere in Table Talk, Hitler echoes other distinctly Nietzschean themes with regard to Christianity.  

Thus, just as Nietzsche famously condemned Christianity as an expression of slave morality and ‘ressentiment’ with its origins among the Jewish priestly class, so Hitler declares:

“Christianity is a prototype of Bolshevism: the mobilisation by the Jew of the masses of slaves with the object of undermining society” (Table Talk: p75-6).

This theme is again quintessentially Nietzschean.

Another common theme is the notion of Christianity as rejection of life itself. Thus, in a passage that I have already quoted above, Nietzsche declares: 

“To attack the passions at their roots, means attacking life itself at its source: the method of the Church is hostile to life… The saint in whom God is well pleased, is the ideal eunuch. Life terminates where the ‘Kingdom of God’ begins” (Twilight of the Idols: iv, 1).

Hitler echoes a similar theme, himself declaring in one passage where he elucidates a social Darwinist ethic:

“Christianity is a rebellion against natural law, a protest against nature. Taken to its logical extreme, Christianity would mean the systematic cultivation of the human failure” (Table Talk: p51).

In short, in his various condemnations of Christianity from Table Talk, Hitler is clearly drawing on his own reading of Nietzsche. Indeed, in some of the passages quoted above (e.g. Table Talk: p7; p75-6), he could almost be accused of plagiarism.

Indeed, the influence of Nietzsche on Hitler’s worldview as evidenced in his conversation was noticed much earlier by Ernst Hanfstaengl, a German-American former intimate of the Führer, who remarked how, over time:

The Nietzschian [sic] catch-phrases began to appear more frequently – Wille zur Macht, Herrenvolk, Sklavenmoral – the fight for the heroic life, against formal dead-weight education, Christian philosophy and ethics based on compassion” (Hitler: The Missing Years: p206-7).

Historians like to belittle the idea that Hitler was at all erudite or well-read, suggesting that, although famously an avid reader, his reading material was likely largely limited to such material as Streicher’s Der Stürmer and the similarly crude antisemitic pamphlets circulating in the dosshouses of pre-war Vienna.

Hicks rightly rejects this view. From these quotations from Hitler’s Table Talk alone, I would submit that it is clear that Hitler had read his Nietzsche.

Thus, although, as we will see, Nietzsche was certainly no Nazi or proto-National Socialist, nevertheless Hitler himself may indeed have regarded himself, in his own distorted way, as in some sense a true ‘Nietzschean’.[8]

National Socialism and Socialism 

Another area where Hicks misinterprets Nazi ideology, upon which many other reviewers have rather predictably fixated, is the vexed and perennial question of the extent to which the National Socialist regime, which, of course, in name at least, purported to be socialist, can indeed accurately be described as such. 

Mainstream historians generally reject the view that the Nazis were in any sense truly socialist.

Partly this rejection of the notion that the Nazis were at all socialist may reflect the fact that many of the historians writing about this period of history are themselves socialist, or at least sympathetic to socialism, and hence wish to absolve socialism of any association with, let alone responsibility for, National Socialism.[9]

Hicks, who, for his part, seems to be something of a libertarian (perhaps even a Randian) as far as I can make out, reaches a very different conclusion: namely, that the National Socialists were indeed socialists and that socialism was in fact a central plank of their political programme.

Thus, Hicks asserts: 

“The Nazis stood for socialism and the principle of the central direction of the economy for the common good” (p106).

Certainly, Hicks is correct that the Nazis stood for “the central direction of the economy”, albeit not so much “for the common good” of humanity, nor even of all German citizens, as “for the common good” only of ethnic Germans, with this “common good” being defined in Hitler’s own idiosyncratic terms and involving many of these ethnic Germans dying in his pointless wars of conquest.

Thus, Hayek, who equates socialism with big government and a planned economy, argues in The Road to Serfdom that the Nazis, and the Fascists of Italy, were indeed socialist.

However, I would argue that socialism, as the word is used today, is best defined as entailing, not only the central direction of the economy, but also economic redistribution and the promotion of socio-economic equality.[10]

Yet, in Nazi Germany, the central direction of the economy was primarily geared, not towards promoting socioeconomic equality, but rather towards preparing the nation and economy for war, in addition to various useful and not so useful public works projects and vanity architectural projects.[11]

To prove the Nazis were socialist, Hicks relies extensively on the party’s 25-point programme.

Yet this document was issued in 1920, when Hitler had yet to establish full control over the nascent movement, and still reflected the socialist ethos of many of the party’s founders, whom Hitler was later to largely displace. 

Thus, German National Socialism, like Italian Fascism, did indeed very much begin on the left, attempting to combine socialism with nationalism, and thereby provide an alternative to the internationalist ethos of orthodox Marxism.  

However, long before either movement had ever even come within distant sight of power, each had already toned down, if not abandoned, much of its earlier socialist rhetoric.

Certainly, although he declared the party programme inviolable and immutable and blocked any attempt to amend or repudiate it, Hitler took few, if any, steps to actually implement most of the socialist provisions in the 25-point programme.[12]

Hicks also reports: 

“So strong was the Nazi party’s commitment to socialism that in 1921 the party entered into negotiations to merge with another socialist party, the German Socialist Party” (p17).

Yet the party in question, the German Socialist Party, was, much like the NSDAP itself, as much nationalist in orientation and ideology as it was socialist. Moreover, although Hicks admits that “the negotiations fell through”, what he does not mention is that the deal was scuppered by Hitler himself, then not yet the movement’s leader but already the NSDAP’s most dynamic organizer and speaker, who specifically vetoed any notion of a merger, threatening to resign if he did not have his way, and thereby strengthening his own position in the party.

To further buttress his claim that the Nazis were indeed socialist, Hicks also quotes extensively from Joseph Goebbels, Hitler’s Minister for Propaganda (p18). 

Goebbels was indeed among the most powerful figures in the Nazi leadership besides Hitler himself, and the quotations attributed to him by Hicks do indeed suggest leftist, socialist sympathies.

However, Goebbels was, in this respect, something of an exception and outlier among the National Socialist leadership, since he had defected from the Strasserist wing of the Party, which was indeed genuinely socialist in ethos, but which was first marginalized, then suppressed, under Hitler’s leadership long before the Nazis had come to power, with most remaining sympathizers, Goebbels excepted, purged during the Night of the Long Knives.

Goebbels may have retained some socialist sympathies thereafter. However, despite his power and prominence in the Nazi regime, he does not seem to have had any great success at steering the regime towards socialist redistribution or other left-wing policies.

In short, while National Socialism may have begun on the left, by the time the Nazis attained power, and certainly while they were in power, their policies were not especially socialist, at least in the sense of being economically redistributive or egalitarian.

Nevertheless, it is indeed true that, with its centrally-planned economy and large government-funded public works projects, the National Socialist regime probably had more in common with the contemporary left, at least in a purely economic sense, than with the neoconservative, neoliberal free market ideology that has long been the dominant force in Anglo-American conservatism.

Thus, whether the Nazis were indeed ‘socialist’ ultimately depends on precisely how we define the word ‘socialist’.

Nazi Antisemitism 

Yet one aspect of National Socialist ideology was indeed, in my view, left-wing and socialist in origin – namely, their anti-Semitism.

Of course, anti-Semitism is usually associated with the political right, more especially the so-called ‘far right’. 

However, in my view, anti-Semitism is always fundamentally leftist in nature. 

Thus, Marxists claim that society is controlled by a conspiracy of wealthy capitalists who control the mass media and exploit and oppress everyone else. 

Nazis and anti-Semites, on the other hand, claim that society is controlled by a conspiracy of wealthy Jewish capitalists who control the mass media and exploit and oppress everyone else. 

The distinction between Nazism and Marxism is, then, a relatively narrow one.

Antisemites and Nazis believe that our capitalist oppressors are all, or mostly, Jewish. Marxists, on the other hand, take no stance on the matter either way and frankly prefer not to talk about it.

Indeed, columnist Rod Liddle even claims:

“Many psychoanalysts believe that the Left’s aversion to capitalism is simply a displaced loathing of Jews” (Liddle 2005).

Or, as a nineteenth century German political slogan more famously put it: 

“Antisemitism is the socialism of fools.”

Indeed, anti-Semites who blame all the problems of the world on the Jews always remind me of Marxists who blame all the problems of the world on capitalism and capitalists, feminists who blame their problems on men, and black people who blame all their personal problems on ‘the White Man’. 

Interestingly, Nietzsche himself recognized this same parallel, writing of what he calls “ressentiment”, an important concept in his philosophy, with connotations of repressed or sublimated envy and inferiority complex, that: 

“This plant blooms its prettiest at present among Anarchists and anti-Semites” (On the Genealogy of Morals: ii, 11).

In other words, Nietzsche seems to be recognizing that both socialism and anti-Semitism reflect what modern conservatives often term ‘the politics of envy’. 

Thus, in The Will to Power, Nietzsche observes: 

“The anti-Semites do not forgive the Jews for having both ‘intellect’ and ‘money’” (The Will to Power: iv, 864).

Nietzschean Anti-Semitism

Yet Jews themselves are, in Nietzsche’s thinking, by no means immune from the “ressentiment” that he also diagnoses in socialists and anti-Semites. On the contrary, they are, for Nietzsche, “that priestly nation of resentment par excellence” (On the Genealogy of Morals: i, 16).

Thus, in Nietzsche’s view, it was Jewish ressentiment vis-à-vis successive waves of conquerors – especially the Romans – that birthed Christianity, slave morality and the original transvaluation of values that he so deplores.

Thus, Nietzsche relates in Beyond Good and Evil that: 

“The Jews—a people ‘born for slavery,’ as Tacitus and the whole ancient world say of them; the chosen people among the nations, as they themselves say and believe—the Jews performed the miracle of the inversion of valuations, by means of which life on earth obtained a new and dangerous charm for a couple of millenniums. Their prophets fused into one the expressions ‘rich,’ ‘godless,’ ‘wicked,’ ‘violent,’ ‘sensual,’ and for the first time coined the word ‘world’ as a term of reproach. In this inversion of valuations (in which is also included the use of the word ‘poor’ as synonymous with ‘saint’ and ‘friend’) the significance of the Jewish people is to be found; it is with them that the slave-insurrection in morals commences” (Beyond Good and Evil: v, 195).[13]

Thus, in The Antichrist, Nietzsche talks of “the Christian” as “simply a Jew of the ‘reformed’ confession”, and “the Jew all over again—the threefold Jew” (The Antichrist: 44), concluding: 

“Christianity is to be understood only by examining the soil from which it sprung—it is not a reaction against Jewish instincts; it is their inevitable product” (The Antichrist: 24). 

All of this, it is clear from the tone and context, is not at all intended as a compliment – either to Jews or to Christians.

Thus, lest we have any doubts on this matter, Nietzsche declares in Twilight of the Idols:

“Christianity as sprung from Jewish roots and comprehensible only as grown upon this soil, represents the counter-movement against that morality of breeding, of race and of privilege:—it is essentially an anti-Aryan religion: Christianity is the transvaluation of all Aryan values, the triumph of Chandala values, the proclaimed gospel of the poor and of the low, the general insurrection of all the down-trodden, the wretched, the bungled and the botched, against the ‘race,’—the immortal revenge of the Chandala as the religion of love” (Twilight of the Idols: vi, 4). 

While modern apologists may selectively cite passages from Nietzsche in order to portray him as a philo-Semite and admirer of the Jewish people, it is clear that many of Nietzsche’s statements about Jews are, by modern politically correct standards, very politically incorrect, and it is doubtful that he would be able to get away with them today.

Thus, if Nietzsche rejected the anti-Semitism of his sister, brother-in-law and former idol, Wagner, he nevertheless constructed in its place a new anti-Semitism all of his own, which, far from blaming the Jews for the crucifixion of Christ, instead blamed them for the genesis of Christianity itself – a theme that is, as we have seen, directly echoed by Hitler in his Table Talk.

Thus, Nietzsche remarks in The Antichrist:

“[Jewish] influence has so falsified the reasoning of mankind in this matter that today the Christian can cherish anti-Semitism without realizing that it is no more than the final consequence of Judaism” (The Antichrist: 24). 

An even more interesting passage regarding the Jewish people appears just a paragraph later, where Nietzsche observes: 

“The Jews are the very opposite of décadents: they have simply been forced into appearing in that guise, and with a degree of skill approaching the non plus ultra of histrionic genius they have managed to put themselves at the head of all décadent movements (for example, the Christianity of Paul), and so make of them something stronger than any party… To the sort of men who reach out for power under Judaism and Christianity,—that is to say, to the priestly class—décadence is no more than a means to an end. Men of this sort have a vital interest in making mankind sick” (The Antichrist: 24). 

Here, Nietzsche echoes, or perhaps even originates, what is today a familiar theme in anti-Semitic discourse – namely, that Jews champion subversive and destructive ideologies (Marxism, feminism, multiculturalism, mass migration of unassimilable minorities) only to weaken the Gentile power structure and thereby enhance their own power.[14]

This idea finds its most sophisticated (though still flawed) contemporary exposition in the work of evolutionary psychologist and contemporary antisemite Kevin MacDonald, who, in his book The Culture of Critique (reviewed here), conceptualizes a range of twentieth-century intellectual movements, such as psychoanalysis, Boasian anthropology and immigration reform, as what he calls ‘group evolutionary strategies’ that function to promote the survival and success of the Jews in diaspora. 

Nietzsche, however, goes further and extends this idea to the genesis of Christianity itself. 

Thus, in Nietzsche’s view, Christianity, as an outgrowth of Judaism and an invention of Paul and the Jewish ‘priestly class’, is itself a part of what MacDonald would call a ‘Jewish group evolutionary strategy’, designed to undermine the goyish Roman civilization under whose yoke the Jews had been subjugated.

Indeed, here, Nietzsche becomes overtly conspiratorial in his theory of the genesis of the Christian faith, implying, rather implausibly, that even the ostensible opposition of Christianity to Judaism, and of Jews to Christianity, is all part of a malign Jewish plot to disguise the Jewish origins of the Christian faith and hence obscure its role as a fundamentally Jewish strategy. Thus, he writes in On the Genealogy of Morals:

“This Jesus of Nazareth… was he not really temptation in its most sinister and irresistible form, temptation to take the tortuous path to those very Jewish values and those very Jewish ideals? Has not Israel really obtained the final goal of its sublime revenge, by the tortuous paths of this ‘Redeemer,’ for all that he might pose as Israel’s adversary and Israel’s destroyer? Is it not due to the black magic of a really great policy of revenge, of a far-seeing, burrowing revenge, both acting and calculating with slowness, that Israel himself must repudiate before all the world the actual instrument of his own revenge and nail it to the cross, so that all the world—that is, all the enemies of Israel—could nibble without suspicion at this very bait?” (On the Genealogy of Morals: i, 8).

Nietzsche, a professed anti-Christian but an admirer of the ancient Greeks (or at least of some of them), and even more so of the Romans, would likely agree with Tertullian that Jerusalem has little to do with Athens – or indeed with Rome. However, Hicks observes: 

“As evidence of whether Rome or Judea is winning, [Nietzsche] invites us to consider to whom one kneels down before in Rome today” (p70). 

Thus, Nietzsche characterizes “Rome against Judæa, Judæa against Rome” as the symbol of “a dreadful, thousand-year fight” over moral meaning, in which neither side has yet achieved final and decisive victory. However, as to the question:

“Which of them has been provisionally victorious, Rome or Judæa? but there is not a shadow of doubt; just consider to whom in Rome itself nowadays you bow down, as though before the quintessence of all the highest values—and not only in Rome, but almost over half the world” (On the Genealogy of Morals: i, 16).

Racialism and Superiority 

Yet, with regard to their racial views, Nietzsche and the Nazis differ, not only in their attitude towards Jews, but also in their attitude towards Germans. 

Thus, according to Hicks: 

“The Nazis believe the German Aryan to be racially superior—while Nietzsche believes that the superior types can be manifested in any racial type” (p85). 

Yet, here, Hicks is only half right. While it is certainly true that the Nazis extolled the German people, and the so-called ‘Aryan race’, as a master race, it is not at all clear that Nietzsche indeed believed that the superior type of man can be found among all races.

Actually, besides a few comments about Jews, mostly favourable, and a few more about Germans and the English, almost always disparaging, Nietzsche says surprisingly little about race.

However, on reflection, this is not really a surprise, since, being resident throughout his life in a Europe that was then very much monoracial, Nietzsche probably had little, if any, direct contact with nonwhite races or peoples.

Moreover, living as he did in the nineteenth century, when European power was at its apex and much of the world was controlled by European colonial empires, Nietzsche, like most of his European contemporaries, probably took white European racial superiority very much for granted. 

It is therefore only natural that his primary concern was the relative superiority and status of the various European subtypes – hence his occasional comments regarding Jews, the English, Germans and the French.

Indeed, the fact that he wrote little, if anything, about non-European races is further evidence that, on this matter, he indeed subscribed to the general consensus of the time and place in which he lived – since Nietzsche had no qualms about, and indeed seemingly took great pleasure in, expressing controversial, heterodox opinions on any number of other matters, and so, if he had disagreed with the consensus view, he would no doubt have had little hesitation in saying as much.

Moreover, Nietzsche’s emphasis on the importance of heredity, upon which Hicks himself rightly lays great stress, is also eminently compatible with, and indeed implies, racialism, since racial differences are also based on heredity. Indeed, Nietzsche sometimes comes close to making this point explicitly, as where he writes:

“It is quite impossible for a man not to have the qualities and predilections of his parents and ancestors in his constitution, whatever appearances may suggest to the contrary. This is the problem of race” (Beyond Good and Evil: 264).

Thus, both his antiegalitarianism and his trenchant hereditarianism suggest that Nietzsche would be receptive to theories of racial superiority, even if he never addresses this subject directly in his writing.

Indeed, though Nietzsche certainly rejected a parochial German nationalism, there is nevertheless evidence that he considered himself as writing for, on behalf of, and as a member of, if not a German, then at least a wider pan-European (and hence self-evidently white European) audience.

Thus, in the preface to Beyond Good and Evil, he refers to both himself and his envisaged audience as “we good Europeans”, and, later in the same work, he describes his central concern (“serious topic”) as being “the rearing of a new ruling caste for Europe”, a project he grandiloquently christens “the European problem”, and refers to supposed “unmistakable signs that Europe wishes to be one”, sentiments that could arguably be interpreted as anticipating the Nazi conception of a united Europe, later to be resurrected in the form of an envisaged pan-European nationalism by post-war neofascist theoreticians such as Mosley and Thiriart (Beyond Good and Evil: viii, 251; 256).

Contemporary German Culture

Related to his rejection of German nationalism, Hicks also reports that, despite his apparent admiration for the ancient Teutonic tribes, Nietzsche had nothing but contempt for the German culture of his own day. Thus, Hicks asserts: 

“The Nazis believe contemporary German culture to be the highest and the best hope for the world—while Nietzsche holds contemporary German culture to be degenerate and to be infecting the rest of the world” (p85). 

It is indeed true that Nietzsche held contemporary German culture in low regard. Thus, among many other assorted insults and disparaging remarks, he repeatedly disparages contemporary Germans as ‘beer drinkers’, and insists that “between the old Germans and ourselves there exists scarcely a psychological, let alone a physical, relationship” (On the Genealogy of Morals: i, 11).

Indeed, it was not just modern German culture that Nietzsche held in contempt, but contemporary western culture as a whole, something he traces back to at least the genesis of Christianity, if not the thought of Plato and Socrates.

However, the claim that “The Nazis believe contemporary German culture to be the highest…” is more questionable and requires some elaboration. Again, Hicks betrays a better familiarity with the central ideas of Nietzsche than he does with the underlying ideology of the National Socialist movement.

In fact, like Nietzsche, the Nazis too believed that the Germany of their own time – namely the Weimar Republic – was decadent and corrupt. 

Indeed, a belief in both national degeneration and in the need for national spiritual rebirth and awakening has been identified by political scientist Roger Griffin as a key defining element in fascism.[15]

Thus, Nietzsche’s own belief in the decadence of contemporary western civilization, and arguably also his belief in a coming Übermensch promising spiritual revitalization, conforms, in many respects, to this paradigmatically and prototypically fascist model.[16]

Of course, the Nazis only believed that German culture was corrupt and decadent before they had themselves come to power and hence supposedly rectified this situation.  

In contrast, Nietzsche never had the opportunity to rejuvenate the German culture and civilization of his own time – and nor did he live to see the coming Übermensch.[17]

‘The Blond Beast’  

Hicks contends that Nietzsche’s employment of the phrase “the blond beast” in On the Genealogy of Morals is not a racial reference to the characteristically blond hair of Nordic Germans, as it has sometimes been interpreted, but rather a reference to the blond mane of the lion. 

Actually, I suspect Nietzsche may have intended a double-meaning, referring to both the stereotypically blond complexion of the Germanic warrior and to the mane of the lion, and hence comparing the two. 

Indeed, the use of such a double-meaning would be typical of Nietzsche’s poetic, literary and distinctly non-philosophical (or at least not traditionally philosophical) style of writing. 

Thus, even in one of the passages from On the Genealogy of Morals employing this metaphor that is quoted by Hicks himself, Nietzsche explicitly refers to “the blond Germanic beast [emphasis added]” (quoted: p78).[18]

It is true that, in another passage from the same work, Nietzsche contends that “the splendid blond beast” lies at “the bottom of all these noble races”, among whom he includes, not just the Germanic, but also such distinctly non-Nordic races as “the Roman, Arabian… [and] Japanese nobility”, among others (quoted: p79). 

Here, the reference to the Japanese “nobility”, rather than the Japanese people as a whole, is key, since, as we have seen, Nietzsche clearly regards the superior type of man, if present at all, as always necessarily a minority among all peoples. 

However, in referring to “noble races”, Nietzsche necessarily implies that other races are not so “noble”. Just as to say that certain men are ‘superior’ necessarily implies that others are, by comparison, inferior, since superiority is a relative concept, so to talk of “noble races” necessarily supposes the existence of ignoble races too. 

Thus, if the superior type of man, in Nietzsche’s view, only ever represents a small minority of the population among any race, it does not necessarily follow that, in his view, such types are to be found among all races.

Hicks is therefore wrong to conclude that: 

“Nietzsche believes that the superior types can be manifested in any racial type” (p85). 

In short, just because Nietzsche believed that the vast majority of contemporary Germans were poltroons, Chandala, ‘beer drinkers’ and ‘herd animals’, it does not necessarily follow that he also believed that an Australian Aboriginal could ever become an Übermensch.

A Nordicist, Aryanist, Völkisch Milieu? 

Thus, for all his condemnation of Germans and German nationalism, one cannot help forming the impression, on reading Nietzsche, that he very much existed within, if not a German nationalist milieu, then at least a broader Nordicist, Aryanist and Völkisch intellectual milieu – the same milieu that birthed certain key strands of the National Socialist Weltanschauung.

This is apparent in the very opening lines of The Antichrist, where Nietzsche declares himself, and his envisaged readership, to be “Hyperboreans”, a term popular among proto-Nazi occultists such as the members of the Thule Society, the group that birthed what was to become the NSDAP and which was itself named for the supposed capital of the mythical Hyperborea.[19]

It is also apparent when, in Twilight of the Idols, he disparages Christianity as specifically an “anti-Aryan religion… [and] the transvaluation of all Aryan values” (Twilight of the Idols: vi, 4). 

Apologists sometimes insist that Nietzsche, a philologist by training, was only using the word Aryan in the linguistic sense, i.e. where we would today say ‘Indo-European’.

However, Nietzsche was writing in a time and place, namely Germany in the nineteenth century, when Aryanist ideas were very much in vogue, and, given his own familiarity with such ideas through his sister and brother-in-law, not to mention his former idol Wagner, it would be naïve to think that Nietzsche was not all too aware of the full connotations of this term.

Indeed, Nietzsche’s references to “Aryan values” and an “anti-Aryan religion”, referring, as they do, to values and religion, clearly go well beyond merely linguistic descriptors. Though they may envisage a mere cultural inheritance from the proto-Indo-Europeans, they seem, in my reading, to anticipate, not so much a scientific biological conception of race, including race differences in behaviour and psychology, as the mystical, quasi-religious and slightly bonkers ‘spiritual racialism’ of Nietzsche’s self-professed successors, Spengler and Evola.

Indeed, not only did Nietzsche employ the term Aryan in an obviously racial sense, but he also associated this ostensible race with some of the exact same racial traits as did the Nazis themselves.

Thus, in one passage from On the Genealogy of Morals, noting how words such as the Greek ‘μέλας’, though originally connoting ‘blackness’ or dark colouration, ultimately came to be employed as moral descriptors connoting ‘evil’, Nietzsche attributes this to the supposed conquest and subjugation of darker races by the all-conquering Aryans:

“The vulgar man can be distinguished as the dark-coloured, and above all as the black-haired (‘hic niger est’), as the pre-Aryan inhabitants of the Italian soil, whose complexion formed the clearest feature of distinction from the dominant blonds, namely, the Aryan conquering race:—at any rate Gaelic has afforded me the exact analogue—Fin… the distinctive word of the nobility, finally—good, noble, clean, but originally the blonde-haired man in contrast to the dark black-haired aboriginals” (On the Genealogy of Morals: i, 5).

Later in the same work, Nietzsche even equates what he refers to as “the pre-Aryan population” of Europe with “the decline of humanity”, no less (On the Genealogy of Morals: i, 11).

Less obviously, this affinity for Nazi-style ‘Aryanism’ is also apparent in Nietzsche’s extolment of the Law of Manu and the Indian caste system, and his adoption of the Sanskrit caste term, Chandala (also sometimes rendered as ‘Tschandala’ or ‘caṇḍāla’), as a derogatory term for the ‘herd animals’ whom he so disparages. This is because, although South Asians are obviously far from racially Nordic, proto-Nazi Völkisch esotericists (and their post-war successors) nevertheless had a curious obsession with Hindu religion and caste, perhaps because Hinduism was regarded as a continuation of the proto-Indo-European religion and mythology of the original so-called ‘Aryans’. Indeed, it is from India that the Nazis seemingly took both the swastika symbol and the very word ‘Aryan’. 

Indeed, even Nietzsche’s odd decision to name the prophet of his coming Übermensch, and the mouthpiece for his own philosophy, after the Iranian religious figure Zarathustra may reflect the same affinity. The philosophy of the historical Zoroaster, at least as it is remembered today, had little in common with Nietzsche’s own, indeed representing almost its polar opposite (which may have been Nietzsche’s point); but the historical Zoroaster was, of course, Iranian, and hence quintessentially ‘Aryan’.

Will Durant, in The Story of Philosophy, writes: 

“Nietzsche was the child of Darwin and the brother of Bismarck. It does not matter that he ridiculed the English evolutionists and the German nationalists: he was accustomed to denounce those who had most influenced him; it was his unconscious way of covering up his debts” (The Story of Philosophy: p373).[20]

This perhaps goes some way to making sense of Nietzsche’s ambiguous relationship to Darwin, whose theory he so often singles out for criticism, but also fails to properly understand. 

Perhaps something similar can be said of Nietzsche’s relationship, not only to German nationalism, but also to anti-Semitism, since, as a former disciple of Wagner, he existed within a German nationalist and anti-Semitic intellectual milieu, from which he sought to distinguish himself but which he never wholly relinquished. 

Thus, if Nietzsche condemned the crude anti-Semitism of Wagner, his sister and brother-in-law, he nevertheless constructed in its place a new anti-Semitism that blamed the Jews, not for the crucifixion of Christ, but rather for the very invention of Christianity, Christian ethics and the entire edifice of what he called ‘slave morality’ and the ‘transvaluation of values’. 

Nietzschean Philosemitism or Mere ‘Backhanded Compliments’?

Thus, even Nietzsche’s many apparently favourable comments regarding the Jews can often be interpreted as backhanded compliments.

As a character from a Michel Houellebecq novel observes: 

“All anti-Semites agree that the Jews have a certain superiority. If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead” (Platform: p113). 

Nietzsche himself would, of course, view these implicit, inadvertent concessions of Jewish superiority in anti-Semitic literature as further proof that anti-Semitic sentiments are indeed rooted in repressed envy and what Nietzsche famously termed ‘ressentiment’.

Indeed, Nazi propaganda provides a good illustration of just this tendency for anti-Semitic sentiments to inadvertently reveal an implicit perception of Jewish superiority.

Thus, in claiming that Jews, who represented only a tiny minority of the Weimar-era German population, nevertheless dominated the media, banking, commerce and the professions, Nazi propaganda often came close to implicitly conceding Jewish superiority – since to dominate the economy of a mighty power like Germany, despite representing such a tiny minority of its population, is hardly a feat indicative of inferiority.

Indeed, Nazi propaganda came close to self-contradiction, since, if Jews did indeed dominate the Weimar-era economy to the extent claimed, this suggests, not only that the Jews themselves were far from inferior to the Gentile Germans whom they had ostensibly so oppressed and subjugated, but also that the Germans themselves, in allowing themselves to be so dominated by the tiny minority of Jews in their midst, were something rather less than the Aryan Übermensch and master race of Hitler’s own demented imagining. 

Such backhanded compliments can be understood as a form of what Nietzsche himself would have termed ‘ressentiment’.

Thus, many antisemites have praised the Jews for their tenacity, resilience, survival, alleged clannishness and ethnocentrism, and, perhaps most ominously, their supposed racial purity.

For example, Houston Stewart Chamberlain, a major influence on Nazi race theory and mentor to Hitler himself, nevertheless insisted:

“The Jews deserve admiration, for they have acted with absolute consistency according to the logic and truth of their own individuality and never for a moment have they allowed themselves to forget the sacredness of physical laws because of foolish humanitarian day-dreams which they shared only when such a policy was to their advantage” (Foundations of the Nineteenth Century: p531).[21]

Similarly, contemporary antisemite Kevin MacDonald, arguing that Jews might serve as a model for allegedly less ethnocentric white westerners to emulate, professes to:

“Greatly admire Jews as a group that has pursued its interests over thousands of years, while retaining its ethnic coherence and intensity of group commitment” (MacDonald 2004). 

Indeed, even Hitler himself came close to philosemitism in one passage of Mein Kampf, where he declares: 

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self-preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof” (Mein Kampf).[22]

Many of Nietzsche’s own apparently complimentary remarks regarding the Jewish people closely parallel the statements of these acknowledged antisemites, as where Nietzsche, like these other writers, extols the Jews for their resilience, tenacity and survival under adverse conditions, and for their alleged racial purity, writing:

“The Jews… are beyond all doubt the strongest, toughest, and purest race at present living in Europe, they know how to succeed even under the worst conditions (in fact better than under favourable ones)” (Beyond Good and Evil: viii, 251).

Thus, Hicks himself credits Nietzsche with deploring the slave morality that was the Jews’ legacy, while nevertheless recognizing that this slave morality was a highly successful strategy, enabling them to survive and prosper in diaspora as a defeated and banished people. On Hicks’s reading, Nietzsche admires the Jews as: 

“Inheritors of a cultural tradition that has enabled them to survive and even flourish despite great adversity… [and] would at the very least have to grant, however grudgingly, that the Jews have hit upon a survival strategy and kept their cultural identity for well over two thousand years” (p82). 

Thus, in one of his many backhanded compliments, Nietzsche declares:  

“The Jews are the most remarkable people in the history of the world, for when they were confronted with the question, to be or not to be, they chose, with perfectly unearthly deliberation, to be at any price: this price involved a radical falsification of all nature, of all naturalness, of all reality, of the whole inner world, as well as of the outer” (The Antichrist: 24). 

Defeating Nazism 

In his final chapter, Hicks discusses how Nazism can best be defeated. In doing so, he seemingly presupposes that Nazism is, not only an evil that must be defeated, but the ultimate evil, which must be defeated at all costs, and that we must therefore structure our entire economic and political system so as to achieve this goal and prevent any possibility of Nazism’s reemergence. 

In doing so, he identifies what he sees as “the direct opposite of what the Nazis stood for” as necessarily “the best antidote to National Socialism we have” and hence a basis for how we should structure society (p106-7). 

Yet, to assume that there is a “direct opposite” to each of the Nazis’ central tenets is to assume that all political positions can be conceptualized along a single axis, with the Nazis at one end and Hicks’s own rational free-market utopia at the other. 

In reality, the political spectrum is multidimensional and there are many quite different alternatives to each of the tenets identified by Hicks as integral to Nazism, not just a single opposite. 

More importantly, it is not at all clear that the best way to defeat an ideology is necessarily to embrace its polar opposite. 

On the contrary, embracing an opposite form of extremism often only provokes a counter-reaction and is hence counterproductive. In contrast, often the best way to defeat extremism is to actually address some of the legitimate issues raised by the extremists and offer practical, realistic solutions and compromise – i.e. moderation rather than extremism. 

Thus, in the UK, the two main post-war electoral manifestations of what was arguably a resurgent Nazi-style racial nationalism were the National Front in the 1970s and the British National Party (BNP) in the 2000s, each of which, in their respective heydays, achieved some rather modest electoral successes at the local level, and inspired a great deal of media-led moral panic, before quickly fading into obscurity and electoral irrelevance. 

Yet each was defeated, not by the emergence of an opposite extremism of either left or right, nor by the often violent agitation and activism of self-styled ‘anti-fascists’, who nevertheless proudly claimed the victory as their own, but rather by the emergence of political figures or movements that addressed, or at least affected to address, some of the legitimate issues raised by these extremist groups, especially regarding immigration, but cloaked them in more moderate language and offered seemingly more practicable solutions. 

Thus, in the 2000s, the BNP was largely eclipsed by the rise of UKIP, which increasingly echoed much of the BNP’s rhetoric regarding mass immigration, but largely avoided any association with racism or white supremacism, let alone neo-Nazism. In short, UKIP outflanked the BNP by being precisely what the BNP had long pretended to be – namely, a non-racist, anti-immigration civic nationalist party – only, in the case of UKIP, the act actually had some modicum of plausibility.

Meanwhile, in the 1970s, the collapse of the National Front was largely credited to the rise of Margaret Thatcher, who, in one infamous interview, empathized with the fear of many British people that their country was being “swamped by people with a different culture”, though, in truth, once in power, she did little to arrest or even slow, let alone reverse, this ongoing and now surely irreversible process of demographic transformation.

Misreading Nietzsche 

Why, then, has Nietzsche come to be so misunderstood? How is it that this nineteenth-century German philosopher has come to be claimed as a precursor by everyone from Fascists and libertarians to leftist postmodernists? 

The fault, in my view, lies largely with Nietzsche himself, in particular with his obscure, cryptic, esoteric writing style, most especially in his infamously indecipherable Thus Spake Zarathustra, but to some extent throughout his entire corpus. 

Indeed, Nietzsche, perhaps to his credit, not so much admits as proudly declares that he adopted a deliberately impenetrable prose style, in one parenthesis from Beyond Good and Evil that has been variously translated as: 

“I obviously do everything to be ‘hard to understand’ myself”

Or: 

“I do everything to be difficultly understood myself” (Beyond Good and Evil: ii, 27).

Admittedly, here, the wording, or at least the various English renderings, is itself not entirely clear in its meaning. However, the fact that even this single seemingly simple sentence lends itself to somewhat different interpretations only illustrates the scale of the problem. 

In my view, as I have written previously, philosophers who adopt an aphoristic style of writing generally substitute bad poetry for good arguments. 

Thus, in one sense at least, leftist postmodernists are right to claim Nietzsche as a philosophical precursor: he, like them, delights in pretentious obfuscation and obscurantism.

The best writers, in my view, generally present their ideas in the clearest and simplest language that the complexity of their ideas permits. 

Indeed, the most profound thinkers generally have no need to increase the complexity of ideas that are already inherently complex through deliberately obscure or impenetrable language. 

In contrast, it is only those with banal and unoriginal ideas who adopt deliberately complex and confusing language in order to conceal the banality and unoriginality of their thinking. 

Thus, Richard Dawkins’ ‘First Law of the Conservation of Difficulty’ states: 

Obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity.”  

What applies to an academic subject applies equally to individual writers – namely, as a general rule, the greater the abstruseness of the prose style, the less the substance and insight. 

Yet, unlike the postmodernists, poststructuralists, deconstructionists, contemporary continental philosophers and other assorted ‘professional damned fools’ who so often claim him as a precursor, Nietzsche is indeed, in my view, an important, profound and original thinker, albeit not quite as brilliant and profound as he evidently regarded himself. 

Moreover, far from replacing good philosophy with bad poetry, Nietzsche is also, despite his sometimes abstruse style, a magnificent prose stylist, the brilliance of whose writing shines through even in translation. 

Conclusion – Was Nietzsche a Nazi? 

The Nazis, we are repeatedly reassured by leftists, misunderstood Nietzsche. Either that, or they deliberately misrepresented and misappropriated him. At any rate, one thing is clear – they were wrong. 

This argument is largely correct – as far as it goes. 

The Nazis did indeed engage in a disingenuous and highly selective reading of Nietzsche’s work, quoting his words out of context, and conveniently ignoring, or even suppressing, those passages of his writing where he explicitly condemns both anti-Semitism and German nationalism.

The problem with this view is not that it is wrong, but rather what it leaves out.

Nietzsche may not have been a Nazi, but he was certainly an elitist and anti-egalitarian, opposed to socialism, liberalism, democracy and pretty much the entire founding ideology of the contemporary liberal-democratic west.

Indeed, although, today, in America at least, atheism tends to be associated with leftist, or at least liberal, views, and Christianity with conservatism and the right, Nietzsche opposed socialism precisely because he saw it as an inheritance of the very Judeo-Christian ‘slave morality’ to which his philosophy stood in opposition, albeit divested of the very religious foundation which provided this moral system with its ultimate justification and basis.

Thus, in The Will to Power, he observes that “socialists appeal to the Christian instincts” and bewails “the socialistic ideal” as merely “the residue of Christianity and of Rousseau in the de-Christianised world” (The Will to Power: iii, 765; iv, 1017). Likewise, he laments of the English in Twilight of the Idols:

“They are rid of the Christian God and therefore think it all the more incumbent upon them to hold tight to Christian morality” (Twilight of the Idols: ix, 5).

While Nietzsche would certainly have disapproved of many aspects of Nazi ideology, it is not at all clear that he would have considered our own twenty-first century western culture as any better. Indeed he may well have considered it considerably worse.

It must be emphasized that Nietzsche’s anti-egalitarianism led him to reject, not only socialism, but also democracy itself.

Thus, while Nietzsche lamented the French revolution as a triumph of decadence and decline, and a defeat for the aristocratic values that he so cherished, he nevertheless rejoiced in how its aftermath paradoxically led to the rise of Napoleon, whom he extolled as the last great European emperor and tyrant (On the Genealogy of Morals: i, 16).

“In spite of all, what a blessing, what a deliverance from a weight becoming unendurable, is the appearance of an absolute ruler for these gregarious Europeans—of this fact the effect of the appearance of Napoleon was the last great proof” (Beyond Good and Evil: v, 199).

Yet, today, of course, Napoleon no longer stands as the “the last great proof” of this fact. For, since that time, other absolute tyrants – Hitler, Stalin, Mussolini – have emerged in his place, and each, despite (or indeed perhaps because of) their ruthless suppression of their respective peoples, nevertheless enjoyed huge popular support among these very same peoples, far surpassing that of most, if not all, elected democratic and constitutional rulers in Europe during the same time period, or indeed thereafter.

Thus, if, for Nietzsche, the Little Corporal stood as “the most unique and violent anachronism that ever existed”, and “that synthesis of Monster and Superman” (On the Genealogy of Morals: i, 16), these descriptors could surely be applied with equal, if not greater, appositeness to his successor as would-be conqueror of Europe, the Bohemian Corporal – that “anachronism”, “Monster” and self-imagined ‘Übermensch’ of the twentieth century, who, like Napoleon, rose from obscure, semi-alien origins to the brink of mastery over Europe, only, again like Napoleon, to find an implacable enemy in the British, and his ultimate undoing on the Russian plain.

In conclusion, then, it is indeed true that Nietzsche was no National Socialist, but neither was he a socialist of any other type, nor indeed a liberal or even a democrat – and his views on such matters as hierarchy and inequality, or indeed the role of Jewish people in western history, were far from politically correct by modern standards. 

Indeed, the worldview of this most elitist and anti-egalitarian of thinkers is arguably even less reconcilable with contemporary left-liberal notions of social justice than is that of the Nazis themselves.  

Thus, if the Nazis did indeed misappropriate Nietzsche’s philosophy, then this misappropriation was as nothing compared to the attempt of some post-modernists, post-structuralists and other self-styled ‘left-Nietzscheans’ to enlist this most anti-egalitarian and elitist of thinkers on behalf of the left.

Endnotes

[1] The claim that the foreign policies of governmental regimes of all ideological persuasions are governed less by ideology than by power politics is, of course, a central tenet, indeed perhaps the central tenet, of the realist school of international relations theory. Indeed, Hitler himself provides a good example: despite his ideological opposition to Judeo-Bolshevism and his desire for lebensraum in the East, not to mention his disparaging racial attitude towards the Slavic peoples, when rebuffed in his efforts to come to an understanding with Britain and France, or to form an alliance with Poland, he instead sent Ribbentrop to negotiate a non-aggression pact with the Soviet Union. It can even be argued that it was Hitler’s abandonment of pragmatic realpolitik in favour of ideological imperative, when he later invaded the Soviet Union, that led ultimately to his own, and his regime’s, demise.

[2] Curiously missing from all such lists of philosophical influences on Hitler and Nazism is Nietzsche’s own early idol, Arthur Schopenhauer. Yet it was Schopenhauer’s The World as Will and Representation that Hitler claimed to have carried in his knapsack in the trenches throughout the First World War, and Schopenhauer even has the dubious distinction of having his antisemitic remarks regarding Jews favourably quoted by Hitler in Mein Kampf. Indeed, according to the recollections of filmmaker Leni Riefenstahl, Hitler professed to prefer Schopenhauer over Nietzsche, the Führer being quoted by her as observing: 

“I can’t really do much with Nietzsche… He is more an artist than a philosopher; he doesn’t have the crystal-clear understanding of Schopenhauer. Of course, I value Nietzsche as a genius. He writes possibly the most beautiful language that German literature has to offer us today, but he is not my guide” (quoted: Hitler’s Private Library: p107). 

Somewhat disconcertingly, this assessment of Nietzsche – namely as “more… artist than philosopher” and far from “crystal-clear” in his writing style, but nevertheless a brilliant prose stylist, the beauty of whose writing shines through even in English translation – actually rather echoes my own judgement (though, of course, in my defence, he and I are hardly alone in this judgement).
Moreover, I too am an admirer of Schopenhauer’s writings, albeit not so much his philosophy, let alone his almost mystical metaphysics, but more his almost proto-Darwinian biologism and theory of human behaviour and psychology.
Yet, on reflection, Schopenhauer is surely rightly omitted from lists of the philosophical influences on Nazism. Save for the antisemitic remarks quoted in Mein Kampf, which are hardly an integral part of Schopenhauer’s philosophy, there is little in Schopenhauer’s body of writing, let alone in his philosophical writings, that can be seen to jibe with National Socialist policy or ideology.
Indeed, Schopenhauer’s philosophy, to the extent that it is prescriptive at all, advocates an ascetic withdrawal from worldly affairs, including politics, and champions art as a form of escapism. This hardly provides a basis for state policy of any kind.
Admittedly, it is true that Hitler’s lifestyle did, in some ways, accord with the ascetic abstinence advised by Schopenhauer. Thus, in many respects, even as dictator, the Führer lived a frugal, spartan life, being, in later life, reportedly a vegetarian who also abstained from alcohol. For most of his adult life, he also seems to have had little in the way of an active sex life. Also in accord with Schopenhauer’s teaching, he was an art lover who seemingly found escapism both in movies and especially in the operas of Wagner, the latter himself a disciple of Schopenhauer.
However, the NSDAP programme, like all political programmes, necessarily involved active engagement with the world in order to, as they saw it, improve things, something Schopenhauer did not generally advocate, and would, I suspect, have dismissed as largely futile.
Thus, modern left-liberal apologists for Nietzsche sometimes attempt to characterize him as a largely apolitical thinker. This is, of course, deluded apologetics. However, as applied to Schopenhauer, the claim would indeed be largely valid.
Indeed, Hitler himself aptly summarized why Schopenhauer’s philosophy could never be a basis for any type of active political programme, let alone the radical programme of the NSDAP, in a comment quoted by Hanfstaengl, where he bemoans Schopenhauer’s influence on his former mentor Eckart, remarking: 

Schopenhauer has done Eckart no good. He has made him a doubting Thomas, who only looks forward to a Nirvana. Where would I get if I listened to all his [Schopenhauer’s] transcendental talk? A nice ultimate wisdom that: To reduce on[e]self to a minimum of desire and will. Once will is gone all is gone. This life is War” (quoted in: Hitler’s Philosophers: p24). 

Thus, while the quotation attributed to Hitler by Riefenstahl, and quoted in this endnote a few paragraphs above, in which he professed to prefer the philosophy of Schopenhauer over that of Nietzsche, may indeed be an authentic recollection, nevertheless it appears that, over time, the German Führer was to revise that opinion. Thus, Hanfstaengl himself, listening to a speech by Hitler in which the latter supposedly referred to “the heroic Weltanschauung which will illuminate the ideals of Germany’s future”, remarked:

“This was not Schopenhauer, who had been Hitler’s philosophical god in the old Dietrich Eckart days. No, this was new. It was Nietzsche” (Hitler: The Missing Years: p206).

Corroborating this interpretation, many years later, in 1944, Hitler himself is quoted in his Table Talk as revising, indeed almost reversing, the opinion he had supposedly expressed to Riefenstahl all those years earlier. Here, Hitler is quoted as remarking:

“Schopenhauer’s pessimism, which springs partly, I think, from his own line of philosophical thought and partly from subjective feeling and the experiences of his own personal life, has been far surpassed by Nietzsche” (Table Talk: p720).

This idea – namely, that “Schopenhauer’s pessimism” is but a reflection of his own psychology (i.e. his “subjective feeling and the experiences of his own personal life”) – is itself, incidentally, characteristically Nietzschean. Thus, Nietzsche, in ‘The Problem of Socrates’, infamously explained the philosophy of Socrates, and that of his successors, which Nietzsche also saw as pessimistic rather than life-affirming, as reflecting the latter’s low birth, decadence and even his alleged ugliness (Twilight of the Idols: ii).
Nietzsche was, of course, himself famously a disillusioned former disciple of Schopenhauer. So, it seems, was Hitler.

[3] Hicks does not mention the figure who was, in my perhaps eccentric view, the greatest thinker associated with the NSDAP, namely Nobel Prize-winning ethologist Konrad Lorenz, perhaps because, unlike the other thinkers whom he does discuss, Lorenz only joined the NSDAP several years after they had come to power, and his association with the NSDAP could therefore be dismissed as purely opportunistic. Alternatively, Hicks may have overlooked Lorenz simply because Lorenz was a biologist rather than a philosopher, though it should be noted that Lorenz also made important contributions to philosophy as well, in particular his pioneering work in evolutionary epistemology.

[4] It is true that Nietzsche does not actually envisage or advocate a return to the ‘master morality’ of an earlier age, but rather the construction of a new morality, the shape of which could, at the time he wrote, be foreseen only in rough outline. Nevertheless, it is clear that he favoured ‘master morality’ over the ‘slave morality’ that he associated with Christianity and our own post-Christian ethics, and also that he viewed the coming morality of the Übermensch as having much more in common with the ‘master morality’ of old than with the Christian ‘slave morality’ he so disparages. 

[5] Hitler exerted a direct impact on world history from 1933 until his death in 1945. Yet Hitler, or at least the spectre of Hitler, continues to exert an indirect but not insubstantial impact on contemporary world politics to this day, as a kind of ‘bogeyman’ against whom we define our views, and whom we invoke as a threat or as a form of guilt-by-association. This is most obvious in the familiar ‘reductio ad Hitlerum’.
Of course, in considering the question of whether Hitler may indeed qualify as a ‘great man’, we are not using the word ‘great’ in a moral or acclamatory sense. Rather, we are employing the term in the older sense, meaning ‘large in size’. This exculpatory clarification we might aptly term the Farrakhan defence.

[6] Collectivists are, almost by definition, authoritarian, since collectivism necessarily demands that individual rights and freedoms be curtailed, restricted or abrogated for the benefit of the collective, and this invariably requires coercion because people have evolved to selfishly promote their own inclusive fitness at the expense of that of rivals and competitors. However, authoritarianism can also be justified on non-collectivist grounds. Nietzsche’s proposed restrictions of the individual liberty of the ‘herd animal’ and ‘Chandala’ seem to me to be justified, not by reference to the individual or collective interests of such ‘Chandala’, but rather by reference to the interests of the superior man and of the higher evolution of mankind.

[7] The second of these is a pair of interviews that were supposedly conducted with Hitler by German journalist Richard Breiting in 1931, to which Hicks sources several supposed quotations from Hitler (p117; p122; p124; p125; p133). Unfortunately, however, the interviews, only published in 1968 by Yugoslavian journalist Edouard Calic several decades after they were supposedly conducted, contain anachronistic material and are hence almost certainly post-war forgeries. Richard Evans, for example, described them as having obviously been in large part, if not completely, made up by Calic himself (Evans 2014).
The other is Hermann Rauschning’s The Voice of Destruction, published in Britain under the title Hitler Speaks, to which Hicks sources several quotations from Hitler (p120; p125; p126; p134). This is now widely recognised as a fraudulent work of wartime propaganda. Historians now believe that Rauschning actually met with Hitler on only a few occasions, that he was certainly not a close confidant, and that most, if not all, of the conversations with Hitler recounted in The Voice of Destruction are pure inventions.
Thus, for example, Ian Kershaw in the first volume of his Hitler biography, Hitler, 1889–1936: Hubris, makes sure to emphasize in his preface: 

“I have on no single occasion cited Hermann Rauschning’s Hitler Speaks [the title under which The Voice of Destruction was published in Britain], a work now regarded to have so little authenticity that it is best to disregard it altogether” (Hitler, 1889–1936: Hubris: pxvi). 

Similarly, Richard Evans definitively concludes:

“Nothing was genuine in Rauschning’s book: his ‘conversations with Hitler’ had no more taken place than his conversations with Göring. He had been put up to writing the book by Winston Churchill’s literary agent, Emery Reeves, who was also responsible for another highly dubious set of memoirs, the industrialist Fritz Thyssen’s I Paid Hitler” (Evans 2014).

Admittedly, Rauschning’s work was once taken seriously by mainstream historians, and The Voice of Destruction is cited repeatedly in such early but still-celebrated works as Trevor-Roper’s The Last Days of Hitler, first published in 1947, and Bullock’s Hitler: A Study in Tyranny, first published in 1952. However, Hicks’s own book was published in 2006, by which time Rauschning’s work had long since been discredited as a historical source. 
Indeed, it is something of an indictment of the standards, not to mention the politicized and moralistic tenor, of what we might call ‘Hitler historiography’ that this work was ever taken seriously by historians in the first place. First published in the USA in 1940, it was clearly a work of anti-Nazi wartime propaganda and much of the material is quite fantastic in content.
For example, there are bizarre passages about Hitler having “long been in bondage to a magic which might well have been described, not only in metaphor but in literal fact, as that of evil spirits” and about Hitler “wak[ing] at night with convulsive shrieks”, and one such passage describes how Hitler: 

“Stood swaying in his room, looking wildly about him. ‘He! He! He’s been here!’ he gasped. His lips were blue. Sweat streamed down his face. Suddenly he began to reel off figures, and odd words and broken phrases, entirely devoid of sense. It sounded horrible. He used strangely composed and entirely un-German word-formations. Then he stood quite still, only his lips moving. He was massaged and offered something to drink. Then he suddenly broke out—‘There, there! In the corner! Who’s that?’ He stamped and shrieked in the familiar way. He was shown that there was nothing out of the ordinary in the room, and then he gradually grew calm” (The Voice of Destruction: p256). 

Yet, oddly, the first doubts regarding the authenticity of the conversations reported in The Voice of Destruction were raised, not by mainstream historians studying the Third Reich, but rather by an obscure Swiss researcher, Wolfgang Haenel, who first presented his thesis at a conference organized by a research institute widely associated with so-called ‘Holocaust denial’. Moreover, other self-styled ‘Holocaust revisionists’ were among the first to endorse Haenel’s critique. Yet his conclusions are now belatedly accepted by virtually all mainstream scholars in the field. This perhaps suggests that such ‘revisionist’ research is not always entirely without value.

[8] It is sometimes suggested that the hostile views of Christianity expressed in Hitler’s Table Talk reflect less the opinions of Hitler and more those of Hitler’s private secretary, Martin Bormann, who was responsible for transcribing much of this material. Bormann is indeed known to have been hostile to Christianity, and Speer, who disliked Bormann, indeed remarks in his memoirs that:

If in the course of such a monologue Hitler had pronounced a more negative judgment upon the church, Bormann would undoubtedly have taken from his jacket pocket one of the white cards he always carried with him. For he noted down all Hitler’s remarks that seemed to him important; and there was hardly anything he wrote down more eagerly than deprecating comments on the church” (Inside the Third Reich: p95). 

However, it is important to note that Speer does not deny that Hitler himself did indeed make such remarks. Indeed, it is hardly likely that Bormann, a faithful, if not obsequious, acolyte of the Führer, would ever dare to falsely attribute to Hitler remarks which the latter had never uttered or views to which he did not subscribe. At any rate, the views attributed to Hitler in Table Talk are amply corroborated in other sources, such as in Goebbels’s diaries and indeed in Speer’s memoirs, both of which I have also quoted above.
It is also true that, elsewhere in Table Talk, Hitler talks approvingly of Jesus as “most certainly not a Jew”, and as fighting “against the materialism of His age, and, therefore, against the Jews”. This is, of course, a very odd and eccentric, not to mention historically unsupported, perspective on the historical Jesus.
However, it is interesting to note that, despite his disdain for Christianity, Nietzsche too, in The Antichrist, while offering a more orthodox portrayal of the historical Jesus, nevertheless professes to admire Jesus himself and his approach to life, even if he does not agree with it. Indeed, both Nietzsche and Hitler instead put the blame for what Christianity became squarely on the shoulders of Paul of Tarsus, whom both view as quintessentially Jewish.
Thus, Hitler directly echoes Nietzsche when he accuses, not Jesus, but Paul, of transforming Christianity into “a rallying point for slaves of all kinds against the élite, the masters and those in dominant authority” (Table Talk: p722). This again is a quintessentially Nietzschean theme, the latter, in The Antichrist, similarly condemning Paul as the true founder of modern Christianity and of the Christian slave morality that followed in its wake and infected western man.
Just to clarify, I am not here suggesting that Hitler’s views with respect to Christianity are identical to those of Nietzsche. On the contrary, they clearly differ in several respects, not least in their differing conceptions of the historical Jesus.
Nevertheless, Hitler’s religious views, as expressed in his Table Talk, clearly mirror those of Nietzsche in certain key respects, not least in seeing Christianity as the greatest tragedy to befall humanity, as inimical to life itself, as a means of mobilizing the slave class against elites, and as a malign invention of or inheritance from Jews and Judaism. Given these parallels, it seems almost certain that the German Führer had read the works of Nietzsche and, to some extent, been influenced by his ideas.
Interestingly, elsewhere in his Table Talk, Hitler also condemns atheism, describing it as “a return to the state of the animal”, and argues that “the notion of divinity gives most men the opportunity to concretise the feeling they have of supernatural” (Table Talk: p123; p61). Hitler also often referred to God, and especially providence, in a metaphoric sense. Indeed, he even professes a belief in a God, albeit of a decidedly non-Christian, pantheistic form, defining God as “the dominion of natural laws throughout the whole universe” (Table Talk: p6).
However, this only demonstrates that there are other forms of theism, and deism, besides Christianity, and that one can be opposed to Christianity without being opposed to all religion. Thus, Goebbels declares in his Diary: 

“The Fuhrer is deeply religious, though completely anti-Christian” (The Goebbels diaries, 1939-1941: p77). 

The general impression from Table Talk is that Hitler sees himself, perhaps surprisingly, as a scientific materialist, albeit one who, like, it must be said, not a few modern self-styled scientific materialists, actually knows embarrassingly little about science. (For example, in Table Talk, Hitler repeatedly endorses Hörbiger’s notoriously pseudo-scientific World Ice Theory, comparing Hörbiger to Copernicus in his impact on cosmology, and even proposes countering the “pseudo-science of the Catholic Church” with the ‘science’ of Ptolemy, Copernicus and, yes, Hörbiger: Table Talk: p249; p324; p445.)

[9] After all, socialists already have the horrors of Mao, Stalin, Pol Pot and communist North Korea, among many others, on their hands. To be associated with National Socialism in Germany as well would effectively make socialism responsible for, or at least associated with, virtually all of the great atrocities of the twentieth century, rather than merely the vast majority of them. 

[10] Interestingly, although dictionary definitions available on the internet vary considerably, most definitions of ‘socialism’ tend to be much narrower than my definition, emphasizing, in particular, common or public ownership of the means of production. Partly, I suspect, this reflects the different connotations of the word in British and American English. Thus, in America, where, until recently, socialism was widely seen as anathema, the term was associated with, and indeed barely distinguished from, communism or Marxism. In Britain, however, where the Labour Party, one of the two main parties of the post-war era, traditionally styled itself ‘socialist’, despite generally advocating and pursuing policies closer to what would be called, on continental Europe, ‘social democracy’, the word has much less radical connotations.

[11] Admittedly, reducing unemployment also seems to have been a further objective of some of the large public works projects undertaken under the Nazis (e.g. the construction of the autobahns), and this can indeed be seen as a socialist objective. However, socialists are, of course, not alone in seeing job creation as desirable and high rates of unemployment as undesirable. On the contrary, the desirability of job creation and of reducing unemployment is widely accepted across the political spectrum. Politicians differ instead only with regard to the best way to achieve this goal. Those on the left are more likely to favour increasing public sector employment, including through the sorts of public works projects employed by the Nazis. Neo-liberals are more likely to favour cutting taxes, in order to increase spending and investment, which they theorize will increase private sector employment. Here, again, therefore, Nazi policy would align most closely with those policies that are today associated with the left.

[12] It is possible Hitler’s own views evolved over time, and he too may initially have been more sympathetic to socialist policies. Thus, still largely unexplained is the full story of Hitler’s apparent involvement with the short-lived revolutionary socialist regime established in Munich in 1918-19 under the Jewish socialist Kurt Eisner, and with the Bolshevik regime that briefly succeeded it. Ron Rosenbaum writes:

“One piece of evidence adduced for this view documents Hitler’s successful candidacy for a position on the soldier’s council in a regiment that remained loyal to the short-lived Bolshevik regime that ruled Munich for a few weeks in April 1919. Another is a piece of faded, scratchy newsreel footage showing the February 1919 funeral procession for Kurt Eisner, the assassinated Jewish leader of the socialist regime then in power. Slowed down and studied, the funeral footage shows a figure who looks remarkably like Hitler marching in a detachment of soldiers, all wearing armbands on their uniforms in tribute to Eisner and the socialist regime that preceded the Bolshevik one” (Explaining Hitler: pxxxvii). 

If Hitler was indeed briefly a supporter of the People’s State of Bavaria, which remains to be proven, and if this support reflected more than mere opportunism and a desire for self-advancement, then the question arises as to when his later anti-Semitic and anti-Marxist views crystallized. It is clear that, by the time he joined the nascent DAP, Hitler was already a confirmed anti-Semite. However, perhaps he still remained something of a socialist at this time. Indeed, this might explain why he joined the German Workers’ Party at all, which, as mentioned above, seems to have had, at this early time, a broadly socialist, as well as nationalist, orientation. 

[13] In fact, Nietzsche is wrong to credit the Jews as the first to perform this transvaluation of values that elevated asceticism, poverty and abstinence from worldly pleasures into a positive value. On the contrary, similar and analogous notions of asceticism seem to have had an entirely independent, and apparently prior, origin in the Indian subcontinent, in the form of both Buddhism and especially Jainism.

[14] The supposed proof of this theory is to be found in the state of Israel, where Jews find themselves a majority and where, far from embodying the ideals of multiculturalism and tolerance that Jews have typically been associated with championing in the west, there is an apartheid state, persecution of the country’s Palestinian minority, an immigration policy that overtly discriminates against non-Jews, not to mention increasing levels of conservatism and religiosity. All this proves, so the theory goes, that Jewish subversive iconoclasm is intended only for external Gentile consumption. 

[15] This element (namely, the espousal of a need for radical national spiritual rebirth and reawakening) represents an integral part of the influential definition of fascism espoused by historian and political theorist Roger Griffin in his book, The Nature of Fascism.

[16] In fact, whether Nietzsche indeed envisaged the Übermensch in this way – namely, as a real-world coming savior promising a new transvaluation of values and a revitalization of society and civilization that would restore the warrior ethos of the ancients – is not at all clear. The concept of the Übermensch is mentioned quite infrequently in his writings, largely in Thus Spake Zarathustra and Ecce Homo, and is in neither work fully developed nor clearly explained. It has even been suggested that the importance of this concept in Nietzsche’s thought has been exaggerated, partly on account of its use in the title of George Bernard Shaw’s famous play, Man and Superman, which explores Nietzschean themes.
Elsewhere in his writing, Nietzsche is seemingly resolutely ‘blackpilled’ regarding the inevitability of moral and spiritual decline and the impossibility of any recovery. Thus, in Twilight of the Idols, he reproaches the conservatives for attempting to turn back the clock, declaring that an arrest, let alone a reverse, in the degeneration of mankind and civilization is an impossibility:

“It cannot be helped: we must go forward,—that is to say step by step further and further into decadence (—this is my definition of modern ‘progress’). We can hinder this development, and by so doing dam up and accumulate degeneration itself and render it more convulsive, more volcanic: we cannot do more” (Twilight of the Idols: viii, 43).

In other words, not only is God indeed dead (as are Zeus, Jupiter, Thor and Wotan), but, unlike Jesus in the Gospels, He can never be resurrected.

[17] Of course, another difference between Nietzsche and the Nazis is that the contemporary German cultures that each regarded as decadent were separated from one another by several decades. Thus, while Hitler may have despised the German culture of the 1920s as decadent, he would nevertheless likely have admired, in many respects, the German culture of Nietzsche’s time, and certainly regarded this Germany as superior to the Weimar-era Germany in which he found himself after the First World War. 
Nevertheless, Hitler did not regard the Germany of Nietzsche’s own time as any kind of ‘golden age’ or ‘lost Eden’. On the contrary, he would have deplored the Germany of Nietzsche’s day both for its alleged domination by Jews and the fact that, even after Bismarck’s supposed unification of Germany, Hitler’s own native Austria remained outside the German Reich.
Thus, neither Nietzsche nor Hitler was a mere reactionary nostalgically looking to turn back the clock. On the contrary, Nietzsche considered this an impossibility, as indicated in the passage from Twilight of the Idols quoted in the immediately preceding endnote.
Thus, just as Nietzsche does not yearn for a return to the master morality or paganism of pre-Christian Europe and classical antiquity, but rather for the coming Übermensch and new transvaluation of values that he would deliver, so Hitler’s own ‘golden age’ was to be found, not in the nineteenth century, nor even in classical antiquity, but rather in the new and utopian thousand year Reich he envisaged and sought to construct.
In short, Hitler and Nietzsche were each, in their own way, very much ‘progressives’.

[18] Other English translations render the German as the “blond Teutonic beast [emphasis added]”. At any rate, regardless of the precise translation, it is clear that a reference to the ancient Germanic peoples is intended. 

[19] The influence of such occult ideas on the Nazi leadership is much exaggerated in some popular, sensationalist histories (or pseudohistories), television documentaries and works of fiction dealing with National Socialism. However, the influence of Völkisch occultism on the development of the National Socialist movement is not entirely a myth, and is evident, not only in the name of the Thule Society, which birthed the NSDAP, but also, for example, in the movement’s adoption of the swastika symbol as an emblem and later a flag. Indeed, although generally regarded as dismissive of such bizarre esoteric notions, and wary of their influence on some of his followers (notably Himmler and Hess) who did not share his skepticism, even Hitler himself professed belief in World Ice Theory in his Table Talk (p249; p324; p445).

[20] Nietzsche has an odd attitude to Darwinism and social Darwinism. On the one hand, he frequently disparages Darwin and Darwinism. On the other hand, his moral philosophy directly parallels that of the social Darwinists, albeit bereft of the Darwinian theory that provides the ostensible justification and basis for this system of prescriptive ethics.
Interestingly, Hitler too has an ambiguous, and, in some respects, similar, relationship with both Darwinism and social Darwinism. On the one hand, Hitler, like Nietzsche, frequently espouses views that read very much like social Darwinism. For example, in Mein Kampf, Hitler writes:

“Those who want to live, let them fight, and those who do not want to fight in this world of eternal struggle do not deserve to live” (Mein Kampf).

Similarly, in his Table Talk, Hitler is quoted as declaring:

“By means of the struggle, the elites are continually renewed. The law of selection justifies this incessant struggle, by allowing the survival of the fittest” (Hitler’s Table Talk: p33).

Both these quotations certainly sound like social Darwinism. Yet, interestingly, Hitler never actually mentions Darwin or Darwinism, his reference to “the law of selection” being the closest he comes to referencing the theory of evolution, and even this is ambiguous, at least in the English rendering. Moreover, in a different passage from Table Talk, Hitler seemingly emphatically rejects the theory of evolution, demanding: 

“Where do we acquire the right to believe that man has not always been what he is now? The study of nature teaches us that, in the animal kingdom just as much as in the vegetable kingdom, variations have occurred. They’ve occurred within the species, but none of these variations has an importance comparable with that which separates man from the monkey—assuming that this transformation really took place” (Hitler’s Table Talk: p248). 

What are we to make of this? Clearly, Hitler often contradicted himself, expressing inconsistent views on many subjects.
Moreover, neither Hitler nor Nietzsche really understood Darwin’s theory of evolution. Thus, Nietzsche suggested that the struggle between individuals concerns, not mere survival, but rather power (e.g. Twilight of the Idols: xiii:14). In fact, it concerns neither survival nor power as such, but rather reproductive success (which tends to correlate with power, especially among men, which is why men, in particular, are known to seek power). Thus, Spencer’s phrase, ‘survival of the fittest’, is useful only once we recognise that the ‘survival’ promoted by selection is the survival of genes rather than of individual organisms themselves.
But we must recognize that it is possible, and quite logically consistent, to espouse something very similar in content to a social Darwinist moral framework without actually justifying this moral framework by reference to Darwinism.
In short, both Nietzsche and Hitler seem to be advocating something akin to ‘social Darwinism without the Darwinism’.

[21] If Hitler was influenced by Chamberlain, then Chamberlain himself was a disciple of Arthur de Gobineau. The latter, though considered by many the ultimate progenitor of Nazi race theory, was, far from being anti-Semitic, actually positively effusive in his praise for and admiration of the Jewish people. Even Chamberlain, though widely regarded as an anti-Semite, at least with respect to the Ashkenazim, nevertheless professed to admire Sephardi Jews, not least on account of their supposed ‘racial purity’, in particular their refusal to intermingle and intermarry with the Ashkenazim.

[22] The exact connotations of this passage may depend on the translation. The version I have quoted comes from the Manheim edition. However, a different translation renders the passage, not as “The mightiest counterpart to the Aryan is represented by the Jew”, but rather as “The Jew offers the most striking contrast to the Aryan”. This alternative translation has rather different, and less flattering, connotations, given that Hitler famously extolled Aryans as the master race. 

The Biology of Beauty

Nancy Etcoff, Survival of the Prettiest: The Science of Beauty (New York: Anchor Books, 2000) 

Beauty is in the eye of the beholder.  

This much is true by definition. After all, the Oxford English Dictionary defines beauty as: 

A combination of qualities, such as shape, colour, or form, that pleases the aesthetic senses, especially the sight’. 

Thus, beauty is defined as that which is pleasing to an external observer. It therefore presupposes the existence of an external observer, separate from the person or thing that is credited with beauty, from whose perspective the thing or individual is credited with beauty.[1]

Moreover, perceptions of beauty do indeed differ.  

To some extent, preferences differ between individuals, and between different races and cultures. More obviously, and to a far greater extent, they also differ between species.  

Thus, a male chimpanzee would presumably consider a female chimpanzee as more beautiful than a woman. The average human male, however, would likely disagree – though it might depend on the woman. 

As William James wrote in 1890: 

“To the lion it is the lioness which is made to be loved; to the bear, the she-bear. To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her” (Principles of Psychology (vol 2): p387). 

Beauty is therefore not an intrinsic property of the person or object that is described as beautiful, but rather a quality attributed to that person or object by a third party in accordance with their own subjective tastes. 

However, if beauty is indeed a subjective assessment, that does not mean it is an entirely arbitrary one. 

On the contrary, if beauty is indeed in the ‘eye of the beholder’, then it must be remembered that the ‘eye of the beholder’—and, more importantly, the brain to which that eye is attached—has been shaped by a process of both natural and sexual selection.

In other words, we have evolved to find some things beautiful, and others ugly, because doing so enhanced the reproductive success of our ancestors. 

Thus, just as we have evolved to find the sight of excrement, blood and disease disgusting, because each was a potential source of infection, and the sight of snakes, lions and spiders fear-inducing, because each likewise represented a potential threat to our survival in the ancestral environment in which we evolved, so we have evolved to find the sight of certain things pleasing on the eye. 

Of course, not only people can be beautiful. Landscapes, skylines, works of art, flowers and birds can all be described as ‘beautiful’. 

Just as we have evolved to find individuals of the opposite sex attractive for reasons of reproduction, so these other aspects of aesthetic preference may also have been shaped by natural selection. 

Thus, some research has suggested that our perception of certain landscapes as beautiful may reflect psychological adaptations that evolved in the context of habitat selection (Orians & Heerwagen 1992).  

However, Nancy Etcoff does not discuss such research. Instead, in ‘Survival of the Prettiest’, her focus is almost exclusively on what we might term ‘sexual beauty’. 

Yet, if beauty is indeed in the ‘eye of the beholder’, then sexiness is surely located in a different part of the male anatomy, but is equally subjective in nature. 

Indeed, as I shall discuss below, even in the context of mate preferences, ‘sexiness’ and ‘beauty’ are hardly synonyms. As an illustration, Etcoff quotes that infamous but occasionally insightful pseudo-scientist and all-round charlatan, Sigmund Freud, who observed:  

“The genitals themselves, the sight of which is always exciting, are nevertheless hardly ever judged to be beautiful; the quality of beauty seems, instead, to attach to certain secondary sexual characters” (p19: quoted from Civilization and its Discontents). 

Empirical Research 

Of the many books that have been written about the evolutionary psychology of sexual attraction (and I say this as someone who has read, at one time or another, a good number of them), a common complaint is that they are full of untested, or even untestable, speculation – i.e. what that other infamous scientific charlatan Stephen Jay Gould famously referred to as ‘just so stories’.

This is not a criticism that could ever be levelled at Nancy Etcoff’s ‘Survival of the Prettiest’. On the contrary, as befits Etcoff’s background as a working scientist (not a mere journalist or popularizer), it is, from start to finish, full of data from published studies, demonstrating, among other things, the correlates of physical attractiveness, as well as the real-world payoffs associated with physical attractiveness (what is sometimes popularly referred to as ‘lookism’). 

Indeed, in contrast to other scientific works dealing with a similar subject-matter, one of my main criticisms of this otherwise excellent work would be that, while rich in data, it is actually somewhat deficient in theory. 

Youthfulness, Fertility, Reproductive Value and Attractiveness 

A good example of this deficiency in theory is provided by Etcoff’s discussion of the relationship between age and attractiveness. Thus, one of the main and recurrent themes of ‘Survival of the Prettiest’ is that, among women, sexual attractiveness is consistently associated with indicators of youth. Thus, she writes: 

“Physical beauty is like athletic skill: it peaks young. Extreme beauty is rare and almost always found, if at all, in people before they reach the age of thirty-five” (p63). 

Yet Etcoff addresses only briefly the question of why it is that youthful women or girls are perceived as more attractive – or, to put the matter more accurately, why it is that males are sexually and romantically attracted to females of youthful appearance. 

Etcoff’s answer is: fertility.

Female fertility rapidly declines with age, before ceasing altogether with menopause.

There is, therefore, in Darwinian terms, no benefit in a male being sexually attracted to an older, post-menopausal female, since any mating effort expended would be wasted, as any resulting sexual union could not produce offspring. 

As for the menopause itself, this, Etcoff speculates, citing scientific polymath, popularizer and part-time sociobiologist Jared Diamond, evolved because human offspring enjoy a long period of helpless dependence on their mother, without whom they cannot survive. 

Therefore, after a certain age, it pays women to focus on caring for existing offspring, or even grandchildren, rather than producing new offspring whom, given their own mortality, they will likely not be around long enough to raise to maturity (p73).[2]

This theory has sometimes been termed the grandmother hypothesis.

However, the decline in female fertility with age is perhaps not sufficient to explain the male preference for youth. 

After all, women’s fertility is said to peak in their early- to mid-twenties.[3]

However, men’s (and boys’) sexual interest seems, if anything, to peak in respect of females somewhat younger than this, namely those in their late-teens (Kenrick & Keefe 1992). 

To explain this, Douglas Kenrick and Richard Keefe propose, following a suggestion of Donald Symons, that this is because girls at this age, while less fertile, have higher reproductive value, a concept drawn from ecology, population genetics and demography, which refers to an individual’s expected future reproductive output given their current age (Kenrick & Keefe 1992). 

Reproductive value in human females (and in males too) peaks just after puberty, when a girl first becomes capable of bearing offspring. 

Before then, there is always the risk she will die before reaching sexual maturity; after, her reproductive value declines with each passing year as she approaches menopause. 
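
To make the concept concrete, reproductive value admits of a standard demographic formulation (what follows is my own gloss, not a formula given by Etcoff or by Kenrick and Keefe, and it assumes, for simplicity, a stationary population). The reproductive value of a female of age a is her expected future output of offspring:

v(a) = \sum_{x \ge a} \frac{l_x}{l_a} m_x

where l_x is the probability of surviving from birth to age x, and m_x is the expected number of offspring produced at age x. Before sexual maturity, m_x is zero but mortality still takes its toll, so v(a) rises as the risky pre-reproductive years are survived; after maturity, the terms remaining in the sum dwindle with each passing year, falling to zero at menopause. The peak therefore falls just after puberty, exactly as described above.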

Thus, Kenrick and Keefe, like Symons before them, argue that, since most human reproduction occurs within long-term pair-bonds, it is to the evolutionary advantage of males to form long-term pair-bonds with females of maximal reproductive value (i.e. mid to late teens), so that, by so doing, they can monopolize the entirety of that woman’s reproductive output over the coming years. 

Yet the closest Etcoff gets to discussing this is a single sentence where she writes: 

“Men often prefer the physical signs of a woman below peak fertility (under age twenty). It’s like signing a contract a year before you want to start the job” (p72). 

Yet the association between youthfulness and female attractiveness is a major theme of her book. 

Thus, Etcoff reports that, in a survey of traditional cultures: 

“The highest frequency of brides was in the twelve to fifteen years of age category… Girls at this age are preternaturally beautiful” (p57). 

It is perhaps true that “girls at this age are preternaturally beautiful” – and Etcoff, being female, can perhaps even get away with saying so without being accused of being a pervert or ‘paedophile’. 

Nevertheless, this age of “twelve to fifteen” seems rather younger than most men’s, and even most teenage boys’, ideal sexual partners, at least in western societies. 

Thus, for example, Kenrick and Keefe inferred from their data that around eighteen was the preferred age of sexual partner for most males, even those somewhat younger than this themselves.[4]

Of course, in primitive, non-western cultures, women may lose their looks more quickly, due to inferior health and nutrition, the relative unavailability of beauty treatments and because they usually undergo repeated childbirth from puberty onward, which takes a toll on their health and bodies. 

On the other hand, obesity, which decreases sexual attractiveness and increases with age, is more prevalent in the West. 

Moreover, girls in the west now reach puberty somewhat earlier than in previous centuries, and perhaps earlier than in the developing world, probably due to improved nutrition and health. This suggests that western girls develop the secondary sexual characteristics (e.g. large hips and breasts) that males perceive as attractive, because such characteristics are indicators of fertility, rather earlier than do girls in premodern or primitive cultures. 

Perhaps Etcoff is right that girls “in the twelve to fifteen years of age category… are preternaturally beautiful” – though this is surely an overgeneralization and does not apply to every girl of this age. 

However, if ‘beauty’ peaks very early, I suspect ‘sexiness’ peaks rather later, perhaps late-teens into early or even mid-twenties. 

Thus, the latter is dependent on secondary sexual characteristics that develop only in late puberty, namely larger breasts, buttocks and hips.

Thus, Etcoff reports, rather disturbingly, that: 

“When [the] facial proportions [of magazine cover girls] are fed into a computer, it guesstimates their age to be between six and seven years of age” (p151; citing Jones 1995). 

But, of course, as Etcoff is at pains to emphasize in the very next sentence, the women pictured do not actually look as if they are of this age, in their faces, let alone in their bodies. 

Instead, she cites Douglas Jones, the author of the study upon which this claim is based, as arguing that the neural network’s estimate of their age can be explained by their display of “supernormal stimuli”, which she defines as “attractive features… exaggerated beyond proportions normally found in nature (at least in adults)” (p151). 

Yet much the same could be said of the unrealistically large, surgically-enhanced breasts favored among, for example, glamour models. These abnormally large breasts are likewise an example of “supernormal stimuli” that may never be found naturally, as suggested by Doyle & Pazhoohi (2012).

But large breasts are indicators of sexual maturity that are rarely present in girls before their late-teens. 

In other words, if the beauty of girls’ faces peaks at a very young age, the sexiness of their bodies peaks rather later. 

Perhaps this distinction between what we can term ‘beauty’ and ‘sexiness’ can be made sense of in terms of a distinction between what David Buss calls short-term and long-term mating strategies

Thus, if fertility peaks in the mid-twenties, then, in respect of short-term mating (i.e. one-night stands, casual sex, hook-ups and other one-off sexual encounters), men should presumably prefer partners somewhat older than those they prefer as long-term partners – i.e. partners of maximal fertility rather than maximal reproductive value – since, in the case of short-term mating, there is no question of monopolizing a woman or girl’s long-term future reproductive output. 

In contrast, cues of beauty, as evinced by relatively younger females, might trigger a greater willingness for males to invest in a long-term relationship. 

This ironically suggests that, contrary to contemporary popular perception, males’ sexual or romantic interest in relatively younger women and girls (i.e. those still in their teens) would tend to reflect more ‘honourable intentions’ (i.e. interest focussed on marriage or a long-term relationship rather than mere casual sex) than does their interest in older women. 

However, as far as I am aware, no study has ever demonstrated differences in men’s preferred age-range for casual sex partners as compared to longer-term partners. This is perhaps because, since commitment-free casual sex is almost invariably a win-win situation for men, and most men’s opportunities in this arena are likely to be few and far between, there has been little selection acting on men to discriminate much at all in respect of short-term partners. 

Are There Sex Differences in Sexiness? 

Another major theme of ‘Survival of the Prettiest’ is that the payoffs for good-looks are greater for women than for men. 

Beauty is most obviously advantageous in a mating context. But women convert this advantage into an economic one through marriage. Thus, Etcoff reports: 

“The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (p65; see also Udry & Eckland 1984; Hamermesh & Biddle 1994). 

However, there is no such advantage accruing to better-looking male students. 

On the other hand, according to Catherine Hakim, in her book Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom (which I have reviewed here, here and here), the wage premium associated with being better looking in the workplace is actually, perhaps surprisingly, greater for men than for women. 

For Hakim herself: 

“This is clear evidence of sex discrimination… as all studies show women score higher than men on attractiveness” (Honey Money: p246). 

However, as I explain in my review of her book, the better view is that, since beauty opens up so many other avenues to social advancement for women, notably through marriage, relatively more beautiful women correspondingly reduce their work-effort in the workplace, since they have less need to pursue social advancement through their careers when they can far more easily achieve it through marriage. 

After all, why bother to earn money when you can simply marry it instead? 

According to Etcoff, there is only one sphere where being more beautiful is actually disadvantageous for women, namely in respect of same-sex friendships: 

“Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (p50; citing Krebs & Adinolfi 1975). 

She does not speculate as to why this is so. An obvious explanation is envy and dislike of the sexual competition that beautiful women represent. 

However, an alternative explanation is perhaps that beautiful women do indeed come to have less likeable personalities. Perhaps, having grown used to receiving preferential treatment from and being fawned over by men, beautiful women become entitled and spoilt. 

Men might overlook these flaws on account of their looks, but other women, immune to their charms, may be a different story altogether.[5]

All this, of course, raises the question of why the payoffs for good looks are so much greater for women than for men. 

Etcoff does not address this, but, from a Darwinian perspective, it is actually something of a paradox, which I have discussed previously. 

After all, among other species, it is males for whom beauty affords a greater payoff in terms of the ultimate currency of natural selection – i.e. reproductive success. 

It is therefore male birds who usually evolve more beautiful plumage, while females of the same species are often quite drab, the classic example being the peacock and peahen.

The ultimate evolutionary explanation for this pattern is called Bateman’s principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972). 

The basis of this theory is this: Females must make a greater minimal investment in offspring in order to successfully reproduce. For example, among humans, females must commit themselves to nine months pregnancy, plus breastfeeding, whereas a male must contribute, at minimum, only a single ejaculate. Females therefore represent the limiting factor in mammalian reproduction for access to whom males compete. 
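
In later formalizations (due to subsequent theorists rather than to Bateman, Trivers or Etcoff themselves), this asymmetry is often expressed as a ‘Bateman gradient’: reproductive success (RS) is regressed on number of mates (M) separately for each sex,

RS = \alpha + \beta M + \varepsilon

and the slope \beta measures how much each additional mate adds to reproductive success. For males of most mammalian species, the slope is steep, since each additional mate means additional potential offspring; for females, it is shallow, since a single mate typically suffices to fertilize all of a female’s comparatively few ova. Sexual selection accordingly acts most strongly on the sex with the steeper gradient.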

One way in which they compete is by display (e.g. lekking). Hence the evolution of the elaborate tail of the peacock.

Yet, among humans, it is females who seem more concerned with using their beauty to attract mates. 

Of course, women use makeup and clothing to attract men rather than growing or evolving long tails. 

However, behavior is no less subject to selection than morphology, so the paradox remains.[6]

Indeed, the most promising example of a morphological trait in humans that may have evolved primarily for attracting members of the opposite sex (i.e. a ‘peacock’s tail’) is, again, a female trait – namely, breasts.

This is, of course, the argument that was, to my knowledge, first developed by ethologist Desmond Morris in his book The Naked Ape, which I have reviewed here, and which I discuss in greater depth here.

As Etcoff herself writes: 

“Female breasts are like no others in the mammalian world. Humans are the only mammals who develop rounded breasts at puberty and keep them whether or not they are producing milk… In humans, breast size is not related to the amount or quality of milk that the breast produces” (p187).[7]

Instead, human breasts are, save during pregnancy and lactation, composed predominantly, not of milk, but of fat. 

This is in stark contrast to the situation among other mammals, who develop breasts only during pregnancy. 

“Breasts are not sex symbols to other mammals, anything but, since they indicate a pregnant or lactating and infertile female. To chimps, gorillas and orangutans, breasts are sexual turn-offs” (p187). 

Why then does sexual selection seem, at least on this evidence, to have acted more strongly on women than men? 

Richard Dawkins, in The Selfish Gene (which I have reviewed here), was among the first to allude to this anomaly, lamenting: 

“What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (The Selfish Gene: p165). 

Yet this is surely not the case with regard to casual sex (i.e. hook-ups and one-night stands). Here, it is very much men who ardently pursue and women who are sought after. 

For example, in one study on a university campus, 72% of male students agreed to go to bed with a female stranger who propositioned them to this effect, yet not a single one of the 96 females approached agreed to the same request from a male stranger (Clark and Hatfield 1989). 

(What percentage of the students sued the university for sexual harassment was not revealed.) 

Indeed, patterns of everything from prostitution to pornography consumption confirm this – see The Evolution of Human Sexuality (which I have reviewed here). 

Yet humans are unusual among mammals in also forming long-term pair-bonds where male parental investment is the norm. Here, men have every incentive to be as selective as females in their choice of partner. 

In particular, in Western societies practising what Richard Alexander called socially-imposed monogamy (i.e. where there exist large differentials in male resource holdings, but polygynous marriage is unlawful), competition among women for exclusive rights to resource-abundant alpha males may be intense (Gaulin and Boster 1990). 

In short, the advantage to a woman in becoming the sole wife of a multi-millionaire is substantial. 

This, then, may explain the unusual intensity of sexual selection among human females. 

Why, though, is there not evidence of similar sexual selection operating among males? 

Perhaps the answer is that, since, in most cultures, arranged marriages are the norm, female choice actually played little role in human evolution. 

As Darwin himself observed in The Descent of Man, by way of explanation as to why intersexual selection seems, unlike among most other species, to have operated more strongly on human females than on males:

“Man is more powerful in body and mind than woman, and in the savage state he keeps her in a far more abject state of bondage than does the male of any other animal; therefore it is not surprising that he should have gained the power of selection” (The Descent of Man).

Instead, male mating success may have depended less upon what Darwin called intersexual selection and more upon intrasexual selection – i.e. less upon female choice and more upon male-male fighting ability (see Puts 2010). 

Male Attractiveness and Fighting Ability 

Paradoxically, this is reflected even in the very traits that women find attractive in men. 

Thus, although Etcoff’s book is titled ‘Survival of the Prettiest’, and ‘pretty’ is an adjective usually applied to women and, when applied to men, is, perhaps tellingly, rarely a compliment, Etcoff does discuss male attractiveness too.  

However, Etcoff acknowledges that male attractiveness is a more complex matter than female attractiveness: 

“We have a clearer idea of what is going on with female beauty. A handsome male turns out to be a bit harder to describe, although people reach consensus almost as easily when they see him” (p155).[8]

Yet what is notable about the factors that Etcoff describes as attractive among men is that they all seem to be related to fighting ability. 

This is most obviously true of height (p172-176) and muscularity (p176-180). 

Indeed, in a section titled “No Pecs, No Sex”, though she focuses on the role of pectoral muscles in determining attractiveness, Etcoff nevertheless acknowledges: 

“Pectoral muscles are the human male’s antlers. Their weapons of war” (p177). 

Thus, height and muscularity have obvious functional utility. 

This is in stark contrast to traits such as the peacock’s tail, which are often a positive handicap to their owner. Indeed, one influential theory of sexual selection, the handicap principle, contends that such traits evolved as sexually-selected fitness indicators precisely because they represent a handicap: only a genetically superior male is capable of bearing the burden of such an unwieldy ornament, and hence its possession is paradoxically an honest signal of health. 

Yet, if men’s bodies have evolved more for fighting than attracting mates, the same is perhaps less obviously true of their faces. 

Thus, anthropologist David Puts proposes: 

“Even [male] facial structure may be designed for fighting: heavy brow ridges protect eyes from blows, and robust mandibles lessen the risk of catastrophic jaw fractures” (Puts 2010: p168). 

Indeed, looking at the facial features of a highly dominant, masculine male face, like that of Mike Tyson, for example, one gets the distinct impression that, if you were foolish enough to try punching it, it would likely do more damage to your hand than to his face. 

Thus, if some faces are, as cliché contends, highly ‘punchable’, then others are presumably at the opposite end of this spectrum. 

This also explains some male secondary sexual characteristics that otherwise seem anomalous, for example, beards. These have actually been found in some studies “to decrease attractiveness to women, yet have strong positive effects on men’s appearance of dominance” (Puts 2010: p166). 

David Puts concludes: 

“Men’s traits look designed to make men appear threatening, or enable them to inflict real harm. Men’s beards and deep voices seem designed specifically to increase apparent size and dominance” (Puts 2010: p168). 

Interestingly, Etcoff herself anticipates this theory, writing: 

“Beautiful ornaments [in males] develop not just to charm the opposite sex with bright colors and lovely songs, but to intimidate rivals and win the intrasex competition—think of huge antlers. When evolutionists talk about the beauty of human males, they often refer more to their weapons of war than their charms, to their antlers rather than their bright colors. In other words, male beauty is thought to have evolved at least partly in response to male appraisal” (p74). 

Of course, these same traits are also often attractive to females. 

After all, if a tall, muscular man has higher reproductive success because he is better at fighting, then it pays women to preferentially mate with tall, muscular men, so that their male offspring will inherit these traits and hence themselves enjoy high reproductive success, spreading the woman’s own genes by piggybacking on the superior male’s genes.  

This is a version of sexy son theory.

In addition, males with fighting prowess are better able to protect and provision their mates. 

However, this attractiveness to females is obviously secondary to the traits’ primary role in male-male fighting. 

Moreover, Etcoff admits, highly masculine faces are not always attractive. 

Thus, unlike the “supernormal” or “hyperfeminine” female faces that men find most attractive in women, women rated “hypermasculine” male faces as less attractive (p158). This, she speculates, is because such men are perceived as overaggressive and unlikely to invest in offspring.

Whether such men are indeed less willing to invest in offspring Etcoff does not discuss, and there appears to be little evidence on the topic. However, the association of testosterone with both physiological and psychological masculinization suggests that the hypothesis is at least plausible.

Etcoff concludes: 

“For men, the trick is to look masculine but not exaggeratedly masculine, which results in a ‘Neanderthal’ look suggesting coldness or cruelty” (p159). 

Examples of males with overly masculine faces are perhaps certain boxers, who tend to have highly masculine facial morphology (e.g. heavy brow ridges, deep-set eyes, wide muscular jaws), but are rarely described as handsome. 

For example, I doubt anyone would ever call Mike Tyson handsome. But, then, no one would ever call him exactly ugly either – at least not to his face. 

An extreme example might be the Russian boxer Nikolai Valuev, whose extreme Neanderthal-like physiognomy was much remarked upon. 

Another example that springs to mind is the footballer Wayne Rooney (also, perhaps not coincidentally, said to have been a talented boxer), who, when he first became famous, was immediately tagged by newspapers and comedians as ugly, despite – or indeed because of – his highly masculine, indeed thuggish, facial physiognomy. 

Likewise, Etcoff reports that large eyes are perceived as attractive in men, despite being a neotenous trait, associated both with infants and with female beauty (p158). 

This odd finding Etcoff attributes to the fact that large eyes, as an infantile trait, evoke women’s nurturance, a trait that evolved in the context of parental investment rather than mate choice.

Yet this is contrary to the general principle in evolutionary psychology of the modularity of mind and the domain-specificity of psychological adaptations, whereby it is assumed that psychological adaptations for mate choice and for parental investment represent domain-specific modules with little or no overlap. 

Clearly, for psychological adaptations in one of these domains to be applied in the other would result in highly maladaptive behaviours, such as sexual attraction to infants and to your own close biological relatives.[9]

In addition to being more complex and less easy to make sense of than female beauty, male physical attractiveness is also of less importance in determining female mate choice than is female beauty in male mate choice.

In particular, Etcoff acknowledges that male status often trumps handsomeness. Thus, she quotes a delightfully cynical, if not especially poetic, line from the ancient Roman poet Ovid, who wrote: 

“Girls praise a poem, but go for expensive presents. Any illiterate oaf can catch their eye, provided he’s rich” (quoted: p75). 

A perhaps more memorable formulation of the same idea is quoted on the same page from a less illustrious source, namely boxing promoter, numbers racketeer and convicted killer Don King, speaking on a subject I have already discussed, the handsomeness (or not) of Mike Tyson: 

“Any man with forty two million looks exactly like Clark Gable” (quoted: p75). 

Endnotes

[1] I perhaps belabor this rather obvious point only because one prominent evolutionary psychologist, Satoshi Kanazawa, argues that, since many aspects of beauty standards are cross-culturally universal, beauty standards are not ‘in the eye of the beholder’. I agree with Kanazawa on the substantive issue that beauty standards are indeed mostly cross-culturally universal among humans (albeit not entirely so). However, I nevertheless argue, perhaps somewhat pedantically, that beauty remains strictly in the ‘eye of the beholder’; it is simply that the ‘eye of the beholder’ (and the brain to which it is attached) has been shaped by a process of natural selection so as to make different humans share the same beauty standards. 

[2] While Jared Diamond has indeed made many original contributions to many fields, this idea does not in fact originate with him, even though Etcoff oddly cites him as a source. Indeed, as far as I am aware, it is not even especially associated with Diamond. Instead, it seems actually to originate with another, lesser known, but arguably even more brilliant evolutionary biologist, namely George C. Williams (Williams 1957). 

[3] Actually, pregnancy rates peak surprisingly young, perhaps even disturbingly young, with girls in their mid- to late-teens being most likely to become pregnant from any single act of sexual intercourse, all else being equal. However, the high pregnancy rates of teenage girls are said to be partially offset by their greater risk of birth complications. Therefore, female fertility is said to peak among women in their early- to mid-twenties.

[4] This Kenrick and Keefe inferred from, among other evidence, an analysis of lonely hearts advertisements, wherein, although the age of the female sexual/romantic partner sought was related to the advertised age of the man placing the ad (which Kenrick and Keefe inferred was a reflection of the fact that a man’s own age delimited the age-range of the sexual partners whom he would be able to attract, and whom it would be socially acceptable for him to seek out), nevertheless, the older the man, the greater the age-difference he sought in a partner. In addition, they reported evidence from surveys suggesting that, in contrast to older men, younger teenage boys, in an ideal world, actually preferred somewhat older sexual partners, suggesting that the ideal age of sexual partner for males of any age was around eighteen years (Kenrick & Keefe 1992).

[5] Etcoff also does not discuss whether the same is true of exceptionally handsome men – i.e. whether exceptionally handsome men, like beautiful women, also have problems maintaining same-sex friendships. I suspect that this is not so, since male status and self-esteem are not usually based on handsomeness as such – though they may be based on things related to handsomeness, such as height, athleticism, earnings, and perceived ‘success with women’. Interestingly, however, French novelist Michel Houellebecq argues otherwise in his novel, Whatever, in which, after describing the jealousy of one of the main characters, the short, ugly Raphael Tisserand, towards a particularly handsome male colleague, he writes: 

“Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63).

[6] Thus, in other, non-human species, behaviour is often subject to sexual selection, as in, for example, mating displays, or the remarkable, elaborate and often beautiful, but non-functional, bowers built by male bowerbirds, which Geoffrey Miller sees as analogous to human art. 

[7] An alternative theory for the evolution of human breasts is that they evolved, not as a sexually selected ornament, but rather as a storehouse of nutrients, analogous to the camel’s hump, upon which women can draw during pregnancy. On this view, the sexual dimorphism of their presentation (i.e. the fact that, although men do have breasts, they are usually much less developed than those of women) reflects, not sexual selection, but rather the caloric demands of pregnancy. 
However, these two alternative hypotheses are not mutually incompatible. On the contrary, they may be mutually reinforcing. Thus, Etcoff herself mentions the possibility that breasts are attractive precisely because: 

“Breasts honestly advertise the presence of fat reserves needed to sustain a pregnancy” (p178). 

On this view, men see fatty breasts as attractive in a sex partner precisely because only women with sufficient reserves of fat to grow large breasts are likely to be capable of successfully gestating an infant for nine months. 

[8] Personally, as a heterosexual male, I have always had difficulty recognizing ‘handsomeness’ in men, and I found this part of Etcoff’s book especially interesting for this reason. In my defence, this is, I suspect, partly because many rich and famous male celebrities are celebrated as ‘sex symbols’ and described as ‘handsome’ even though their status as ‘sex symbols’ owes more to the fact that they are rich and famous than to their actual looks. Thus, male celebrities sometimes become sex symbols despite their looks, rather than because of them. Many famous rock stars, for example, are not especially handsome but nevertheless succeed in becoming highly promiscuous and much sought after by women and girls as sexual and romantic partners. In contrast, men did not suddenly start idealizing fat or physically unattractive female celebrities as sexy and beautiful simply because they were rich and famous.
Add to this the fact that much of what passes for good looks in both sexes is, ironically, normalness – i.e. a lack of abnormalities and averageness – and identifying which men women consider ‘handsome’ had, before reading Etcoff’s book, always escaped me.
However, Etcoff, for her part, might well call me deluded. Men, she reports, only claim they cannot tell which men are handsome and which are not, perhaps to avoid being accused of homosexuality: 

“Although men think they cannot judge another man’s beauty, they agree among themselves and with women about which men are the handsomest” (p138). 

Nevertheless, there is indeed some evidence that judging male handsomeness is not as clear-cut as Etcoff seems to suggest. Thus, it has been found, not only that men claim to have difficulty telling handsome men from ugly men, but also that women are more likely than men to disagree among themselves about the physical attractiveness of members of the opposite sex (Wood & Brumbaugh 2009; Wake Forest University 2009). 
Indeed, not only do women not always agree with one another regarding the attractiveness of men, sometimes they can’t even agree with themselves. Thus, Etcoff reports: 

“A woman makes her evaluations of men more slowly, and if another woman offers a different opinion, she may change her mind” (p76). 

This indecisiveness, for Etcoff, actually makes good evolutionary sense:

“If women take a second look, compare notes with other women, or change their minds after more thought, it is not out of indecisiveness but out of wisdom. Mate choice is not just about fertility—most men are fertile most or all of their lives—but about finding a helpmate to bring up the baby” (p77). 

Another possible reason why women may consult other women as to whether a given man is attractive or not is sexy son theory.
On this view, it pays women to mate with men who are perceived as attractive by other women, because any offspring they bear by such men will likely inherit the very traits that made the father attractive, and hence will themselves be attractive to women and thus successful in spreading the mother’s own genes to subsequent generations. 
In other words, being attractive to other women is itself an attractive trait in a male. However, sexy son theory is not discussed by Etcoff.

[9] Another study discussed by Etcoff also reported anomalous results, finding that women actually preferred somewhat feminized male faces over both masculinized and average male faces (Perrett et al 1998). However, Etcoff cautions that: 

“The Perrett study is the only empirical evidence to date that some degree of feminization may be attractive in a man’s face” (p159). 

Other studies concur that male faces that are somewhat, but not excessively, masculinized as compared to the average male face are preferred by women. 
However, one study published just after the first edition of ‘Survival of the Prettiest’ was written holds out the possibility of reconciling these conflicting findings. This study reported cyclical changes in female preferences, with women preferring more masculinized faces only when in the most fertile phase of their cycle, and at other times preferring more feminine features (Penton-Voak & Perrett 2000). 
This, together with other evidence, has been controversially interpreted as suggesting that human females practice a so-called dual mating strategy, preferring males with more feminine faces, supposedly a marker for a greater willingness to invest in offspring, as social partners, while surreptitiously attempting to cuckold these ‘beta providers’ with DNA from high-T alpha males, by preferentially mating with the latter when they are most likely to be ovulating (see also Penton-Voak et al 1999; Bellis & Baker 1990). 
However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology.
While the intensity of women’s sex drive does indeed seem to fluctuate cyclically, the evidence for more fine-grained changes in female mate preferences should be treated with caution. 

References 

Bateman (1948) Intra-sexual selection in Drosophila. Heredity 2(3): 349–368. 
Bellis & Baker (1990) Do females promote sperm competition? Data for humans. Animal Behaviour 40: 997–999. 
Clark & Hatfield (1989) Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality 2(1): 39–55. 
Doyle & Pazhoohi (2012) Natural and augmented breasts: Is what is not natural most attractive? Human Ethology Bulletin 27(4): 4–14. 
Gaulin & Boster (1990) Dowry as female competition. American Anthropologist 92(4): 994–1005. 
Gildersleeve et al (2014) Do women’s mate preferences change across the ovulatory cycle? A meta-analytic review. Psychological Bulletin 140(5): 1205–1259. 
Hamermesh & Biddle (1994) Beauty and the labor market. American Economic Review 84(5): 1174–1194.
Jones (1995) Sexual selection, physical attractiveness, and facial neoteny: Cross-cultural evidence and implications. Current Anthropology 36(5): 723–748. 
Kenrick & Keefe (1992) Age preferences in mates reflect sex differences in mating strategies. Behavioral and Brain Sciences 15(1): 75–133. 
Orians & Heerwagen (1992) Evolved responses to landscapes. In Barkow, Cosmides & Tooby (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 555–579). Oxford University Press. 
Penton-Voak et al (1999) Menstrual cycle alters face preferences. Nature 399: 741–742. 
Penton-Voak & Perrett (2000) Female preference for male faces changes cyclically: Further evidence. Evolution and Human Behavior 21(1): 39–48. 
Perrett et al (1998) Effects of sexual dimorphism on facial attractiveness. Nature 394(6696): 884–887. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans. Evolution and Human Behavior 31(3): 157–175. 
Trivers (1972) Parental investment and sexual selection. In Campbell (Ed.), Sexual Selection and the Descent of Man (pp. 136–179). Chicago: Aldine. 
Udry & Eckland (1984) Benefits of being attractive: Differential payoffs for men and women. Psychological Reports 54(1): 47–56.
Wake Forest University (2009) Rating attractiveness: Consensus among men, not women, study finds. ScienceDaily, 27 June 2009. 
Williams (1957) Pleiotropy, natural selection, and the evolution of senescence. Evolution 11(4): 398–411. 
Wood & Brumbaugh (2009) Using revealed mate preferences to evaluate market force and differential preference explanations for mate selection. Journal of Personality and Social Psychology 96(6): 1226–1244.
Wood et al (2014) Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review 6(3): 229–249.  

Selwyn Raab’s ‘Five Families’: A History of the New York Mafia, Heavily Slanted Towards Recent Times

Selwyn Raab, Five Families: The Rise, Decline and Resurgence of America’s Most Powerful Mafia Empires (London: Robson Books, 2006) 

With Italian-American organized crime now surely in terminal decline, the time is ripe for a definitive history of the New York Mafia. Unfortunately, Selwyn Raab’s ‘Five Families: The Rise, Decline, and Resurgence of America’s Most Powerful Mafia Empires’ is not it.[1]

Focus on Late-Twentieth Century

Its first failing as a history of the New York Mafia is that, despite its length, the book gives only cursory coverage to the Mafia’s early history. 

Instead, it is heavily weighted towards the recent history of the five families. 

This is perhaps unsurprising. After all, the author, Selwyn Raab, is, by background, a journalist, not a historian. 

Indeed, it is surely no coincidence that Raab’s history only starts to become in-depth at about the time he began covering the activities of the New York mob in real-time, as a reporter for The New York Times, in 1974.

To give an idea of this bias, I will cite page numbers. 

The book comprises over 700 pages, plus title pages, ‘Prologue’, ‘Introduction’, ‘Afterword’, ‘Epilogue’, two appendices, ‘Bibliography’ and ‘Index’, themselves comprising a further 100 or so pages. 

The first two chapters are introductory, and mostly cite examples of Mafia activities from the mid- to late twentieth century. 

The chronological narrative begins in Chapter 3, titled ‘Roots’, which purports to cover both the origin of the New York Mafia and its prehistory. 

In doing so, Raab repeats uncritically the Sicilian Mafia’s own romantic foundation myth, claiming that the Mafia began during Sicily’s long history of foreign occupation as a form of “self-preservation against perceived corrupt oppressors” (p14). 

Indeed, even his supposedly “less romantic and more likely” etymology for the word ‘Mafia’ is that it derives from “a combined Sicilian-Arabic slang expression that means acting as a protector against the arrogance of the powerful” (p14). 

Actually, according to historian John Dickie, rather than protecting the common people against corrupt oppression by outsiders, the Sicilian Mafia was itself corrupt, exploitative and oppressive from the very beginning (see Dickie’s books, Cosa Nostra and Blood Brotherhoods). 

Raab is vague on the precise origins of the Sicilian Mafia, but does insist that mafia cosche evolved “over hundreds of years” (p14).

This is, again, likely a Mafia myth. The Mafia, like the Freemasons (from whom its initiation rituals are, at least according to Dickie, likely borrowed), exaggerates its age to enhance its venerability and mystique.[2]

Of course, Raab’s text is a history of the New York Mafia. One can therefore overlook his inadequate treatment of its Sicilian prehistory. 

Unfortunately, his treatment of early Mafia activity in New York itself is barely better. 

Early turn-of-the-century New York Mafiosi like Giuseppe Morello and Lupo the Wolf are not even mentioned. Nor are their successors, the Terranova brothers. Neither is there any mention of the barrel murders, the counterfeiting trial or the Mafia-Camorra War. 

Even their nemesis, Italian-born NYPD officer, Joe Petrosino, murdered in Sicily while investigating the backgrounds of transplanted Mafiosi with a view to deportation, merits only a cursory two and a bit pages – something almost as derisory as the “bare, benchless concrete slab serv[ing] as a road divider and pedestrian-safety island” that ostensibly commemorates him in Lower Manhattan today (p19-21). 

There are just nineteen pages in Raab’s chapter on the New York Mafia’s ‘Roots’. The next chapter is titled ‘The Castellammarese War’, and focuses upon the gang war of that name, which began in 1930, although the chapter begins with a discussion of the effects of the national Prohibition law that came into force in 1920. 

Therefore, since the Morello Family seems to have had its roots in the 1890s, that’s over twenty years of New York Mafia history (not to mention, according to Raab, several centuries of Sicilian Mafia history) passed over in less than twenty pages. 

Readers interested in the origins of the five families, and indeed how there came to be five families in the first place, should look elsewhere. I would recommend instead Mike Dash’s The First Family, which uncovers the formerly forgotten history of the first New York Mafia family, the Morello family, the ancestor of today’s Genovese Family, arguably still the most powerful mafia family in America to this day. 

Although I have yet to read it, C. Alexander Hortis’s The Mob and the City also comes highly recommended in many quarters. 

Whereas Raab’s account of the first few decades of American Mafia history is particularly inadequate, his coverage of the next few decades of organized crime history is barely better. 

Here, we get the familiar potted history of the New York Mafia, with each of the usual suspects – Luciano, Anastasia, Costello, Genovese – successively assuming center stage. 

Moreover, despite his ostensible focus on Italian-American organized crime, non-Italians are not arbitrarily excluded from Raab’s account, unlike in the Mafia itself (which, though it has survived countless RICO prosecutions, would surely never survive a class-action lawsuit for racial discrimination).

On the contrary, each of them makes their usual, almost obligatory, cameo—Bugsy Siegel assassinated in the Los Angeles home of Virginia Hill, Abe ‘Kid Twist’ Reles ‘accidentally’ falling from a sixth-floor window, and, of course, the shadowy and much-mythologized Meyer Lansky always lurking in the background like a familiar anti-Semitic conspiracy theory. 

It is not that Raab actually misses anything out, but rather that he doesn’t really add much. 

Instead, we get another regurgitation of the familiar Mafia history with which anyone who has had the misfortune of reading any of the countless earlier popular histories of the American Mafia will be all too familiar. 

Then, after just 100 pages, we are already at the Apalachin meeting in 1957. 

That’s over fifty years of twentieth-century American Mafia history condensed into less than 100 pages. More to the point, it’s over half the entire period of American Mafia history covered by Raab’s book (which was published in 2005) dealt with in less than a seventh of the total text. 

After a brief diversion, namely two chapters discussing supposed Mafia involvement in the Kennedy assassination, we are into the 1970s, and now Raab’s coverage suddenly becomes in-depth and authoritative. 

Is All Publicity Bad Publicity?

The period of New York Mafia history upon which Raab’s text focusses (namely, from the 1970s until the turn of the century) may indeed have marked the high point of Mafia mystique, with blockbuster movies like the overrated ‘Godfather’ trilogy glamourizing Italian-American organized crime like never before.

However, it arguably also marked the beginning of the New York Mafia’s decline.[3]

Indeed, the Mafia’s notoriety during this period may even have been a factor in its decline. After all, publicity and media infamy are, for a criminal organization, at best a mixed blessing.  

True, a media-cultivated aura of power and untouchability may discourage victims from running to the police, and also deter rival criminals from attempting to challenge mafia hegemony. 

However, criminal conspiracies operate best when they are outside the public eye, let alone the scrutinizing glare of journalists, movie-makers, government and law enforcement. 

There is, after all, a reason why the Mafia is a secret society whose very existence is, at least in theory, a closely-guarded secret.

It is no accident, then, that those crime bosses who openly courted the limelight and revelled in their own notoriety did not enjoy long and successful careers, Al Capone and John Gotti representing the two best-known American cases of organized crime bosses who made the mistake of courting media attention.[4]

Thus, John Gotti inevitably takes up more than his share of chapters in Raab’s book, just as, during his lifetime, he enjoyed more than his share of headlines in Raab’s own New York Times. In short, the so-called ‘Dapper Don’ invariably made for good copy. 

However, courting the media is rarely a sensible way to run a crime empire. 

A famous adage of the marketing industry supposedly has it that all publicity is good publicity.

This may be true, or at least close to being true, in, say, the realm of rock or rap music, where controversy is often a principal selling point.

However, in the world of organized crime, almost the exact opposite could be said to be true. 

Thus, much of the press coverage of Gotti may indeed have been flattering, even fawning, or at least perceived by Gotti as such. Certainly he himself often seemed to revel in his own infamy and also became something of a folk hero to some sections of the public. 

However, the more of a folk hero he became by thumbing his nose at the authorities, the greater the threat he posed to those same authorities.

The result was that, although the press initially dubbed him ‘The Teflon Don’, because, supposedly, no charges would ever stick, Gotti actually enjoyed less than a decade of freedom as Gambino family boss before being convicted and imprisoned. 

By courting the limelight, he also invited the attention of, not just the media, but also of law enforcement and thereby ensured that his fifteen minutes of fame would be followed by a lifetime of incarceration. 

A rather more sensible approach was perhaps that adopted by a lesser-known contemporary of, and rival to, Gotti, namely Genovese family boss Vincent ‘The Chin’ Gigante, who, far from courting publicity like Gotti, let ‘front boss’ Fat Tony Salerno take the bulk of law enforcement heat, while himself attempting, initially quite successfully, to pass under the radar. 

While fictional Mafia boss Tony Soprano spent the bulk of the television series of which he was the protagonist attempting to conceal his visits to a psychiatrist from his Mafia colleagues, Gigante made sure his own (supposed) mental health difficulties were as public as possible, feigning mental illness for decades in order to avoid law enforcement attention. 

Nicknamed ‘The Oddfather’ by the press for his bizarre antics, he was regularly pictured walking the streets of Greenwich Village in a bathrobe and was said to regularly check into a local psychiatric hospital whenever law enforcement heat was getting too much.[5]

Wary of phone taps and bugs, Gigante also insisted that other members of the crime family of which he was head never mention him by name, but rather, if they had to refer to him, simply to point towards their chin or curl their fingers into the shape of a letter ‘c’. 

These precautions had law enforcement fooled for years, and it was long believed in law enforcement circles that Gigante was retired and the real boss was indeed front boss Tony Salerno. 

Largely as a result, Gigante enjoyed at least a decade and a half as Genovese boss before he too belatedly joined his erstwhile rival John Gotti behind bars. 

A History of the Mafia – or of Law Enforcement Efforts to Destroy Them?

Of course, the secrecy with which mafiosi like Gigante took pains to veil their affairs presents a challenge, not just to law enforcement, but also to the historian. 

After all, criminals are, almost by definition, dishonest.[6]

Even those mafiosi who did break ranks, and the code of omertà, by providing testimony to the authorities, or sometimes publishing memoirs and giving interviews on television (or, most recently, even starting their own YouTube channels), are notoriously unreliable sources of information, being prone both to exaggerating their own role and importance in events and (rather contradictorily) to minimizing their role in any serious prosecutable offences for which they have yet to serve time. 

Perhaps a more trustworthy source of information—or so one would hope—is law enforcement.  

Yet, relying on the latter as a source, Raab’s account inevitably ends up being as much a history of law enforcement efforts to bring the Mafiosi to justice as it is of the Mafia itself. 

Thus, for example, a whole chapter, entitled ‘The Birth of RICO’, is devoted to the development and passage into law of the Racketeer Influenced and Corrupt Organizations Act, or RICO Act, of 1970. 

Indeed, amusingly, but perhaps not especially plausibly, Raab even suggests that the name of this act, or rather the acronym by which the Act, and prosecutions under it, became known, may have been inspired by the once-famous final line of the seminal 1930s Warner Brothers gangster movie, Little Caesar, Raab reporting that G. Robert Blakey, the lawyer largely responsible for the drafting of the Act: 

Refuses to explain the reason for the RICO acronym. But he is a crime-film buff and admits that one of his favorite movies is Little Caesar, a 1931 production loosely modeled on Al Capone’s life… Dying in an alley after a gun battle with the police, Little Caesar [Caesar Enrico ‘Rico’ Bandello] gasps one of Hollywood’s famous closing lines—also Blakey’s implied message to the Mob: ‘Mother of Mercy—is this the end of Rico?’” (p177). 

Of course, the passage into law of the RICO statute, as it turned out, was indeed a seminal event in American Mafia history, facilitating, as it did, the successful prosecution and incarceration of countless Mafia bosses and other organized crime figures.

Nevertheless, in this chapter, and indeed elsewhere in the book, the five families themselves inevitably fade very much into the background, and Raab concentrates instead on the tactics of and conflicts among law enforcement themselves. 

Yet, in Raab’s defence, such material is often no less interesting than the stories of mafiosi themselves. 

Indeed, one thing to emerge from portions of Raab’s narrative is that conflicts and turf wars between different branches, levels and layers of law enforcement—local, state and federal—were often as fiercely, if less bloodily, fought over as were territorial disputes among mafiosi themselves. 

After all, mafiosi rarely take the trouble to commit crimes only within the jurisdiction of a single police precinct. Therefore, the jurisdictions of different branches and levels of law enforcement frequently overlapped.  

Yet such was the fear of police corruption and Mafia infiltration that different branches of law enforcement rarely trusted one another enough to share intelligence, lest a confidential source, informant, undercover agent, phone tap, bug or wire be thereby compromised, let alone to allow a rival branch of law enforcement to take the lion’s share of the credit, and the newspaper headlines, for bringing a high-profile Mafia scalp to justice. 

A ‘Pax Mafiosa’ in New York?

In contrast, territorial disputes between crime families actually seem to have been surprisingly muted, and were usually ironed out through ‘sit-downs’ (i.e. effectively an appeal to arbitration by a higher authority) rather than resort to violence. 

Thus, despite its familiarity as a formulaic cliché of mafia movies from The Godfather onwards, there appears never to have actually been another war between rival Mafia families in New York after the Castellammarese War ended in 1931. 

Mafia wars did occasionally occur—e.g. the Banana War and the First, Second and Third Colombo Wars. However, these were all intra-family affairs, involving control over a single family, rather than conflicts between different families, though families did sometimes attempt to sponsor ‘regime change’ in other families.[7]

The Castellammarese War therefore stands as the New York Mafia equivalent of the First World War, with each of the nascent five family factions joined together in two grand coalitions, just as, before and during World War One, the great powers (and a host of lesser powers) joined together in two grand alliances. 

However, whereas the First World War only promised to be the war to end all wars, the Castallammarese War has some claim to actually delivering on this promise, with the independent sovereignty of each of the five families thenceforth mutually respected in a sort of Westphalian Peace, or Pax Mafiosa that lasted for the better part of a century. 

In The Godfather (the novel, not the film), Michael Corleone quotes his father as claiming that, if “the [five] Families had been running the State Department there would never have been World War II”, because the rival powers would have been smart enough to iron out their problems without resort to unnecessary bloodshed and economic expense. 

On the evidence of New York Mafia history as recounted by Raab in ‘Five Families’, Don Corleone may, perhaps surprisingly, have had a point. 

Perhaps, then, our world leaders and statesmen could indeed learn something from lowlife criminals about the importance of avoiding the unnecessary bloodshed and expense of war. 

Honor Among Thieves – and Among Men of Honor? 

Another general conclusion that can be drawn from Raab’s history is that, if there is, as cliché contends, but little honor among thieves, there is seemingly scarcely any more honor even among self-styled ‘men of honor’. 

This is even true of the most influential figure in American Mafia history, Charles ‘Lucky’ Luciano, described by Raab in one photo caption as “the visionary godfather and designer of the modern Mafia”, and elsewhere as “the Mafia’s visionary criminal genius”, who is even credited, in some tellings, with creating the Commission and even the five families themselves.[8]

Yet Luciano was a serial traitor. 

First, he betrayed his ostensible ally, Joe ‘The Boss’ Masseria, in the Castellammarese War, setting him up for assassination by his rival Salvatore Maranzano. Then, just a few months later, he betrayed and arranged the murder of Maranzano as well, leaving Luciano himself free to take the position of, if not capo di tutti capi, then at least the most powerful mafioso in New York, and probably in America, if not the world. 

In this series of betrayals, Luciano set the pattern for the twentieth century mob. 

The key is to make sure that you betray what turns out to be the losing side, if only on account of your betrayal.

The powerful Gambino crime family provides a particularly good illustration of this. Indeed, for much of the twentieth century, staging an internal coup or arranging the assassination of the current incumbent seems to have been almost the accepted means of securing the succession.

Thus, John Gotti famously became boss of the family by arranging the murder of his own boss, Paul Castellano, just as Castellano’s own predecessor, the eponymous Carlo Gambino, had himself allegedly been complicit in the murder of his own former boss, Albert Anastasia, who was himself the main suspect in the murder of his own predecessor, Vincent Mangano. 

However, such treachery was by no means restricted to the Gambinos. On the contrary, Joe Colombo became boss of the crime family now renamed in his honor by betraying his own boss, Joe Magliocco (and Bonanno boss Joe Bonanno), to the bosses of the three other families whom he had been ordered to kill. 

Meanwhile, one of Colombo’s successors, Carmine ‘The Snake’ Persico, had also been at war with his own boss, Joe Profaci, in the First Colombo War, but then, in a further betrayal, switched allegiances, setting up his former allies, the Gallo brothers, for assassination by the Profaci leadership. For his trouble, Persico earned himself the perhaps unflattering sobriquet of ‘The Snake’, but also, ultimately, the leadership of the crime family.

As for Luciano himself, not only was he a serial traitor, he was also guilty of what was, in Mafia eyes, an even more egregious and unpardonable transgression—namely, he was a police informer. 

Thus, during his trial for prostitution offences, Raab reveals: 

The most embarrassing moment for the proud Mafia don was Dewey’s disclosure that in 1923, when he was twenty-five, Luciano had evaded a narcotics arrest by informing on a dealer with a larger cache of drugs. 
‘You’re just a stool pigeon,’ Dewey belittled him. ‘Isn’t that it?’ 
‘I told them what I knew,’ a downcast Luciano replied” (p55). 

In this, Luciano was again to set a pattern that, somewhat later in the century, many other mafiosi would eagerly follow. 

Indeed, by the end of the century, the fabled Mafia code of omertà seems to have been, rather like its earlier supposed ban on drug-dealing, almost as often honored in the breach as actually complied with, at least for mafiosi otherwise facing long spells of incarceration with little prospect of release.

At least since Abe ‘Kid Twist’ Reles – who, being non-Italian, was not, of course, a ‘made man’, and who, at any rate, died under mysterious circumstances – none, to my recollection, has ever paid the ultimate price for his betrayal. 

Instead, the main consequence of their breaking the code of omertà seems to have been reduced sentences, government protection under the witness protection program and a premature end to their Mafia careers.

However, an end to their mafia careers rarely meant an end to their criminal careers, and few turncoat mafiosi seem to have gone straight, let alone been genuinely repentant.

The most famous case is that of Gambino underboss, and Gotti nemesis, Sammy ‘The Bull’ Gravano, then the highest-ranking New York mafioso ever to become a cooperating witness, who helped put John Gotti and a score of other leading mafiosi behind bars with his testimony.

In return for this testimony, Gravano was to serve less than five years in prison, despite admitting involvement in as many as nineteen murders.

In defence of this exceptionally lenient sentence, Leo Glasser, the judge responsible for sentencing both Gravano and Gotti, naïvely insisted that Gravano’s craven treachery was “the bravest thing I have ever seen” and declared “there has never been a defendant of his stature in organized crime who has made the leap he has made from one social planet to another” (p449). 

However, just a few years after his release, Gravano was convicted of masterminding a multi-million-dollar ecstasy ring in Arizona, where the authorities had relocated him for his own protection. 

His status as a notorious mafia stoolie seems to have impeded his reentry into the crime world hardly at all. 

On the contrary, it seems to have been precisely his status as a famed former Gambino family underboss that recommended him to the starstruck young ecstasy trafficking crew who, having befriended his son, were only too happy to allow the infamous New York crime boss Sammy Gravano to assume leadership of the crime ring which they themselves had established and built up. 

By the end of the century, only the secretive and close-knit Bonanno Family, long the only New York family still to restrict membership to those of full-Sicilian (not just Southern Italian) ancestry, could brag that they were, perhaps for this reason, the only New York family never to have had a fully-inducted member become a cooperating government witness.  

Yet even this claim, though technically true, was largely disingenuous. 

Indeed, the Bonannos had actually been expelled from the Commission for reportedly being on the verge of inducting undercover FBI agent Joe Pistone (alias ‘Donnie Brasco’) into the family, just before his true identity was revealed by the authorities.

Nevertheless, this did not stop Bonanno boss Joe Massino:

Proudly inform[ing] the new soldiers of the family’s unique record among all of the nation’s borgatas as the only American clan that had never spawned a stool pigeon or cooperative government witness” (p640).

It is therefore somewhat ironic that, in 2004, it was Joe Massino himself who would become the first ever actual boss of a New York family to become a cooperating witness. 

Mafia Decline 

Besides its inadequate treatment of early New York Mafia history (see above), the other main reason that Raab’s ‘Five Families’ cannot be regarded as the definitive history of the New York Mafia is that Raab himself evidently doesn’t believe the story is over. On the contrary, in his subtitle, he predicts, and, in his Afterword, reports a ‘resurgence’.

The reason Raab wrongly predicts a Mafia revival is that he fails to understand the ultimate reason behind mafia malaise, attributing it primarily to law enforcement success: 

The combined federal and state campaigns were arguably the most successful anticrime expedition in American history. Over a span of two decades, twenty-four Mob families, once the best-organized and most affluent criminal associations in the nation, were virtually eliminated or seriously undermined” (p689). 

The real reason for Mafia decline is demographic. 

Italian-Americans no longer live in close-knit urban ghettos. Indeed, outside of Staten Island, few even live in New York City proper (i.e. the five boroughs). 

Italian Harlem has long since been transformed into Spanish Harlem and, beyond the tourist-trap Italian restaurants and the annual parade, there is now little of Italy left in what little remains of Manhattan’s Little Italy. 

Even Bensonhurst, perhaps the last neighborhood in New York to be strongly associated with Italian-Americans, was never really an urban ghetto, being neither deprived nor monoethnic, and is now majority nonwhite.[9]

Italian-Americans are now often middle-class, and the smart, ambitious ones aspire to be professionals and legitimate businessmen rather than criminals.

Indeed, I would argue that Italian-Americans no longer even still exist as a distinct demographic. They are now fully integrated into the American mainstream. 

Indeed, I suspect that, as with the infamous plastic paddy phenomenon with respect to Irish ancestry, few self-styled ‘Italian-Americans’ are even of 100% Italian ancestry. Thus, as far back as 1985, the New York Times reported: 

8 percent of Americans of Italian descent born before 1920 had mixed ancestry, but 70 percent of them born after 1970 were the children of intermarriage… Among Americans of Italian descent under the age of 30, 72 percent of men and 64 percent of women married someone with no Italian background” (Collins, The Family: A new look at intermarriage in the US, New York Times, Feb 11 1985). 

Thus, almost of necessity, the five families have long since relaxed their traditional requirement for inductees to be of full-Italian ancestry, since otherwise so few Americans would be eligible.

The Gambinos seem to have been the first to relax this requirement, inducting, and eventually promoting to acting-boss, John Gotti’s son, Gotti Junior, at the behest of his father, despite the (part-) Russian, or possibly Russian-Jewish, ancestry of his mother (p462). 

Recently, Raab reports, in an attempt to restore discipline, the earlier requirement of full-Italian ancestry has been reimposed.  

However, in the absence of a fresh infusion of zips fresh off the boat from Sicily (which Raab also anticipates: p703), this will only further dry up the supply of potential recruits, since so few native-born Americans now qualify as 100% Italian in ancestry.

Raab reports that the supposed Mafia revival has resulted from a reduction in FBI scrutiny, owing to: 

1) The perception that the Mafia threat is extinguished;

2) A change in FBI priorities post-9/11, with the FBI increasingly focusing on domestic terror at the expense of Mafia investigation.  

The lower public profile of the five families in recent years, Raab believes, only shows that Mafiosi have been slipping below the radar, quietly returning to their roots:  

Gambling and loan-sharking—the Mafia’s symbiotic bread-and-butter staples—appear to be unstoppable” (p692).[10]

But, in the aftermath of the Supreme Court decision in Murphy v. National Collegiate Athletic Association, sports betting is now legal throughout the New York Metropolitan area (i.e. in New York, New Jersey and Connecticut), and indeed most of the US, and one of these two staples is now likely off the menu for the foreseeable future. 

Moreover, the big money is increasingly in narcotics, and, as Raab concedes, in contrast with their success in taking down the Mafia, the FBI’s “more costly half-century campaign against the narcotics scourge remains a Sisyphean failure” (p689). 

This has meant that non-Italian criminals have increasingly taken over the drug-trade, especially Latin-American cartels, who have taken over importation and wholesale, and black and Latino street gangs, who control most distribution at the street-level. 

Yet, in truth, the replacement of Italian-Americans in organized crime is only the latest chapter in an ongoing process of ethnic succession—in New York, the Italians had themselves replaced Jewish-American crime gangs, who had dominated organized crime in New York in the early twentieth century into the Prohibition era, and who had themselves replaced the Irish gangs and political bosses of the nineteenth century (see Ianni, Black Mafia: Ethnic Succession in Organized Crime). 

The future likely belongs to blacks and Hispanics. The belief that the latter are somehow incapable of operating with the same level of organization and sophistication as the Mafia is, not only racist, but also likely wrong. 

Indeed, the fact that, prior to recent times, the Mafia in particular, not organized crime in general, was a major FBI priority may even have acted as a form of racially-based ‘affirmative action’ for black and Hispanic criminals. 

Raab may be right that the shift in FBI priorities post-9/11 has permitted a resurgence of organized crime. Indeed, in truth, organized crime, like the drug problem that fuels it, never really went away.

However, there is no reason to anticipate any resurgence will come with an Italian surname.

Endnotes

[1] Indeed, since Italian-American crime is in terminal decline – not just in New York – the time is also ripe for a definitive history of Italian-American organized crime in general. Of course, Raab’s book does not purport to be a history of Italian-American organized crime in general. It is a history only of the famed ‘five families’ operating in the New York metropolitan area, and hence only of Italian-American organized crime in this city. 
However, it does purport, in its subtitle, to be a history of ‘America’s Most Powerful Mafia Empires’. Probably the only Italian-American crime syndicate (or at least predominantly Italian-American crime syndicate) outside of New York which had a claim to qualifying as one of ‘America’s Most Powerful Mafia Empires’ during most of the twentieth century is the Chicago Outfit.
Of course, New York is a much bigger city than Chicago, especially today. However, for most of the twentieth century, until it was eclipsed by Los Angeles in the 1980s, Chicago was known as America’s ‘Second City’. Moreover, whereas in New York there were famously five families competing for power and influence, in Chicago, from the time of the St Valentine’s Day Massacre in 1929 until the late-twentieth century, the Chicago Outfit was said to enjoy almost unchallenged criminal hegemony.
Raab extends his gaze beyond the New York families to Mafia families based in other cities only during an extended, and probably misguided, discussion of the supposed role of the Mafia, in particular Florida boss, Santo Trafficante Jr., and New Orleans boss, Carlos ‘The Little Man’ Marcello, in the assassination of John F Kennedy.
However, even here, the Chicago Outfit receives short shrift, with infamous Chicago boss, Sam ‘Momo’ Giancana, receiving only passing mention by Raab, even though he features as prominently in JFK conspiracy theories as either Trafficante or Marcello.

[2] Of course, most mafiosi themselves likely believe this myth, just as many Freemasons probably themselves believe the exaggerated tales of their own venerability and spurious historical links to groups such as the Knights Templar. They are, in short, very much in thrall to their own mystique. This is among the reasons they are led to join the mafia in the first place. If claims of ancient origins were originally a myth cynically invented by mafiosi themselves, rather than presumed by outsiders, then modern mafiosi have certainly come to fall for their own propaganda.

[3] This is certainly the suggestion of Francis Ianni in Black Mafia: Ethnic Succession in Organized Crime, who argues that the American Mafia was already ceding power to black and Hispanic organized crime by at least the 1970s. This view seems to have some substance. 
Early to mid-twentieth century black Harlem crime boss Bumpy Johnson, for all his infamy, was said to be very much subservient to the Italian mafia families. Indeed, in the 1920s, a white criminal like Owney Madden was able to run the famous Cotton Club, initially with a whites-only door policy, in the heart of black Harlem.
However, by the 1970s, Harlem was mostly a no-go area for whites, Italian-Americans very much included. Therefore, even if the Mafia had the upper-hand in any negotiations, they nevertheless had to delegate to blacks any criminal activities in black areas of the city.
Thus, Nicky Barnes, the major heroin distributor in Harlem, was said to buy his heroin from mafia importers and wholesalers, especially ‘Crazy’ Joe Gallo, with whom he was said to have formed a relationship while they were both in prison together. Similarly, contrary to his portrayal in the movie American Gangster, Frank Lucas also seems to have bought his heroin primarily through mafia wholesalers. However, he may also have had an indirect link to the Golden Triangle through his associate Ike Anderson, a serving soldier in the Vietnam War.
However, both Lucas and Barnes necessarily had their own crews of black dealers to distribute the drugs on the street. The first black criminal in New York supposedly to operate entirely independently of the Mafia was said to have been Frank Matthews, who disappeared under mysterious circumstances while on parole.

[4] Intriguingly, Professor of Criminal Justice, Howard Abadinsky, in his textbook on organized crime, links the higher public profile adopted by Capone and Gotti to the fact that both trace their ancestry, not to Sicily, but rather to Naples, where the local Camorra have long cultivated a higher public profile, and typically adopted a flashier style of dress and demeanor, than their Sicilian Mafia equivalents (Organized Crime, 4th Edition: p18).
Thus, historian John Dickie refers to a “longstanding difference between the public images of the two crime fraternities”: 

The soberly dressed Sicilian Mafioso has traditionally had a much lower public profile than the Camorrista. Mafiosi are so used to infiltrating the state and the ruling elite that they prefer to blend into the background rather than strike poses of defiance against the authorities. The authorities, after all, were often on their side. Camorista, by contrast, often played to an audience” (Mafia Republic: p248). 

Abadinsky concurs that: 

While even a capomafioso exuded an air of modesty in both dress and manner of speaking, the Camorrista was a flamboyant actor whose manner of walking and style of dress clearly marked him out as a member of the società” (Organized Crime, 4th Edition: p18). 

Abadinsky therefore tentatively observes: 

In the United States the public image of Italian-American organized crime figures with Neapolitan heritage has tended towards Camorra, while their Sicilian counterparts have usually been more subdued. Al Capone, for example, and, in more recent years, John Gotti, are both of Neapolitan heritage” (Organized Crime, 4th Edition: p18). 

However, while true, I cannot see how this could be anything other than a coincidence, since both Capone and Gotti were born and spent their entire lives in the USA, Gotti being fully two generations removed from the old country, and neither seems to have had parents or other close relatives who were involved in crime and could somehow have passed on this cultural influence from Naples – unless perhaps Abadinsky is proposing some sort of innate, heritable, racial difference between Neapolitans and Sicilians, which seems even more unlikely.

[5] Gigante is not the only organized crime boss accused of malingering. Neapolitan Camorra boss, Raffaele Cutolo, alias ‘The Professor’, also stood accused of faking mental illness. However, whereas Gigante did so in order to avoid prison, Cutolo, apart from eighteen months living on the run from the authorities after escaping, spent virtually the entirety of his career as a crime boss locked up, being periodically shuttled between psychiatric hospitals and prisons. 

[6] Actually, not all crimes necessarily involve dishonesty – e.g. crimes of passion, some crimes of violence. However, any mafioso necessarily has to be dishonest, since otherwise he would admit his crimes to the authorities and hence not enjoy a long career. Indeed, the very code of omertà, though conceptualized as a code of honor, demands dishonesty, at least in one’s dealings with the authorities, since it forbids both informing to the authorities regarding the crimes of others, and admitting the existence of, or one’s membership of, the criminal fraternity itself. 

[7] Thus, if there was never outright war between families after the Castellammarese War, nevertheless bosses of some families did often attempt to sponsor ‘regime change’ in other families, by deposing other bosses, both in New York and beyond. For example, as discussed above, Bonanno family boss Joe Bonanno, acting in concert with Joe Magliocco, the then-boss of what was then known as the Profaci family, supposedly conspired to assassinate the bosses of the other three New York families, only to have their scheme betrayed by the assigned assassin, Joe Colombo, who was then himself rewarded for his betrayal by being appointed as boss of the family that thenceforth came to be named after him.
Similarly, Genovese boss Vincent ‘The Chin’ Gigante and Lucchese boss Tony ‘Ducks’ Corallo together attempted unsuccessfully to assassinate Gambino boss John Gotti as revenge for Gotti’s own unauthorised assassination of his predecessor, Paul Castellano, which they saw as a violation of Mafia rules, whereby the assassination of a boss was, at least in theory, only permissible with the prior consent and authority of the Commission. The attempted assassination, carried out by Vittorio ‘Little Vic’ Amuso and Anthony ‘Gaspipe’ Casso, themselves later to become boss and underboss of the Luccheses, resulted in the death of Gambino underboss Frank DeCicco in a car bombing, but not that of Gotti himself.

[8] In truth, Luciano seems to have invented neither the five families nor the Commission. According to Mike Dash in his excellent The First Family, the Commission, under the earlier name ‘the Council’, actually existed long before Luciano came to prominence. 
As for the five families, if Luciano, or indeed Maranzano before him (as other versions relate), were to invent afresh the structure of the New York Mafia in a ‘top down’ process, they would surely have created a more unitary, centralized structure in order to maximize their own power and control as overall boss of bosses, rather than devolving power to the bosses of the individual families, who themselves issued orders to capos and soldiers.
As I have discussed previously, the power of the so-called National Commission was, to draw an analogy with international relations, largely intergovernmental rather than federal, let alone unitary or centralized. Its power lay in its perceived ‘legitimacy’ among mafiosi. As Stalin is said to have contemptuously remarked of the Pope, the Commission commanded no divisions (nor any ‘crews’, capos or soldiers) of its own.
In reality, Maranzano and Luciano surely at most merely gave formal recognition to factions which long predated the Castellammarese War and its aftermath and whose independent power demanded recognition. Indeed, the Commission was even said initially to have included non-Italians such as Dutch Schultz, if only because the power of the ‘Bronx Beer Baron’ simply demanded his inclusion if the Commission were to be at all effective in regulating organized crime in New York.

[9] Raab, for his part, anticipates that Mafia rackets will increasingly, like Italian-Americans themselves, migrate to the suburbs: 

A strategic shift could be exploiting new territories. Although big cities continue to be glittering attractions, there are signs that the Mafia, following demographic trends, is deploying more vigorously in suburbs. There, the families might encounter police less prepared to resist them than federal and big-city investigators. ‘Organized crime goes where the money is, and there’s money and increasing opportunities in the suburbs,’ Howard Abadinsky, the historian, observes. Strong suburban fiefs have already been established by the New York, Chicago, and Detroit families” (p707). 

However, organized crime tends to thrive in poor, close-knit communities in deprived areas that lack trust in the police and authorities and are hence unwilling to turn to the latter for protection. If the Mafia attempts to make inroads in the suburbs, it will likely come up against assimilated, middle-class Americans only too willing to turn to the police for protection. In short, there is a reason why organized crime has largely been absent from middle-class suburbia.

[10] Although he wrote ‘Five Families’ several years before the legalization of sports betting in most of America, New York City included, Raab seems to anticipate that legalization will have little if any effect on Mafia revenue from illegal sports books, writing: 

Sensible gamblers will always prefer wagering with the Mob rather than with state-authorized Off-Track Betting parlors and lotteries. Bets on baseball, football, and basketball games placed with a bookie have a 50 percent chance of winning, without the penalty of being taxed, while the typical state lottery is considered a pipe dream because the chance of winning is infinitesimal” (p694). 

It is, of course, true that lotteries, almost by definition, involve long odds and little realistic chance of winning. However, the same was also true of the illegal numbers rackets that were a highly lucrative source of income for predominantly black ‘policy kings’ (and queens) in early twentieth-century America. Indeed, this racket was so lucrative that eventually major white organized crime figures like Dutch Schultz in New York and Sam Giancana in Chicago sought to take it over.
Yet, if winning a state lottery is indeed a ‘pipe dream’, the same is not true of legalized sports betting. On the contrary, here, the odds are as good as in illegal Mafia-controlled sports betting, and, given the legal regulation, prospective gamblers will probably be more confident that they are not likely to be ripped off by the bookies.
Thus, in most jurisdictions where off-track sports betting is legal and subject to few legal restrictions, there is little if any market for illegal sports betting. Hence the legalization of sports betting in most of America will likely mean that sports betting is no longer controlled by organized crime, let alone the Mafia, just as the end of Prohibition in 1933 similarly led to the decline of the market for moonshine and bootleg alcohol.

In Defence of Physiognomy

Edward Dutton, How to Judge People by What they Look Like (Wrocław: Thomas Edward Press, 2018) 

‘Never judge a book by its cover’ – or so a famous proverb advises. 

However, given that Edward Dutton’s ‘How to Judge People by What they Look Like’ represents, from its provocative title onward, a spirited polemic against this received wisdom, one is tempted, in the name of irony, to review his book entirely on the basis of its cover. 

I will resist this temptation. However, it is perhaps worth pointing out that two initial points are apparent, if not from the book’s cover alone, then at least from its external appearance. These are: 

1) It is rather cheaply produced and apparently self-published; and

2) It is very short – a pamphlet rather than a book.[1]

Both these facts are probably excusable by reference to the controversial and politically-incorrect nature of the book’s title, theme and content.

Thus, on the one hand, the notion that we can, with some degree of accuracy, judge people by appearances alone is a very politically-incorrect idea and hence one that many publishers would be reluctant to associate themselves with or put their name to.

On the other hand, the fact that the topic is so controversial may also explain why the book is so short. After all, relatively little research has been conducted on this topic for precisely this reason.

Moreover, even such research as has been conducted is often difficult to track down. 

After all, physiognomy, the field of research which Dutton purports to review, is no longer a recognized science. On the contrary, most people today dismiss it as a discredited pseudoscience.

Therefore, there is no ‘International Journal of Physiognomy’ available at the click of a mouse on ScienceDirect. 

Neither are there any Departments of Physiognomy or Professors of Physiognomy at major universities, nor a recent undergraduate- or graduate-level textbook on physiognomy collating all important research on the subject. Indeed, the closest thing we have to such a textbook is Dutton’s own thin, meagre pamphlet. 

Therefore, not only has relatively little research been conducted in this area, at least in recent years, but also such research as has been conducted is spread across different fields, different journals and different researchers, and hence not always easy to track down. 

Moreover, such research rarely actually refers to itself as ‘physiognomy’, in part precisely because physiognomy is widely regarded as a pseudoscience and hence something with which researchers, even those directly investigating correlations between morphology and behavior, are reluctant to associate themselves.[2]

Therefore, conducting a key word search for the term ‘physiognomy’ in one or more of the many available databases of scientific papers would not assist the reader much, if at all, in tracking down relevant research.[3]

It is therefore not surprising that Dutton’s book is quite short. 

For this same reason, it is perhaps also excusable that Dutton has evidently failed to track down some interesting studies relevant to his theme. 

For example, a couple of interesting studies not cited by Dutton purported to uncover an association between behavioural inhibition and iris pigmentation in young children (Rosenberg & Kagan 1987; Rosenberg & Kagan 1989). 

Another interesting study not mentioned by Dutton presents data apparently showing that subjects are able to distinguish criminals from non-criminals at better than chance levels merely from looking at photographs of their faces (Valla, Ceci & Williams 2011).[4]
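
For context, ‘better than chance’ in studies of this kind typically means that raters’ accuracy exceeds the 50% expected from blind guessing by a statistically significant margin. The following is a minimal sketch of the underlying arithmetic – an exact one-sided binomial test with entirely hypothetical numbers, not data from the actual study:

```python
# Illustrative only: what 'better than chance' means statistically.
# The numbers are hypothetical, not those of Valla, Ceci & Williams (2011).
from math import comb

def binomial_p_value(hits: int, trials: int, chance: float = 0.5) -> float:
    """One-sided probability of at least `hits` correct answers out of
    `trials` if the rater were merely guessing at rate `chance`."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(hits, trials + 1)
    )

# Suppose a rater correctly sorts 60 of 100 photographs into
# 'criminal' vs 'non-criminal'; blind guessing would average 50 of 100.
print(round(binomial_p_value(hits=60, trials=100), 4))  # ~0.0284
```

The point is simply that even a modest hit rate, well short of perfect accuracy, can still be ‘better than chance’ in the statistical sense, which is all such studies claim.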

Such omissions are inevitable and excusable. More problematically, however, Dutton also seems to have omitted at least one entire area of research relevant to his subject-matter – namely, research on so-called minor physical anomalies or MPAs. 

These are certain physiological traits, interpreted as minor abnormalities, probably reflecting developmental instability and mutational load, which have been found in several studies to be associated with various psychiatric and developmental conditions, as well as being a correlate of criminal behaviour (see below).

Defining the Field 

Yet not only does Dutton miss out several studies relevant to the subject-matter of his book, he is also not entirely consistent in identifying just what the precise subject-matter of his book actually is. 

It is true that, at many points in his book, he talks about physiognomy. 

This term is usually defined as the science (or, according to many people, the pseudoscience) of using a person’s morphology in order to determine their character, personality and likely behaviour. 

However, the title of Dutton’s book, ‘How to Judge People by What They Look Like’, is potentially much broader. 

After all, what people look like includes, not just their morphology, but also, for example, how they dress and what clothes they wear.

For example, we might assess a person’s job from their uniform, or, more generally, their socioeconomic status and income level from the style and quality of their clothing, or the designer labels and brand names adorning it. 

More specifically, we might even determine their gang allegiance from the color of their bandana, and their sexuality and fetishes from the colour and positioning of their handkerchief

We also make assessments of character from clothing style. For example, a person who is sloppily dressed and is hence perceived not to take care in his or her appearance (e.g. whose shirt is unironed or unclean) might be interpreted as lacking in self-worth and likely to produce similarly sloppy work in whatever job s/he is employed at. On the other hand, a person always kitted out in the latest designer fashions might be thought shallow and materialistic. 

In addition, certain styles of dress are associated with specific youth subcultures, which are often connected, not only to taste in music, but also with lifestyle (e.g. criminality, drug-use, political views).[5]

Dutton does not discuss the significance of clothing choice in assessments of character. However, consistent with this broader interpretation of his book’s title, Dutton does indeed sometimes venture beyond physiognomy in the strict sense. 

For example, he discusses tattoos (p46-8) and beards (p60-1). 

I suppose the decision to get tattooed or grow a beard reflects both genetic predispositions and environmental influence, just as all aspects of phenotype, including morphology, reflect the interaction between genes and environment. 

However, this is also true of clothing choice, which, as I have already mentioned, Dutton does not discuss.  

On the other hand, both tattoos and, given that they take time to grow, even beards are relatively more permanent than whatever clothes we are wearing at any given time. 

However, Dutton also discusses the significance of what he terms a “blank look” or “glassy eyes” (p57-9). But this is a mere facial expression, and hence even more transitory than clothing. 

Yet Dutton omits discussion of other facial expressions which, unlike his wholly anecdotal discussion of “glassy eyes”, have been researched by ethologists at least since Charles Darwin’s seminal The Expression of the Emotions in Man and Animals was published in 1872. 

Thus, Paul Ekman famously demonstrated that the meanings associated with at least some facial expressions are cross-culturally universal (e.g. smiling being associated with happiness). 

Indeed, some human facial expressions even appear to be homologues of behaviour patterns among non-human primates. For example, it has been suggested that the human smile is homologous with an appeasement gesture, namely the baring of clenched teeth (aka a ‘fear grin’), among chimpanzees. 

Of particular relevance to the question posed in Dutton’s book title, namely ‘How to Judge People by What They Look Like’, it is suggested that some facial expressions lie partly outside of conscious control – e.g. blushing when embarrassed, going pale when shocked or fearful.  

Indeed, even a fake smile is said to be distinguishable from a Duchenne smile

This then explains the importance of reading facial expressions when playing poker or interrogating suspects, as people often inadvertently give away their true feelings through their facial expressions, behaviour and other mannerisms (e.g. so-called microexpressions). 

Somatotypes and Physique 

Dutton begins his book with a remarkable attempt to resurrect William Sheldon’s theory that certain types of physiques (or, as Sheldon called them, somatotypes) are associated with particular types of personality (or as Sheldon called them, constitutions). 

Although the three dimensions by which Sheldon classified physiques – endomorphy, ectomorphy and mesomorphy – have proven useful as dimensions for classifying body-type, Sheldon’s attempt to equate these ideal types with personality is now widely dismissed as pseudoscience. 

Dutton, however, argues that physique is indeed associated with character, and moreover provides what was conspicuously lacking in Sheldon’s own exposition – namely, compelling theoretical reasons for the postulated associations. 

Yet, interestingly, the associations suggested by Dutton do indeed to some extent mirror those first posited by William Sheldon over half a century previously.

Whereas, elsewhere, Dutton draws on previously published research, here, Dutton’s reasoning is, to my knowledge, largely original to himself, though, as I show below, psychometric studies do support the existence of at least some of the associations he postulates. 

This part of Dutton’s book represents, in my view, the most important and convincing original contribution in the book. 

Endomorphy/Obesity, Self-Control and Conscientiousness

First, he discusses what Sheldon called endomorphy – namely, a body-type that can roughly be equated with what we would today call fatness or obesity

Dutton points out that, at least in contemporary Western societies, where there is a superabundance of food, and starvation is all but unknown even among the relatively less well-off, obesity tends to correlate with personality. 

In short, people who lack self-control and willpower will likely also lack the self-control and willpower to diet effectively. 

Endomorphy (i.e. obesity) is therefore a reliable correlate of the personality factor known to psychometricians as conscientiousness (p31-2).  

Although Dutton himself cites no data or published studies in support of this conclusion, nevertheless several published studies confirm an association between BMI and conscientiousness (Bagenjuk et al 2019; Jokela et al 2012; Sutin et al 2011). 

Obesity is also, Dutton claims, inversely correlated with intelligence. 

This is, first, because IQ is, according to Dutton, correlated with low time-preference – i.e. with a willingness to defer gratification by making sacrifices in the short term in return for a greater long-term pay-off. 

Therefore, low-IQ people, Dutton claims: 

Are less able to forego the immediate pleasure of ice cream for the future positive of not being overweight and diabetic” (p31). 

However, far from being associated with a short-time preference, some evidence, not discussed by Dutton, suggests that intelligence is actually inversely correlated with conscientiousness, such that more intelligent people are actually on average less conscientious (e.g. Rammstedt et al 2016; cf. Murray et al 2014). 

This would suggest that low IQ people might, all else being equal, actually be more successful at dieting than their high IQ counterparts. 

However, according to Dutton, there is a second reason that low-IQ people are more likely to be fat, namely: 

They are likely to understand less about healthy eating and simply possess less knowledge of what constitutes healthy food or a reasonable portion” (p31). 

This may be true. 

However, while there are some borderline cases (e.g. foods misleadingly marketed by advertisers as healthy), I suspect that virtually everyone knows that, say, eating lots of cake is unhealthy. Yet resisting the temptation to eat another slice is often easier said than done. 

I therefore suspect conscientiousness is a better predictor of weight than is intelligence. 

Interestingly, a few studies have investigated the association between IQ and the prevalence of obesity. However, curiously, most seem to be premised on the notion that, rather than low intelligence causing obesity, obesity somehow contributes to cognitive decline, especially in children (e.g. Martin et al 2015) and the elderly (e.g. Elias et al 2012). 

In fact, however, longitudinal studies confirm that, as contended by Dutton, it is low IQ that causes obesity rather than the other way around (Kanazawa 2014). 

At any rate, people lacking in intelligence and self-control also likely lack the intelligence and self-discipline to excel in school and gain promotions into high-income jobs, since both earnings and socioeconomic status correlate with both intelligence and conscientiousness.[6]

One can also, then, make better-than-chance assessments of a person’s socioeconomic status and income from their physique. 

In other words, whereas in the past (and perhaps still in the developing world) the poor were more likely to starve or suffer from malnutrition and only the rich could afford to be fat, in the affluent west today it is the relatively less well-off who are, if anything, more likely to suffer from obesity and diseases of affluence such as diabetes and heart disease

This, then, all rather confirms the contemporary stereotype of the fat, lazy slob. 

However, Dutton also provides a let-off clause for offended fatties. Obesity is associated, not only with low conscientiousness, but also with the factor of personality known as extraversion. This refers to the tendency to be outgoing, friendly and talkative, traits that are generally viewed positively. 

Several studies, again not cited by Dutton, do indeed suggest an association between extraversion and BMI (Bagenjuk et al 2019; Sutin et al 2011). Dutton, for his part, explains it this way: 

Extraverts simply enjoy everything positive more, and this includes tasty (and thus unhealthy) food” (p32). 

Dutton therefore provides theoretical support to the familiar stereotype of, not only the fat, lazy slob, but also the jolly and gregarious fat man, and the ‘bubbly’ fat woman.[7]

Mesomorphy/Muscularity and Testosterone

Mesomorphs were another of Sheldon’s supposed body-types. Mesomorphy can roughly be equated with muscularity. 

Here, Dutton concludes that: 

Sheldon’s theory… actually fits quite well with what we know about testosterone” (p33). 

Thus, mesomorphy is associated with muscularity, and muscularity with testosterone. 

Yet testosterone, as well as masculinizing the body, also masculinizes brain and behaviour. 

This is why anabolic steroids not only increase muscularity, but are also said to be associated with ‘roid rage’.[8]

Testosterone, at least during development, may also be associated, not only with muscularity, but also with certain aspects of facial morphology, such as a wide and well-defined jawline, prominent brow ridges, deep-set eyes and facial width.  

I therefore wonder if this might go some way towards explaining the finding, not mentioned by Dutton (but clearly relevant to his subject-matter), that observers are apparently able to identify convicted criminals at better than chance levels from a facial photograph alone (Valla, Ceci & Williams 2011).[9]

Testosterone and Autism 

Further exploring the effects of testosterone on both psychology and morphology, Dutton also proposes: 

“We would also expect the more masculine-looking person to have higher levels of autism traits” (p34).

This idea seems to be based on Simon Baron-Cohen’s ‘extreme male brain’ theory of autism.

However, the relationship between, on the one hand, levels of androgens such as testosterone and, on the other, degree of masculinization in respect of a given sexually-dimorphic trait may be neither one-dimensional nor linear.

Thus, interestingly, Kingsley Browne in his excellent Biology at Work: Rethinking Sexual Equality (which I have reviewed here) reports: 

“The relationship between spatial ability and [circulating] testosterone levels is described by an inverted U-shaped curve… Spatial ability is lowest in those with the very lowest and the very highest testosterone levels, with the optimal testosterone level lying in the lower end of the normal male range. Thus, males with testosterone in the low-normal range have the highest spatial ability” (Biology at Work: p115; Gouchie & Kimura 1991).

Similarly, leading intelligence researcher Arthur Jensen reports, in The g Factor: The Science of Mental Ability, that:

“Within each sex there is a nonlinear (inverted-U) relationship between an individual’s position on the estrogen/testosterone continuum and the individual’s level of spatial ability, with the optimal level of testosterone above the female mean and below the male mean. Generally, females with markedly above-average testosterone levels (for females) and males with below-average levels of testosterone (for males) tend to have higher levels of spatial ability, relative to the average spatial ability for their own sex” (The g Factor: p534).

In contrast, however, Dutton claims: 

“There is evidence that testosterone level in healthy males is positively associated with spatial ability” (p36).

However, the only study he cites in support of this assertion was, according to both its methodology section and indeed its very title, conducted among “older males”, aged between 60 and 75 (Janowsky et al 1994).

Therefore, since testosterone levels are known to decline with age, this finding is not necessarily inconsistent with the relationship between testosterone and spatial ability described by Browne (see Moffat & Hampson 1996). 

This, of course, accords with the anecdotal observation that math nerds and autistic males are rarely athletic, square-jawed ‘alpha male’-types.[10]

Testosterone and Baldness 

Another trait associated with testosterone levels, according to Dutton, is male pattern baldness. Thus, Dutton contends: 

“Baldness is yet another reflection of high testosterone… [B]aldness in males, known as androgenic alopecia, is positively associated with levels of testosterone” (p55).

As evidence, he cites both a review (Batrinos 2014) and some indirect anecdotal evidence:

“It is widely known among doctors – I base this on my own discussions with doctors – that males who come to them in their 60s complaining of impotence tend to have full heads of hair or only very limited hair loss” (p55).[11]

If male pattern baldness is indeed associated with testosterone levels then this is somewhat surprising, because our perceptions regarding men suffering from male pattern baldness seem to be that they are, if anything, less masculine than other males. 

Thus, Nancy Etcoff, in Survival of the Prettiest (which I have reviewed here), reports that one study found that:

“Both sexes assumed that balding men were weaker and found them less attractive” (Survival of the Prettiest: p121; Cash 1990).[12]

Yet, if the main message of Dutton’s book is that individual differences in morphology and appearance do indeed predict individual differences in behaviour, psychology and personality, then a second, implicit theme is that our intuitions and stereotypes regarding the association between appearance and behaviour are often correct.

True, it is likely that few people notice, say, digit ratios, or make judgements about people based on them, either consciously or unconsciously. However, elsewhere, Dutton cites studies showing that subjects are able to estimate the IQ of male students at better than chance levels simply by viewing a photograph of their faces (Kleisner et al 2014; discussed at p50), and to distinguish homosexual from heterosexual men at better than chance levels from a facial photograph alone (Kosinski & Wang 2018; discussed at p66).

Yet, according to Etcoff and Cash, perceptions regarding the personalities of balding men are almost the opposite of what would be expected if male pattern balding were indeed a reflection of high testosterone levels, as suggested by Dutton. 

In fact, however, although a certain level of testosterone is indeed a necessary condition for male pattern hair loss (which is why neither women nor castrated eunuchs experience the condition, though their hair does thin with age), this seems to be a threshold effect: among non-castrated males with testosterone levels within the normal range, levels of circulating testosterone do not seem to significantly predict either the occurrence or the severity of male pattern baldness.

Thus, Healthline reports:

“It’s not the amount of testosterone or DHT that causes baldness; it’s the sensitivity of your hair follicles. That sensitivity is determined by genetics. The AR gene makes the receptor on hair follicles that interact with testosterone and DHT. If your receptors are particularly sensitive, they are more easily triggered by even small amounts of DHT, and hair loss occurs more easily as a result.”

In other words, male pattern baldness is yet another trait that is indeed related to testosterone, but that does not evince a simple linear relationship with it.

2D:4D Ratio

Another presumed correlate of prenatal androgens is 2D:4D ratio (aka digit ratio). 

Over the last two decades, a huge body of research has reported correlations between 2D:4D ratio and a variety of psychiatric conditions and behavioural propensities, including autism (Manning et al 2001), ADHD (Martel et al 2008; Buru et al 2017; Işık et al 2020), psychopathy (Blanchard & Lyons 2010), aggressive behaviours (Bailey & Hurd 2005; Benderlioglu & Nelson 2005), sports and athletic performance (Manning & Taylor 2001; Hönekopp & Urban 2010; Griffin et al 2012; Keshavarz et al 2017), criminal behaviour (Ellis & Hoskin 2015; Hoskin & Ellis 2014) and homosexuality (Williams et al 2000; Lippa 2003; Kangassalo et al 2011; Li et al 2016; Xu & Zheng 2016).
 
Unfortunately, and slightly embarrassingly, Dutton apparently misunderstands what 2D:4D ratio actually measures. Thus, he writes: 

“If the profile of someone’s fingers is smoother, more like a shovel, then it implies high testosterone. If, by contrast, the little finger is significantly smaller than the middle finger, which is highly prevalent among women, then it implies lower testosterone exposure” (p69).

Actually, however, both the little finger and middle finger are irrelevant to 2D:4D ratio.

Indeed, for virtually everyone, “the little finger is significantly smaller than the middle finger”. This is, of course, why the former is called “the little finger”.

In fact, 2D:4D ratio concerns the ratio between the lengths of the index finger and the ring finger – i.e. the two fingers on either side of the middle finger.

These fingers are, of course, the second and fourth digit, respectively, if you begin counting from your thumb outwards, hence the name ‘2D:4D ratio’. 

In evidently misnumbering his digits, Dutton, I can only conclude, began counting at the correct end, but missed out his thumb.
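For concreteness, the calculation itself is trivial. Below is a minimal sketch in Python; the measurements, and the ‘typical’ sex averages of roughly 0.95 for men and 0.97 for women, are illustrative approximations of my own, not figures taken from Dutton:

```python
def digit_ratio(index_mm: float, ring_mm: float) -> float:
    """2D:4D ratio: length of the index finger (2D) divided by the length
    of the ring finger (4D), each measured on the palm side from the
    basal crease to the fingertip. The thumb (1D) and middle finger (3D)
    play no part in the calculation."""
    return index_mm / ring_mm

# Hypothetical measurements for one right hand, in millimetres:
print(round(digit_ratio(index_mm=72.0, ring_mm=75.5), 3))  # 0.954

# Commonly reported approximate sex averages (illustrative only):
MALE_MEAN, FEMALE_MEAN = 0.95, 0.97
```

On this logic, a ratio at or below about 0.95 is read as a relatively ‘masculine’ (high prenatal testosterone) hand, and a ratio approaching 1.0 as a relatively ‘feminine’ one.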

At any rate, the evidence for any association between digit ratios and measures of behaviour and psychology is, at best, mixed.

Skimming the literature on the subject, one finds many conflicting findings – for example, sometimes significant effects are found only for one sex, while other studies find the same correlations limited to the other sex (e.g. Bailey & Hurd 2005; Benderlioglu & Nelson 2005; see also Hilgard et al 2019), and also many failures to replicate earlier reported associations (e.g. Voracek et al 2011; Fossen et al 2022; Kyselicová et al 2021). 

Likewise, meta-analyses of published studies have generally found, at best, only small and inconsistent associations (e.g. Voracek et al 2011; Pratt et al 2016). Thus, 2D:4D ratio has been a major victim of the recent so-called replication crisis in psychology.

Indeed, it is not entirely clear that 2D:4D ratio represents a useful measure of prenatal androgens in the first place (Hollier et al 2015), and even the universality of the sex difference that originally led researchers to posit such a link has been called into question (Apicella et al 2015; Lolli et al 2017).

In short, the usefulness of digit ratio as a measure of exposure to prenatal androgens, let alone an important correlate of behaviour, psychology, personality or athletic performance, is questionable. 

Testosterone and Height 

The examples of male pattern baldness and spatial ability demonstrate that the effect of testosterone on some sexually-dimorphic traits is not necessarily always linear. Instead, it can be quite complex. 

Therefore, just because men are, on average, higher for a given trait than are women, which is ultimately a consequence of androgens such as testosterone, it does not follow that men with relatively higher levels of testosterone are higher for this trait than are men with relatively lower levels.

Indeed, Dutton himself provides another example of such a trait – namely, height.

Thus, although men, in general, are taller than women, nevertheless, according to Dutton: 

“Men who are high in testosterone… tend to be of shorter stature than those who are low in it. High levels of testosterone at a relatively early age have been shown to reduce stature” (p34).[13]

In evolutionary terms, Dutton explains this by reference to the controversial Life History Theory of Philippe Rushton, of whom Dutton seems to be, with some reservations, something of a disciple (p22-4).

If true, this might explain why eunuchs who were castrated before entering puberty are said to grow taller, on average, than other men. 

Further corroboration is provided by the fact that, in the Netherlands, whose population is among the tallest in the world, excessively tall boys are sometimes treated with testosterone in order to prevent them growing any taller (de Waal et al 1995).[14]

This is said to occur because additional testosterone speeds up puberty and produces a growth spurt, but also brings growth to an earlier end, after which height stabilizes and we cease to grow any taller. This is discussed in Carole Hooven’s book Testosterone: The Story of the Hormone that Dominates and Divides Us.

‘Short Man Syndrome’?

Interestingly, although Dutton does not explore the idea, the association between testosterone levels and height among males may even explain the supposed phenomenon of short man syndrome (also referred to, by reference to the supposed diminutive stature of the French emperor Napoleon, as a Napoleon complex), whereby short men are said to be especially aggressive and domineering. 

This is something that is usually attributed to a psychological need among shorter men to compensate for their diminutive stature. However, if Dutton is right, then the supposed aggressive predilections of short men might simply reflect differences between shorter and taller men in testosterone levels during adolescence.

Actually, however, so-called short man syndrome is likely a myth – and yet another way in which society demeans and belittles short men. Certainly, it is very much a folk-psychiatric diagnosis, with no real evidential basis beyond the merely anecdotal.

Indeed, far from short men being, on average, more aggressive and domineering than taller men, one study commissioned by the BBC actually found that short men were less likely to respond aggressively when provoked.

Given that tall men have an advantage in combat, it would actually make sense for relatively shorter men to avoid potentially violent confrontations with other men where possible, since, all else being equal, they would be more likely to come off worse in any such altercation.  

Consistent with this, some studies have found a link between increased stature and anti-social personality disorder, which is associated with aggressive behaviours (e.g. Ishikawa et al 2001; Salas-Wright & Vaughn 2016), while another study found a positive association between height and dominance, especially among males (Malamed 1992).[15]

Height and Intelligence 

Height is also, Dutton reports, correlated with intelligence, with taller people having, on average, slightly higher IQs than shorter people.  

The association between height and IQ is, like most if not all of those discussed by Dutton in this book, modest in magnitude or effect size.[16]

However, unlike many other associations reported by Dutton, many of which are based on just a single published study, or sometimes on purely theoretical arguments, the association between height and intelligence is robust and well-established.[17] Indeed, there is even a Wikipedia page on the topic.

Dutton’s explanation for this phenomenon is that intelligence and height “have been sexually selected for as a kind of bundle” (p46). 

“Females have sexually selected for intelligent men (because intelligence predicts social status and they have been specifically selected for this) but they have also selected for taller men, realising that taller men will be better able to protect them. This predilection for tall but intelligent men has led to the two characteristics being associated with one another” (p46).

Actually, as I see it, this explanation would only work, or at least work much better, if both men and women had a preference for partners who are both tall and intelligent.

This is indeed Arthur Jensen’s explanation for the association between height and IQ:

“[It] probably represents a simple genetic correlation resulting from cross-assortative mating for the two traits. Both height and ‘intelligence’ are highly valued in western culture. There is also evidence for cross-assortative mating for height and IQ. There is some trade-off between them in mate selection. When short and tall women are matched on IQ, educational level and social class of origin, for example, it is found that taller women tend to marry men of higher socioeconomic status… than do shorter women” (The g Factor: The Science of Mental Ability: p146).

An alternative explanation might be that both height and intelligence reflect developmental stability and a lack of deleterious mutations. On this view, both height and intelligence might represent indices of genetic quality and lack of mutational load. 

However, this alternative explanation is inconsistent with the finding that there is no ‘within-family’ correlation between height and intelligence. In other words, when one looks at, say, full-siblings from the same family, there is no tendency for the taller sibling to have a higher IQ (Mackintosh, IQ and Human Intelligence: p6). 

This suggests that the genes that cause greater height are different from those that cause greater intelligence, but that they have come to be found in the same individuals through assortative mating, as suggested by Jensen and Dutton.[18]
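The logic of this within-family test can be illustrated with a toy simulation (a minimal sketch under made-up parameters, not a model of real genetic data): if assortative mating correlates families’ height and IQ endowments, while segregation within families is random with respect to each trait, then the two traits correlate across the population but not between siblings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fam = 100_000

# Family-level endowments: cross-assortative mating is assumed to
# correlate a family's height and IQ endowments (hypothetical r = 0.3).
fam = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=n_fam)

# Within-family segregation: each of two siblings' deviations from the
# family endowment are random and independent across the two traits.
seg = rng.normal(size=(n_fam, 2, 2))  # (family, sibling, trait)

height = fam[:, None, 0] + seg[:, :, 0]
iq = fam[:, None, 1] + seg[:, :, 1]

# Population-level correlation: positive, driven by the family-level term.
pop_r = np.corrcoef(height.ravel(), iq.ravel())[0, 1]

# Within-family correlation: is the taller sibling also the smarter one?
within_r = np.corrcoef(height[:, 0] - height[:, 1], iq[:, 0] - iq[:, 1])[0, 1]

print(f"population r = {pop_r:.2f}, within-family r = {within_r:.2f}")
# Expect roughly: population r = 0.15, within-family r = 0.00
```

Under these assumptions, the population-level correlation is entirely a between-family affair, which is exactly the pattern Mackintosh describes.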

Height and Earnings 

Although not discussed by Dutton, there is also a correlation between height and earnings. Thus, economist Steven Landsburg reports that: 

“In general, an extra inch of height adds roughly an extra $1,000 a year in wages, after controlling for education and experience. That makes height as important as race or gender as a determinant of wages” (More Sex is Safer Sex: p53).

This correlation could be mediated by the association between height and intelligence, since intelligence is known to be correlated with earnings (Case & Paxson 2008).

However, one interesting study found that it was actually height during adolescence that accounted for the association, and that, once this was controlled for, adult height had little or no effect on earnings (Persico, Postlewaite & Silverman 2004). 

“Controlling for teen height essentially eliminates the effect of adult height on wages for white males. The teen height premium is not explained by differences in resources or endowments” (Persico, Postlewaite & Silverman 2004).

Thus, Landsburg reports: 

“Tall men who were short in high school earn like short men, while short men who were tall (for their age) in high school [earn like tall men]” (More Sex is Safer Sex: p54).

This suggests that it is height during a key formative period (a ‘critical period’) in adolescence that fosters self-confidence, a self-confidence that then persists into adulthood and ultimately contributes to the higher adult earnings of men who were relatively taller as adolescents.

On the other hand, however, Case and Paxson report that, in addition to being associated with adult height, intelligence is also associated with an earlier growth spurt. This leads them to conclude that adolescent height might be a better marker for cognitive ability than adult height, thereby providing an alternative explanation for Persico et al’s finding (Case & Paxson 2008).

Head Size and Intelligence 

Dutton also discusses the finding that there is an association between intelligence and head-size. This is indeed true, and is a topic I have written about elsewhere.

However, Dutton’s illustration of this phenomenon seems to me rather unhelpful. Thus, he writes: 

“Intelligent people have big heads in comparison to the size of their bodies. This association is obvious at the extremes. People who suffer from a variety of conditions that reduce their intelligence, including fetal alcohol syndrome or the zika virus, have noticeably very small heads” (p56).

However, to me, this seems to be the wrong way to think about it. 

While it is indeed true that microcephaly (i.e. a smaller than usual head size) is usually associated with lower than normal intelligence levels, the reverse is not true. Thus, although head-size is indeed correlated with IQ, people suffering from macrocephaly (i.e. abnormally large heads) do not generally have exceptionally high IQs. On the contrary, macrocephaly is often associated with impaired cognitive function, probably because, like microcephaly, it reflects a malfunction in brain development.

Neither do people afflicted with forms of disproportionate dwarfism, such as achondroplasia, have higher than average IQs even though their heads are larger relative to their body-size than are those of ordinary-sized people.  

In short, rather than being, as Dutton puts it “obvious at the extremes”, the association between head-size and intelligence is obvious at only one of the extremes and not at all apparent at the other extreme. 

In general, species, individuals and races with larger brains have higher intelligence because brain tissue is highly metabolically expensive and therefore unlikely to evolve without some compensating advantage (i.e. higher intelligence).

However, conditions such as dwarfism and macrocephaly did not evolve through positive selection. On the contrary, they are pathological and maladaptive. Therefore, in these cases, the additional brain tissue may indeed be wasted and hence confer no cognitive advantage. 

Mate Choice 

In evolutionary psychology, there is a large literature on human mate-choice and beauty/attractiveness standards. Much of this depends on the assumption that the physical characteristics favoured as mate-choice criteria represent fitness-indicators, or otherwise correlate with traits desirable in a mate. 

For example, a low waist-to-hip ratio (or ‘WHR’) is said to be perceived as attractive among females because it is supposedly a correlate of both health and fertility. Similarly, low levels of fluctuating asymmetry are thought to be perceived as attractive by members of the opposite sex in both humans and other animals, supposedly because such symmetry is indicative of developmental stability and hence, indirectly, of genetic quality.

Dutton reviews some of this literature. However, an introductory textbook on evolutionary psychology (e.g. David Buss’s Evolutionary Psychology: The New Science of the Mind), or on the evolutionary psychology of mating behaviour in particular (e.g. David Buss’s The Evolution of Desire), would provide a more comprehensive review. 

Also, some of Dutton’s speculations are rather unconvincing. He claims: 

“Hipsters with their Old Testament beards are showcasing their genetic quality… Beards are a clear advertisement of male health and status. They are a breeding ground for parasites” (p61).

However, if this is so, then it merely raises the question as to why beards have come back into fashion only very recently. Indeed, until the last few years, beards had not, to my knowledge, been in fashion for men in the west since the 1970s.[19]

Moreover, it is not at all clear that beards do increase attractiveness (e.g. Dixson & Vasey 2012). Rather, it seems that beards increase perceptions of male age, dominance, social status and aggressiveness, but not their attractiveness.[20]

This suggests that beards are more likely to have evolved through intrasexual selection (i.e. dominance competition or fighting between males) than by intersexual selection (i.e. female choice). 

This is actually consistent with a recently-emerging consensus among evolutionary psychologists that human male physiology (and behaviour) has been shaped more by intrasexual selection than by intersexual selection (Puts 2010; Kordsmeyer et al 2018). 

Consistent with this, Dutton notes: 

“[Beards] have been found to make men look more aggressive, of higher status, and older… in a context in which females tend to be attracted to slightly older men, with age tending to be associated with status in men” (p61). 

However, this raises the question as to why, today, most men prefer to look younger.[21]

Are Feminine Faces More Prone to Infidelity?

Another interesting idea discussed by Dutton is that mate-choice criteria may vary depending on the sort of relationship sought. For example, he suggests: 

“A highly feminine face is attractive, in particular in terms of a short term relationship… [where] a healthy and fertile partner is all that is needed” (p43).

In contrast, however, he concludes that for a long-term relationship a less feminine face may be desirable, since he contends “being extremely feminine in terms of secondary sexual characteristics is associated with an r-strategy” and hence supposedly with a greater risk of infidelity (p43).[22]

However, Dutton presents no evidence in favour of the claim that less feminine women are less prone to sexual infidelity. 

Actually, on theoretical grounds, I would contend that the precise opposite relationship is more likely to exist. 

After all, less feminine and more masculine females, having been subjected to higher levels of androgens, would presumably also have a more male-typical sexuality, including a high sex drive and a preference for promiscuous sex with multiple partners.

Indeed, there is data in support of this conclusion from studies of women afflicted with a rare condition, congenital adrenal hyperplasia, which results in their having been exposed, both in the womb and sometimes in later life, to abnormally high levels of masculinizing androgens such as testosterone as compared to other females. As a consequence, such women exhibit a more male-typical psychology and sexuality than do other females.

Thus, Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here) reports:  

“There is evidence that certain aspects of adult male sexuality result from the effects of prenatal and postpubertal androgens: before the discovery of cortisone therapy women with adrenogenital syndrome [AGS] were exposed to abnormally high levels of androgens throughout their lives, and clinical data on late-treated AGS women indicate clear-cut tendencies toward a male pattern of sexuality” (The Evolution of Human Sexuality: p290).

Thus, citing the work of, among others, the much-demonized John Money, Symons reports that women suffering from adrenogenital syndrome:

“Tended to exhibit clitoral hypersensitivity and an autonomous, initiatory, appetitive sexuality which investigators have characterized as evidencing a high sex drive or libido” (The Evolution of Human Sexuality: p290).

This suggests that females with a relatively more masculine appearance, having been subject, on average, to higher levels of masculinizing androgens, will also evidence a more male-typical sexuality, including greater promiscuity and hence presumably a greater proclivity towards infidelity, rather than a lesser tendency as theorized by Dutton. 

Good Looks, Politics and Religion 

Dutton also cites studies showing that conservative politicians, and voters, are more attractive than liberals (Peterson & Palmer 2017; Berggren et al 2017). 

By way of explanation for these findings, Dutton speculates that in ancestral environments: 

“Populations… so low in ethnocentrism as to espouse Multiculturalism and reject religion would simply have died out… Therefore… the espousal of leftist dogmas would partly reflect mutant genes, just as the espousal of atheism does. This elevated mutational load… would be reflected in their bodies as well as their brains” (p76).

However, this seems unlikely, since atheism and possibly socially liberal political views as well have usually been associated with higher intelligence, which is probably a marker for good genes.[23]

Moreover, although mutations might result in suboptimal levels of both ethnocentrism and religiosity, such suboptimal levels would presumably manifest, not only as deficient, but also as excessive, levels of religiosity and ethnocentrism.

This would suggest that religious fundamentalists and extreme xenophobes and racial supremacists would be just as mutated, and hence just as ugly, as atheists and extreme leftists supposedly are. 

Yet Dutton instead insists that religious fundamentalists, especially Mormons, tend to be highly attractive (Dutton et al 2017). However, he and his co-authors cite little evidence for this claim beyond the merely anecdotal.[24]

The authors of the original paper, Dutton reports, themselves suggested an alternative explanation for the greater attractiveness of conservative politicians, namely: 

“Beautiful people earn more, which makes them less inclined to support redistribution” (p75).

This, to me, seems both simpler and more plausible. However, in response, Dutton observes:

“There is far more to being… right-wing… than not supporting redistribution” (p75).

Here, he is right. The correlation between socioeconomic status/income and political ideology and voting is actually quite modest (see What’s Your Bias). 

However, earnings do still correlate with voting patterns, and this correlation is perhaps enough to explain the modest association between physical attractiveness and political opinions. 

Nevertheless, other factors may also play a role. For example, a couple of studies have found, among men, an association between grip strength and support for policies that benefit oneself economically (Petersen et al 2013; Petersen & Laustsen 2018).

Grip strength is associated with muscularity, which is generally considered attractive in males.

Since most leading politicians come from middle-class, well-to-do, if not elite, backgrounds, the policies that benefit them economically will tend to be right-wing ones, which would suggest that conservative male politicians are likely to be, on average, more muscular, and hence more attractive, than liberal or leftist politicians.

Indeed, Noah Carl has even presented evidence suggesting a general, and widening, ‘masculinity gap’ between the political left and right, and some studies have found that more physically formidable males have more conservative and less egalitarian political views (Price et al 2017; Kerry & Murray 2019).

Since masculinity in general (e.g. not just muscularity, but also square jaws etc.) is associated with attractiveness in males (see discussion here), this might explain at least part of the association between political views and physical attractiveness. 

On the other hand, among females, an opposite process may be at work. 

Among women, leftist politics seem to be strongly associated with feminist views.

Since feminists reject traditional female sex roles, it is likely they would be relatively less ‘feminine’ than other women, perhaps having been, on average, subjected to relatively higher levels of androgens in the womb, masculinizing both their behaviour and appearance. 

Yet it is relatively more feminine women, with feminine, sexually-dimorphic traits such as large breasts, low waist to hip ratios, and neotenous facial features, who are perceived by men as more attractive.

It is therefore unsurprising that feminist women in particular tend to be less attractive than women who are attracted to traditional sex roles.[25]

Developmental Disorders and MPAs

One study cited by Dutton found that observers are able to estimate a male’s IQ from a facial photograph alone at better than chance levels (Kleisner et al 2014). To explain this, Dutton speculates:

“Having a small nose is associated with Downs [sic] Syndrome and Foetal Alcohol Syndrome and this would have contributed to our assuming that those with smaller noses were less intelligent” (p51).

Thus, he explains: 

“[Whereas] Downs [sic] Syndrome and Foetal Alcohol Syndrome are major disruptions of developmental pathways and they lead to very low intelligence and a very small nose… even minor disruptions would lead to slightly reduced intelligence and a slightly smaller nose” (p51-2). 

Indeed, foetal alcohol syndrome itself seems to exist on a continuum and is hence a matter of degree. 
 
Indeed, going further than Dutton, I would agree with publisher/blogger Chip Smith, who observes in his blog:

“Dutton only mention[s] trisomy 21 (Down syndrome) in passing, but I think that’s a pretty solid place to start if you want to establish the baseline premise that at least some mental traits can be accurately inferred from external appearances.”

Thus, the specific ‘look’ associated with Down Syndrome is a useful counterexample to cite to anyone who dismisses, a priori, the idea of physiognomy and the existence of any association between looks and ability or behaviour.

Indeed, other developmental disorders and chromosomal abnormalities, not mentioned by Dutton, are also associated with a specific ‘look’ – for example, Williams Syndrome, the distinctive appearance, and personality, associated with which has even been posited as the basis for the elf figure in folklore.[26]

Less obviously, it has even been suggested that there are also subtle facial features that distinguish autistic children from neurotypical children, and which also distinguish boys with relatively more severe forms of autism from those who are likely to be diagnosed as higher functioning (Aldridge et al 2011; Ozgen et al 2011). 

However, Dutton neglects to mention that there is in fact a sizable literature regarding the association between so-called minor physical anomalies (aka MPAs) and several psychiatric conditions including autism (Ozgen et al 2008), schizophrenia (Weinberg et al 2007; Xu et al 2011) and paedophilia (Dyshniku et al 2015). 

MPAs have also been identified in several studies as a correlate of criminal behaviour (Kandel et al 1989; see also Criminology: A Global Perspective: p70-1). 

Yet these MPAs are often the very same traits – the single transverse palmar crease, sandal toe gap and fissured tongue – that are also used to diagnose Down Syndrome in neonates.

The Morality of Making Judgements

But is it not superficial to judge a book by its cover? And, likewise, by extension, isn’t it morally wrong to judge people by their appearance? 

Indeed, is it not only morally wrong to judge people by their appearance but also, worse still, racist?

After all, skin colour is obviously a part of our appearance, and did not our Lord and Saviour, Dr Martin Luther King, himself advocate for a world in which people would be judged “not by the color of their skin but by the content of their character”?

Here, Dutton turns from science to morality, and convincingly contends that, at least in certain circumstances, it is indeed morally acceptable to judge people by appearances. 

It is true, he acknowledges, that most of the correlations that he has uncovered or reported are modest in magnitude. However, he is at pains to emphasize, the same is true of almost all correlations that are found throughout psychology and the social sciences. Thus, he exhorts: 

“Let us be consistent. It is very common in psychology to find a correlation between, for example, a certain behaviour and accidents (or health) of 0.15 or 0.2 and thus argue that action should be taken based on the results. These sizes are considered large enough to be meaningful and even for policy to be changed” (p82).
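For readers unused to interpreting correlation coefficients, the standard gloss is the coefficient of determination, r², i.e. the share of variance in one variable statistically accounted for by the other. On that gloss, effect sizes of the kind Dutton invokes are indeed modest:

```latex
% Variance explained by a correlation coefficient r is r^2:
\[
r = 0.15 \;\Rightarrow\; r^{2} = 0.0225 \approx 2\%,
\qquad
r = 0.20 \;\Rightarrow\; r^{2} = 0.04 = 4\%
\]
```

Whether a cue explaining two to four percent of the variance is worth acting on is, of course, precisely the policy question Dutton is raising.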

However, Dutton also includes a few sensible precautions and caveats to be borne in mind by those readers who might be tempted to apply some of his ideas overenthusiastically.

First, he warns against making inferences regarding “people from a racial group with which you have relatively limited contact”, where the same cues used with respect to your own group may be inapplicable, or must be applied relative to that group’s averages, something we may not be adept at doing (p82-3).

Thus, to give an obvious example, among Caucasians, epicanthic folds (i.e. so-called ‘slanted’ eyes) may be indicative of a developmental disorder such as Down syndrome. However, among East Asians, Southeast Asians and some other racial groups (notably the Khoisan of Southern Africa), such folds are entirely normal and not indicative of any pathology. 

He also cautions regarding people’s ability to disguise their appearance, whether by makeup or by plastic surgery. However, he also notes that the tendency to wear excessive makeup, or to undergo cosmetic surgery, is itself indicative of a certain personality type, and indeed often, Dutton asserts, of psychopathology (p84-5).

Using physical appearance to make assessments is particularly useful, Dutton observes, “in extreme situations when a quick decision must be made” (p80). 

Thus, to take a deliberately extreme reductio ad absurdum, if we see someone stabbing another person, and this first person then approaches us in an aggressive manner brandishing the knife, then, if we take evasive action, we are, strictly speaking, judging by appearances. The person appears as if they are going to stab us, so we assume they are and act accordingly. However, no one would judge us morally wrong for so doing. 

However, in circumstances where we have access to greater individualizing information, the importance of appearances becomes correspondingly smaller. Here, a Bayesian approach is useful. 

In 2013, evolutionary psychologist Geoffrey Miller caused predictable outrage and hysteria when he tweeted

“Dear obese PhD applicants: if you didn’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation #truth.”

According to Dutton, as we have seen above, willpower is indeed likely correlated with obesity, because, as Miller argues, people lacking in willpower also likely lack the willpower to diet. 

However, a PhD supervisor surely has access to far more reliable information regarding a person’s personality and intelligence, including their conscientiousness and willpower, in the form of their application and CV, than is obtainable from their physique alone. 
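To put the same point in explicitly Bayesian terms, here is a minimal worked example; the numbers are purely hypothetical, chosen only to illustrate how a strong, individualized cue (a CV) swamps a weak one (physique), and the two cues are assumed conditionally independent for simplicity:

```latex
% H = "applicant lacks the willpower to finish a dissertation".
% Hypothetical prior and likelihoods:
% P(H) = 0.3,  P(obese | H) = 0.4,  P(obese | not H) = 0.25.
\[
P(H \mid \text{obese})
= \frac{0.4 \times 0.3}{0.4 \times 0.3 + 0.25 \times 0.7}
\approx 0.41
\]
% A strong CV is a far more diagnostic cue, pointing the other way:
% P(strong CV | H) = 0.05,  P(strong CV | not H) = 0.5.
\[
P(H \mid \text{obese, strong CV})
= \frac{0.05 \times 0.41}{0.05 \times 0.41 + 0.5 \times 0.59}
\approx 0.065
\]
```

On these made-up figures, physique nudges the prior only modestly, while the CV all but reverses it, which is exactly why the supervisor should weight the application over the waistline.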

Thus, the outrage that this tweet provoked, though indeed excessive and a reflection of the intolerant climate of so-called ‘cancel culture’ and public shaming in the contemporary west, was not entirely unwarranted.

Similarly, if geneticist James Watson did indeed say, as he was rather hilariously reported as having said, that “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them”, he was indeed being prejudiced, because, again, an employer has access to more reliable information regarding applicants than their physique, namely, again, their application and CV. 

Obesity may often, perhaps even usually, be indicative of low levels of conscientiousness, willpower and intelligence. But it is not always so indicative. It may instead, as Dutton himself points out, reflect only high extraversion, or indeed an unusual medical condition.

However, even at job interviews, employers do still, in practice, judge people partly by their appearance. Moreover, we often regard them as well within their rights to do so. 

This is, of course, why we advise applicants to dress smartly for their interviews.

Endnotes

[1] If ‘How to Judge People by What They Look Like’ is indeed a very short book, then it must be conceded that this is, by comparison, a rather long and detailed book review. While, as will become clear in the remainder of this review, I have many points of disagreement with Dutton (as well as many points of agreement) and there are many areas where I feel he is mistaken, nevertheless the length of this book review is, in itself, testament to the amount of thinking that Dutton’s short pamphlet has inspired in this reader.

[2] In addition, I suspect few of the researchers whose work Dutton cites ever even regarded themselves as working within, or somehow reviving, the field of physiognomy. On the contrary, despite researching and indeed demonstrating robust associations between morphology and behavior, this idea may never even have occurred to them.
Thus, for example, I was already familiar with some of this literature even before reading Dutton’s book, but it never occurred to me that what I was reading was a burgeoning literature in a revived science of physiognomy. Indeed, despite being familiar with much of this literature, I suspect that, if questioned directly on the matter, I may well have agreed with the general consensus that physiognomy was a discredited pseudoscience.
Thus, one of the chief accomplishments of Dutton’s book is simply to establish that this body of research does indeed represent a revived science of physiognomy, and should be recognized and described as such, even if the researchers themselves rarely if ever use the term.

[3] Instead, it would surely uncover mostly papers in the field of ‘history of science’, documenting the history of physiognomy as a supposedly discredited pseudoscience, along with such other real and supposed pseudosciences as phrenology and eugenics.

[4] The studies mentioned in the two paragraphs that precede this endnote are simply a few that I happen to have stumbled across that are relevant to Dutton’s theme and that I happen to have been able to recall. No doubt, any list of relevant studies that I could compile would be just as non-exhaustive as Dutton’s; my own list would be longer than his only because I have the advantage of having read Dutton’s book beforehand.

[5] Thus, a young person dressed as a hippy in the 60s and 70s was more likely to subscribe to certain (usually rather silly and half-baked) political beliefs, and also more likely to engage in recreational drug-use and live on a commune, while a young man dressed as a teddy boy in Britain in the 1950s, a skinhead in the 1970s and 80s, a football casual in the 1990s, or indeed a chav today, may be perceived as more likely to be involved in violent crime and thuggery. The goth subculture also seems to be associated with a certain personality type, and also with self-harm and suicide.

[6] The association between IQ and socioeconomic status is reviewed in The Bell Curve: Intelligence and Class Structure in American Life (which I have reviewed here). The association between conscientiousness and socioeconomic status is weaker, probably because personality tests are a less reliable measure of conscientiousness than IQ tests are of intelligence, since the former rely on self-report. This is the equivalent of an IQ test that, instead of asking test-takers to solve logical puzzles, simply asked them how good they perceived themselves to be at solving logical puzzles. Nevertheless, conscientiousness, as measured in personality tests, does indeed correlate with earnings and career advancement, albeit less strongly than does IQ (Spurk & Abele 2011; Wiersma & Kappe 2016).

[7] If some fat people are low in conscientiousness and intelligence, and others merely high in extraversion, there may, I suspect, also be a third category of people who do have self-control and self-discipline, but simply do not much care about whether they are fat or thin. However, given both the social stigma and the health implications of obesity, this group is, I suspect, small. It is also likely to be disproportionately young, since the health dangers of obesity increase with age, and male, since both the social stigma of fatness, and especially its negative impact on mate value and attractiveness, seem to be greater for females.

[8] Actually, whether roid rage is a real thing is a matter of some dispute. Although users of anabolic steroids do indeed have higher rates of violent crime, it has been suggested that this may be at least in part because the type of people who choose to use steroids are precisely those already prone to violence. In other words, there is a problem of self-selection bias.
Moreover, the association between testosterone and aggressive behaviours is more complex than this simple analysis assumes. One leading researcher in the field, Allan Mazur, argues that testosterone is not associated with aggression or violence per se, but only with dominance behaviours, which only sometimes manifest themselves through violent aggression. Thus, for example, a leading politician, business tycoon or chief executive of a large company may have high testosterone and be able to exercise dominance without resort to violence. However, a prisoner, being of low status in the legitimate world, is likely only able to assert dominance through violence (see Mazur & Booth 1998; Mazur 2009).

[9] Here, however, it is important to distinguish between the so-called organizing and ‘activating’ effects of testosterone. The latter can be equated with levels of circulating testosterone at any given time. The former, however, involves androgen levels at certain key points during development, especially in utero (i.e. in the womb) and during puberty, which thenceforth have long-term effects on both morphology and behaviour (and a person’s degree of susceptibility to circulating androgens).
Facial bone structure is presumably largely an effect of the ‘organizing’ effects of testosterone during development, though jaw shape is also affected by the size of the jaw muscles, which can be increased, it has been claimed, by regularly chewing gum. Bodily muscularity, on the other hand, is affected both by levels of circulating testosterone (hence the effects of anabolic steroids on muscle growth) and by levels of testosterone during development, not least because high levels of androgens during development increase the number and sensitivity of androgen receptors, which affect the potential for muscular growth.

[10] In this section, I have somewhat conflated spatial ability, mathematical ability and autism traits. However, these are themselves, of course, not the same, though each is probably associated with the others, albeit again not necessarily in a linear relationship.

[11] I have been unable to discover any evidence for this supposed association between lack of balding and impotence in men. On the contrary, googling the terms ‘male pattern baldness’ and ‘impotence’ finds only a few results, mostly people speculating as to whether there is a positive correlation between balding and impotence in males, if only on the very unpersuasive ground that the two conditions tend to have a similar age of onset (i.e. around middle age).

[12] In contrast, the shaven-head skinhead-look, or close-cropped military-style induction cut, buzz cut or high and tight, is, of course, perceived as a quintessentially masculine, and even thuggish, hairstyle. This is perhaps because, in addition to contrasting with the long hair typically favoured by females, it also, by reducing the apparent size of the upper part of the head, makes the lower part of the face (e.g. the jaw) and the body appear comparatively larger, and large jaws are a masculine trait. Thus, Nancy Etcoff observes:

“The absence of hair on the head serves to exaggerate signals of strength. The smaller the head the bigger the look of the neck and body. Bodybuilders often shave or crop their hair, the size contrast between the head and neck and shoulders emphasizing the massiveness of the chest” (Survival of the Prettiest: p126).

[13] The source that Dutton cites for this claim is (Nieschlag & Behr 2013).

[14] In America, it has been suggested, exceptionally tall boys are not treated with testosterone to prevent their growing any taller. Instead, they are encouraged to attempt a successful career in professional basketball.

[15] On the other hand, one Swedish study investigating the association between height and violent crime found that the shortest men in Sweden were convicted of violent crimes at almost double the rate of the tallest men. However, after controlling for potential confounds (e.g. socioeconomic status and intelligence, both of which positively correlate with height), the association was reversed, with taller men having a somewhat higher likelihood of being convicted of a violent crime (Beckley et al 2014).

[16] According to Dutton, the correlation between height and IQ is only about r = 0.1. This is a modest correlation even by psychology and social science standards.

[17] In other words, although modest in magnitude, the association between height and IQ has been replicated in so many studies with sufficiently large and representative sample sizes that we can be certain that it represents a real association in the population at large, not an artifact of small, unrepresentative or biased sampling in just one or a few studies. 

[18] An alternative explanation for the absence of a within-family correlation between height and intelligence is that some factor that differs between families causes both increased height and increased intelligence. An obvious candidate would be malnutrition. However, in modern western economies, where there is a superabundance of food, starvation is almost unknown, and obesity is far more common than undernourishment even among the ostensible poor (indeed, as noted by Dutton, especially among the ostensible poor), it is doubtful that undernourishment is a significant factor in explaining either small stature or low IQs, especially since height is mostly heritable, at least by the time a person reaches adulthood.

[19] The conventional wisdom is that beards went out of fashion during the twentieth century precisely because their role in spreading germs came to be more widely known. Thus, Nancy Etcoff writes:

“Facial hair has been less abundant in this century than in centuries past (except in the 1960s) partly because medical opinion turned against them. As people became increasingly aware of the role of germs in spreading diseases, beards came to be seen as repositories of germs. Previously, they had been advised by doctors as a means to protect the throat and filter air to the lungs” (Survival of the Prettiest: p156-7).

Of course, this is not at all inconsistent with the notion that beards are perceived as attractive by women precisely because they represent a potential vector of infection and hence advertise the health and robustness of the male whom they adorn, as contended by Dutton. On the contrary, the fact that beards are indeed associated with infection is consistent with, and supportive of, Dutton’s theory.

[20] It would be interesting to discover whether these findings generalize to other, non-western cultures, especially those where beards are universal or the norm (e.g. among Muslims in the Middle East). It would also be interesting to discover whether women’s perceptions regarding the attractiveness of men with beards have changed as beards have gone in and out of fashion.

[21] Perhaps this is because, although age is still associated with status, it is no longer as socially acceptable for older men to marry, or enter sexual relationships with, much younger women or girls as it was in the past, and such relationships are now less common. Indeed, in the last few years, this has become especially socially unacceptable. Therefore, given that most men are maximally attracted to females in this age category, they prefer to be thought of as younger so that it is more acceptable for them to seek relationships with younger, more attractive females.
Actually, while older men tend to have higher status on average, I suspect that, after controlling for status, it is younger men who would be perceived as more attractive. Certainly, a young multi-millionaire would surely be considered a more eligible bachelor than an older homeless man. Therefore, age per se is not attractive; only high status is attractive, which happens to correlate with age.

[22] This idea is again based on Philippe Rushton’s Differential K theory, which I have reviewed here and here.

[23] Dutton is apparently aware of this objection. He acknowledges, albeit in a different book, that “Intelligence, in general, is associated with health” (Why Islam Makes You Stupid: p174). However, in this same book, he also claims that: 

“Intelligence has been shown to be only weakly associated with mutational load” (Why Islam Makes You Stupid: p169).

Interestingly, Dutton also claims in this book: 

“Very high intelligence predicts autism” (Why Islam Makes You Stupid: p175).

This claim, namely that exceptionally high intelligence is associated with autism, seems anecdotally plausible. Certainly, autism seems to have a complex and interesting relationship with intelligence.
Unfortunately, however, Dutton does not cite a source for the claim that exceptionally high intelligence is associated with autism. Nevertheless, according to data cited here, there is indeed greater variance in the IQs of autistic people, with greater proportions of autistic people at both tails of the bell curve; the author even refers to an ‘inverted bell curve’ for intelligence among autistic people, though, even according to her own cited data, this appears to be an exaggeration. However, this is not a scholarly source, but rather appears to be the website of a not entirely disinterested advocacy group, and it is not entirely clear from where the data derive, the piece referring only to data from the Netherlands collected by the Dutch Autism Register (NAR).

[24] Admittedly, Dutton does cite one study showing that subjects can identify Mormons from facial photographs alone, and that the two groups differed in skin quality (Rule et al 2010). However, this might reflect merely the health advantages resulting from the religiously imposed abstention from the consumption of alcohol, tobacco, tea and coffee.
For what it’s worth, my own subjective and entirely anecdotal impression is almost the opposite of Dutton’s, at least here in secular modern Britain, where anyone who identifies as Christian, let alone a fundamentalist, unless perhaps s/he is elderly, tends to be regarded as a bit odd.
An interesting four-part critique of this theory, along very different lines from my own, is provided by Scott A McGreal at the Psychology Today website, see here, here, here, and here. Dutton responds with a two-part rejoinder here and here.

[25] However, when it comes to actual politicians, I suspect this difference may be attenuated, or even nonexistent, since pursuing a career in politics is, by its very nature, a very untraditional, and unfeminine, career choice, most likely because, in Darwinian terms, political power has a greater reproductive payoff for men than for women. Thus, it is hardly surprising that leading female politicians, even those who theoretically champion traditional sex roles, tend themselves to be quite butch and masculine in appearance and often as unattractive as their leftist opponents (e.g. Ann Widdecombe). Indeed, even Ann Coulter, a relatively attractive woman, at least by the standards of female political figures, has been mocked for her supposedly mannish appearance and pronounced Adam’s apple.
Moreover, most leading politicians are at least middle-aged, and female attractiveness peaks very young, in the mid- to late-teens and early twenties.

[26] Another medical condition associated with a specific ‘look’, as well as with mental disability, is cretinism, though, due to medical advances, most people with the condition in western societies develop normally and no longer manifest either the distinctive appearance or the mental disability.

References 

Aldridge et al (2011) Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Molecular Autism 2(1):15.
Apicella et al (2015) Hadza Hunter-Gatherer Men do not Have More Masculine Digit Ratios (2D:4D) American Journal of Physical Anthropology 159(2):223-32. 
Bagenjuk et al (2019) Personality Traits and Obesity, International Journal of Environmental Research and Public Health 16(15): 2675. 
Bailey & Hurd (2005) Finger length ratio (2D:4D) correlates with physical aggression in men but not in women. Biological Psychology 68(3):215-22. 
Batrinos (2014) The endocrinology of baldness. Hormones 13(2): 197–212. 
Beckley et al (2014) Association of height and violent criminality: results from a Swedish total population study. International Journal of Epidemiology 43(3):835-42 
Benderlioglu & Nelson (2005) Digit length ratios predict reactive aggression in women, but not in men Hormones and Behavior 46(5):558-64. 
Berggren et al (2017) The right look: Conservative politicians look better and voters reward it Journal of Public Economics 146:  79-86. 
Blanchard & Lyons (2010) An investigation into the relationship between digit length ratio (2D: 4D) and psychopathy, British Journal of Forensic Practice 12(2):23-31. 
Buru et al (2017) Evaluation of the hand anthropometric measurement in ADHD children and the possible clinical significance of the 2D:4D ratio, Eastern Journal of Medicine 22(4):137-142.
Case & Paxson (2008) Stature and status: Height, ability, and labor market outcomes, Journal of Political Economy 116(3): 499–532. 
Cash (1990) Losing Hair, Losing Points?: The Effects of Male Pattern Baldness on Social Impression Formation. Journal of Applied Social Psychology 20(2):154-167. 
De Waal et al (1995) High dose testosterone therapy for reduction of final height in constitutionally tall boys: Does it influence testicular function in adulthood? Clinical Endocrinology 43(1):87-95. 
Dixson & Vasey (2012) Beards augment perceptions of men’s age, social status, and aggressiveness, but not attractiveness, Behavioral Ecology 23(3): 481–490. 
Dutton et al (2017) The Mutant Says in His Heart, “There Is No God”: the Rejection of Collective Religiosity Centred Around the Worship of Moral Gods Is Associated with High Mutational Load Evolutionary Psychological Science 4:233–244. 
Dyshniku et al (2015) Minor Physical Anomalies as a Window into the Prenatal Origins of Pedophilia, Archives of Sexual Behavior 44:2151–2159.
Elias et al (2012) Obesity, Cognitive Functioning and Dementia: Back to the Future, Journal of Alzheimer’s Disease 30(s2): S113-S125. 
Ellis & Hoskin (2015) Criminality and the 2D:4D Ratio: Testing the Prenatal Androgen Hypothesis, International Journal of Offender Therapy and Comparative Criminology 59(3):295-312 
Fossen et al (2022) 2D:4D and Self-Employment: A Preregistered Replication Study in a Large General Population Sample Entrepreneurship Theory and Practice 46(1):21-43. 
Gouchie & Kimura (1991) The relationship between testosterone levels and cognitive ability patterns Psychoneuroendocrinology 16(4): 323-334. 
Griffin et al (2012) Varsity athletes have lower 2D:4D ratios than other university students, Journal of Sports Sciences 30(2):135-8. 
Hilgard et al (2019) Null Effects of Game Violence, Game Difficulty, and 2D:4D Digit Ratio on Aggressive Behavior, Psychological Science 30(1):095679761982968 
Hollier et al (2015) Adult digit ratio (2D:4D) is not related to umbilical cord androgen or estrogen concentrations, their ratios or net bioactivity, Early Human Development 91(2):111-7 
Hönekopp & Urban (2010) A meta-analysis on 2D:4D and athletic prowess: Substantial relationships but neither hand out-predicts the other, Personality and Individual Differences 48(1):4-10. 
Hoskin & Ellis (2014) Fetal testosterone and criminality: Test of evolutionary neuroandrogenic theory, Criminology 53(1):54-73. 
Ishikawa et al (2001) Increased height and bulk in antisocial personality disorder and its subtypes. Psychiatry Research 105(3):211-219. 
Işık et al (2020) The Relationship between Second-to-Fourth Digit Ratios, Attention-Deficit/Hyperactivity Disorder Symptoms, Aggression, and Intelligence Levels in Boys with Attention-Deficit/Hyperactivity Disorder, Psychiatry Investigation 17(6):596–602. 
Janowski et al (1994) Testosterone influences spatial cognition in older men. Behavioral Neuroscience 108(2):325-32. 
Jokela et al (2012) Association of personality with the development and persistence of obesity: a meta-analysis based on individual–participant data, Etiology and Pathophysiology 14(4): 315-323. 
Kanazawa (2014) Intelligence and obesity: Which way does the causal direction go? Current Opinion in Endocrinology, Diabetes and Obesity (5):339-44. 
Kandel et al (1989) Minor physical anomalies and recidivistic adult violent criminal behavior, Acta Psychiatrica Scandinavica 79(1) 103-107. 
Kangassalo et al (2011) Prenatal Influences on Sexual Orientation: Digit Ratio (2D:4D) and Number of Older Siblings, Evolutionary Psychology 9(4):496-508 
Kerry & Murray (2019) Is Formidability Associated with Political Conservatism?  Evolutionary Psychological Science 5(2): 220–230. 
Keshavarz et al (2017) The Second to Fourth Digit Ratio in Elite and Non-Elite Greco-Roman Wrestlers, Journal of Human Kinetics 60: 145–151. 
Kleisner et al (2014) Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women. PLoS ONE 9(3): e81237. 
Kordsmeyer et al (2018) The relative importance of intra- and intersexual selection on human male sexually dimorphic traits, Evolution and Human Behavior 39(4): 424-436. 
Kosinski & Wang (2018) Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114(2):246-257. 
Kyselicová et al (2021) Autism spectrum disorder and new perspectives on the reliability of second to fourth digit ratio Developmental Pyschobiology 63(6). 
Li et al (2016) The relationship between digit ratio and sexual orientation in a Chinese Yunnan Han population, Personality and Individual Differences 101:26-29. 
Lippa (2003) Are 2D:4D finger-length ratios related to sexual orientation? Yes for men, no for women, Journal of Personality &Social Psychology 85(1):179-8 
Lolli et al (2017) A comprehensive allometric analysis of 2nd digit length to 4th digit length in humans, Proceedings of the Royal Society B: Biological Sciences 284(1857):20170356 
Malamed (1992) Personality correlates of physical height. Personality and Individual Differences 13(12):1349-1350. 
Manning & Taylor (2001) Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans, Evolution & Human Behavior 22(1):61-69. 
Manning et al (2001) The 2nd to 4th digit ratio and autism, Developmental Medicine & Child Neurology 43(3):160-164. 
Martel et al (2008) Masculinized Finger-Length Ratios of Boys, but Not Girls, Are Associated With Attention-Deficit/Hyperactivity Disorder, Behavioral Neuroscience 122(2):273-81. 
Martin et al (2015) Associations between obesity and cognition in the pre-school years, Obesity 24(1) 207-214 
Mazur & Booth (1998) Testosterone and dominance in men. Behavioral and Brain Sciences, 21(3), 353–397. 
Mazur (2009) Testosterone and violence among young men. In Walsh & Beaver (eds) Biosocial Criminology: New Directions in theory and Research. New York: Routledge. 
Moffat & Hampson (1996) A curvilinear relationship between testosterone and spatial cognition in humans: Possible influence of hand preference. Psychoneuroendocrinology. 21(3):323-37. 
Murray et al (2014) How are conscientiousness and cognitive ability related to one another? A re-examination of the intelligence compensation hypothesis, Personality and Individual Differences, 70, 17–22. 
Nieshclag & Behr (2013) Testosterone Therapy. In Nieschlag & Behr (eds) Andrology: Male Reproductive Health and Dysfunction. New York: Springer. 
Ozgen et al (2010) Minor physical anomalies in autism: a meta-analysis. Molecular Psychiatry 15(3):300–7. 
Ozgen et al (2011) Morphological features in children with autism spectrum disorders: a matched case-control study. Journal of Autism and Developmental Disorders 41(1):23-31. 
Peterson & Palmer (2017) Effects of physical attractiveness on political beliefs. Politics and the Life Sciences 36(02):3-16 
Persico et al (2004) The Effect of Adolescent Experience on Labor Market Outcomes: The Case of Height, Journal of Political Economy 112(5): 1019-1053. 
Pratt et al (2016) Revisiting the criminological consequences of exposure to fetal testosterone: a meta-analysis of the 2d:4d digit ratio, Criminology 54(4):587-620. 
Price et al (2017). Is sociopolitical egalitarianism related to bodily and facial formidability in men? Evolution and Human Behavior, 38, 626-634. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans, Evolution and Human Behavior 31(3):157-175. 
Rammstedt et al (2016) The association between personality and cognitive ability: Going beyond simple effects, Journal of Research in Personality 62: 39-44. 
Rosenberg & Kagan (1987) Iris pigmentation and behavioral inhibition Developmental Psychobiology 20(4):377-92. 
Rosenberg & Kagan (1989) Physical and physiological correlates of behavioral inhibition Developmental Psychobiology 22(8):753-70. 
Rule et al (2010) On the perception of religious group membership from faces. PLoS ONE 5(12):e14241. 
Salas-Wright & Vaughn (2016) Size Matters: Are Physically Large People More Likely to be Violent? Journal of Interpersonal Violence 31(7):1274-92. 
Spurk & Abele (2011) Who Earns More and Why? A Multiple Mediation Model from Personality to Salary, Journal of Business and Psychology 26: 87–103. 
Sutin et al (2011) Personality and Obesity across the Adult Lifespan Journal of Personality and Social Psychology 101(3): 579–592. 
Valla et al (2011). The accuracy of inferences about criminality based on facial appearance. Journal of Social, Evolutionary, and Cultural Psychology, 5(1), 66-91. 
Voracek et al (2011) Digit ratio (2D:4D) and sex-role orientation: Further evidence and meta-analysis, Personality and Individual Differences 51(4): 417-422. 
Weinberg et al (2007) Minor physical anomalies in schizophrenia: A meta-analysis, Schizophrenia Research 89: 72–85. 
Wiersma & Kappe 2015 Selecting for extroversion but rewarding for conscientiousness, European Journal of Work and Organizational Psychology 26(2): 314-323. 
Williams et al (2000) Finger-Length Ratios and Sexual Orientation, Nature 404(6777):455-456. 
Xu et al (2011) Minor physical anomalies in patients with schizophrenia, unaffected first-degree relatives, and healthy controls: a meta-analysis, PLoS One 6(9):e24129. 
Xu & Zheng (2016) The Relationship Between Digit Ratio (2D:4D) and Sexual Orientation in Men from China, Archives of Sexual Behavior 45(3):735-41. 

Desmond Morris’s ‘The Naked Ape’: A Pre-Sociobiological Work of Human Ethology 

Desmond Morris, The Naked Ape: A Zoologist’s Study of the Human Animal (New York: McGraw-Hill Book Company, 1967)

First published in 1967, ‘The Naked Ape’, a popular science classic authored by the already famous British zoologist and TV presenter Desmond Morris, belongs to the pre-sociobiological tradition of human ethology.

In the most general sense, the approach adopted by the human ethologists – who included not only Morris, but also playwright Robert Ardrey, anthropologists Lionel Tiger and Robin Fox, and the brilliant Nobel Prize-winning ethologist, naturalist, zoologist, pioneering evolutionary epistemologist and part-time Nazi sympathizer Konrad Lorenz – was correct. 

They sought to study the human species from the perspective of zoology. In other words, they sought to adopt the disinterested perspective, and detachment, of, as Edward O Wilson was later to put it, “zoologists from another planet” (Sociobiology: The New Synthesis: p547). 

Thus, Morris proposed cultivating: 

“An attitude of humility that is becoming to proper scientific investigation… by deliberately and rather coyly approaching the human being as if he were another species, a strange form of life on the dissecting table” (p14-5).  

In short, Morris proposed to study humans just as a zoologist would any other species of non-human animal. 

Such an approach was an obvious affront to anthropocentric notions of human exceptionalism – and also a direct challenge to the rather less scientific approach of most sociologists, psychologists, social and cultural anthropologists and other such ‘professional damned fools’, who, at that time, almost all studied human behavior in isolation from, and largely ignorance of, biology, zoology, evolutionary theory and the scientific study of the behavior of all animals other than humans. 

As a result, such books inevitably attracted controversy and criticism. Such criticism, however, invariably missed the point. 

The real problem was not that the ethologists sought to study human behavior in just the same way a zoologist would study the behavior of any nonhuman animal, but rather that the study of the behavior of nonhuman animals itself remained, at this time, very much in its infancy. 

Thus, the field of animal behavior was to be revolutionized just a decade or so after the publication of ‘The Naked Ape’ by the approach that came to be known first as sociobiology, and now more often as behavioral ecology or, when applied to humans, evolutionary psychology.

These approaches, based on what became known as selfish gene theory, sought to understand behavior in terms of fitness maximization – in other words, on the basis of the recognition that organisms have evolved to engage in behaviors which tended to maximize their reproductive success in ancestral environments. 

Mathematical models, often drawn from economics and game theory, were increasingly employed. In short, behavioral biology was becoming a mature science. 
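To give a concrete flavour of the game-theoretic models in question, consider the best known of them, John Maynard Smith’s Hawk–Dove game. The sketch below is my own minimal illustration, with invented payoff values, not anything from Morris’s book: contestants compete over a resource of value V at a risk of injury cost C, and, when C exceeds V, the population settles at a stable mix of strategies at which hawks and doves do equally well.

```python
# Minimal sketch of Maynard Smith's Hawk-Dove game, the canonical example
# of the game-theoretic models that transformed behavioral ecology.
# V = value of the contested resource, C = cost of injury from fighting.
V, C = 2.0, 6.0  # illustrative values; C > V, so pure Hawk is not stable

def payoff(strategy, opponent):
    """Expected payoff to `strategy` against `opponent`."""
    if strategy == "hawk":
        return (V - C) / 2 if opponent == "hawk" else V
    return 0.0 if opponent == "hawk" else V / 2  # dove

# At the evolutionarily stable state (ESS), hawk and dove earn the same
# expected payoff; solving E(hawk) = E(dove) gives hawk frequency p = V/C.
p = V / C
e_hawk = p * payoff("hawk", "hawk") + (1 - p) * payoff("hawk", "dove")
e_dove = p * payoff("dove", "hawk") + (1 - p) * payoff("dove", "dove")
print(f"ESS hawk frequency: {p:.2f}")                           # 0.33
print(f"Payoffs at ESS: hawk={e_hawk:.2f}, dove={e_dove:.2f}")  # both 0.67
```

The point of such models is precisely what the earlier ethology lacked: a quantitative, falsifiable prediction (here, the equilibrium frequency of aggressive behavior) rather than a plausible-sounding narrative.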

In contrast, the earlier ethological tradition was, even at its best, very much a soft science. 

Indeed, much such work, for example Jane Goodall’s rightly-celebrated studies of the chimpanzees of Gombe, was almost pre-scientific in its approach, involving observation, recording and description of behaviors, but rarely the actual testing or falsification of hypotheses. 

Such research was obviously important. Indeed, Goodall’s was positively groundbreaking. 

After all, the observation of the behavior of an organism is almost a prerequisite for the framing of hypotheses about the behavior of that organism, since hypotheses are, in practice, rarely generated in an informational vacuum from pure abstract theory. 

However, such research was hardly characteristic of a mature and rigorous science. 

When hypotheses regarding the evolutionary significance of behavior patterns were formulated by early ethologists, this was done on a rather casual, ad hoc basis, involving a kind of ‘armchair adaptationism’ that could perhaps legitimately be dismissed as the spinning of, in Stephen Jay Gould’s famous phrase, ‘just so stories’.

Thus, a crude group selectionism went largely unchallenged. Yet, as George C Williams was to show, and Richard Dawkins later to forcefully reiterate in The Selfish Gene (reviewed here), behaviours are unlikely to evolve that benefit the group or species if they involve a cost to the inclusive fitness or reproductive success of the individual engaging in the behavior. 
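The logic of Williams’ objection can be illustrated with a toy replicator model – my own sketch, with invented parameters, not anything from Williams or Dawkins. A trait that benefits every group member equally, but costs only its bearer, is steadily displaced by ‘selfish’ free-riders, however well a group composed purely of altruists might fare:

```python
# Toy haploid model of why group-beneficial but individually costly
# ('altruistic') behavior is invaded by selfish free-riders. Everyone
# enjoys a group benefit proportional to the frequency of altruists,
# but only altruists pay the personal cost. Parameters are invented.
benefit, cost = 0.5, 0.1
freq_altruist = 0.99  # altruists start near fixation

for generation in range(200):
    group_bonus = benefit * freq_altruist  # shared by all, free-riders included
    w_altruist = 1.0 + group_bonus - cost  # bears the cost
    w_selfish = 1.0 + group_bonus          # free-rides
    mean_w = freq_altruist * w_altruist + (1 - freq_altruist) * w_selfish
    freq_altruist *= w_altruist / mean_w   # standard replicator update

print(f"Altruist frequency after 200 generations: {freq_altruist:.3f}")
# The frequency collapses toward zero: selfishness invades even though a
# population of pure altruists would have the higher average fitness.
```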

Robert Wright picks out a good example of this crude group selectionism from ‘The Naked Ape’ itself, quoting Morris’s claim that, over the course of human evolution: 

“To begin with, the males had to be sure that their females were going to be faithful to them when they left them alone to go hunting. So the females had to develop a pairing tendency” (p64). 

To anyone schooled in the rudiments of Dawkinsian selfish gene theory, the fallacy should be obvious. But, just in case we didn’t spot it, Wright has picked it out for us: 

“Stop right there. It was in the reproductive interests of the males for the females to develop a tendency toward fidelity? So natural selection obliged the males by making the necessary changes in the females? Morris never got around to explaining how, exactly, natural selection would perform this generous feat” (The Moral Animal: p56). 

In reality, couples have a conflict of interest here, and the onus is clearly on the male to evolve some mechanism of mate-guarding, though a female might conceivably evolve some way to advertise her fidelity if, by so doing, she secured increased male parental investment and provisioning, hence increasing her own reproductive success.[1]

In short, mating is Machiavellian. A more realistic view of human sexuality, rooted in selfish gene theory, is provided by Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here). 

Unsuccessful Societies? 

The problems with ‘The Naked Ape’ begin in the very first chapter, where Morris announces, rather oddly, that, in studying the human animal, he is largely uninterested in the behavior of contemporary foraging groups or other so-called ‘primitive’ peoples. Thus, he bemoans: 

“The earlier anthropologists rushed off to all kinds of unlikely corners of the world… scattering to remote cultural backwaters so atypical and unsuccessful that they are nearly extinct. They then returned with startling facts about the bizarre mating customs, strange kinship systems, or weird ritual procedures of these tribes, and used this material as though it were of central importance to the behaviour of our species as a whole. The work done by these investigators… did not tell us anything about the typical behaviour of typical naked apes. This can only be done by examining the common behaviour patterns that are shared by all the ordinary, successful members of the major cultures – the mainstream specimens who together represent the vast majority. Biologically, this is the only sound approach” (p10).[2]

Thus, today, political correctness has wholly banished the word ‘primitive’ from the anthropological lexicon. It is, modern anthropologists insist, demeaning and pejorative.  

Indeed, post-Boasian cultural anthropologists in America typically reject the very notion that some societies are more advanced than others, championing instead a radical cultural relativism and insisting we have much to learn from the lifestyle and traditions of hunter-gatherers, foragers, savage cannibals and other such ‘indigenous peoples’. 

Morris also rejects the term ‘primitive’ as a useful descriptor for hunter-gatherer and other technologically-backward peoples, but for diametrically opposite reasons. 

Thus, for Morris, to describe foraging groups as ‘primitive’ is rather to give them altogether too much credit: 

“The simple tribal groups that are living today are not primitive, they are stultified. Truly primitive tribes have not existed for thousands of years. The naked ape is essentially an exploratory species and any society that has failed to advance has in some sense failed, ‘gone wrong’. Something has happened to it to hold it back, something that is working against the natural tendencies of the species to explore and investigate the world around it” (p10). 

Instead, Morris proposes to focus on contemporary western societies, declaring: 

“North America… is biologically a very large and successful culture and can, without undue fear of distortion, be taken as representative of the modern naked ape” (p51). 

It is indeed true that, with the diffusion of American media and consumer goods, American culture is fast becoming ubiquitous. However, this is a very recent development in historical terms, let alone on the evolutionary timescale of most interest to biologists. 

Indeed, viewed historically and cross-culturally, it is we westerners who are the odd, aberrant ones. 

Thus, we have even been termed, in a memorable backronym, WEIRD (Western, Educated, Industrialized, Rich and Democratic), and hence quite aberrant, not only in terms of our lifestyle and prosperity, but also in terms of our psychology and modes of thinking.

Moreover, while extant foraging groups, and other pre-modern peoples that have survived into modern times, may indeed now be tottering on the brink of extinction, this, again, is a very recent development in evolutionary terms. 

Indeed, far from being aberrant, this was the lifestyle adopted by all humans throughout most of the time we have existed as a species, including during the period when most of our unique physical and behavioural adaptations evolved.

In short, although we may inhabit western cities today, this is not the environment where we evolved, nor that to which our brains and bodies are primarily adapted.[3]

Therefore, given that theirs was the lifestyle of our ancestors during the period when most of our behavioral and bodily adaptations evolved, primitive peoples must necessarily have a special place in any evolutionary theory of human behaviour.[4]

Indeed, Morris himself admits as much just a few pages later, where he acknowledges that: 

“The fundamental patterns of behavior laid down in our early days as hunting apes still shine through all our affairs, no matter how lofty they may be” (p40). 

Indeed, a major theme of ‘The Naked Ape’ is the extent to which the behaviour even of wealthy white westerners is nevertheless fundamentally shaped and dictated by the patterns of foraging set out in our ancient hunter-gatherer past. 

This, of course, anticipates the concept of the environment of evolutionary adaptedness (or EEA) in modern evolutionary psychology.

Thus, Morris suggests that the pattern of men going out to work to financially provision wives and mothers who stay home with dependent offspring reflects the ancient role of men as hunters provisioning their wives and children: 

“Behind the façade of modern city life there is the same old naked ape. Only the names have been changed: for ‘hunting’ read ‘working’, for ‘hunting grounds’ read ‘place of business’, for ‘home base’ read ‘house’, for ‘pair-bond’ read ‘marriage’, for ‘mate’ read ‘wife’, and so on” (p84).[5]

In short, while we must explain the behaviors of contemporary westerners, no less than those of primitive foragers, in the light of Darwinian evolution, nevertheless all such behaviors must be explained ultimately in terms of adaptations that evolved over previous generations under very different conditions. 

Indeed, in the sequel to ‘The Naked Ape’, Morris further focuses on this very point, arguing that modern cities, in particular, are unnatural environments for humans, rejecting the then-familiar description of cities as concrete jungles on the grounds that, whereas jungles are the “natural habitat” of animals, modern cities are very much an unnatural habitat for humans. 

Instead, he argues, the better analogy for modern cities is a Human Zoo: 

“The comparison we must make is not between the city dweller and the wild animal but between the city dweller and the captive animal. The city dweller is no longer living in conditions natural for his species. Trapped, not by a zoo collector, but by his own brainy brilliance, he has set himself up in a huge restless menagerie where he is in constant danger of cracking under the strain” (The Human Zoo: pvii). 

Nakedness 

Morris adopts what he calls a zoological approach. Thus, unlike modern evolutionary psychologists, he focuses as much on explaining our physiology and morphology as on our behavior and psychology. Indeed, it is in explaining the peculiarities of human anatomy that Morris is at his best.[6]

This begins, appropriately enough, with the trait that gives him his preferred name for our species, and also furnishes his book with its title – namely our apparent nakedness or hairlessness. 

Having justified calling us ‘The Naked Ape’ on zoological grounds, namely on the ground that this is the first thing the naturalist would notice upon observing our species, Morris then comes close to contradicting himself, admitting that, given the densely concentrated hairs on our heads (as well as the much less densely packed hairs which also cover much of the remainder of our bodies), we actually have more hairs on our bodies than do our closest relatives, chimpanzees.[7]

However, Morris summarily dispatches this objection: 

“It is like saying that because a blind man has a pair of eyes, he is not blind. Functionally, we are stark naked and our skin is fully exposed” (p42). 

Why then are we so strangely hairless? Neoteny, Morris proposes, provides part of the answer. 

This refers to the tendency of humans to retain into maturity traits that are, in other primates, restricted to juveniles, nakedness among them. 

Neoteny is a major theme in Morris’s book – and indeed in human evolution.

Besides our hairlessness, other human anatomical features that have been explained either partly or wholly in terms of neoteny, whether by Morris or by other evolutionists, include our brain size, growth patterns, inventiveness, upright posture, spinal curvature, smaller jaws and teeth, forward-facing vaginas, lack of a penis bone, the length of our limbs and the retention of the hymen into sexual maturity (see below). Indeed, many of these traits are explicitly discussed by Morris himself as resulting from neoteny.

However, while neoteny may supply the means by which our relative hairlessness evolved, it is not a sufficient explanation for why this development occurred, because, as Morris points out: 

“The process of neoteny is one of the differential retarding of developmental processes” (p43). 

In other words, humans are neotenous in respect of only some of our characters, not all of them. After all, an ape that remained infantile in all respects would never evolve, for the simple reason that it would never reach sexual maturity and hence remain unable to reproduce. 

Instead, only certain specific juvenile or infantile traits are retained into adulthood, and the question then becomes why these specific traits were the ones chosen by natural selection to be retained. 

Thus, Morris concludes: 

“It is hardly likely… that an infantile trait as potentially dangerous as nakedness was going to be allowed to persist simply because other changes were slowing down unless it had some special value to the new species” (p43). 

As to what this “special value” (i.e. selective advantage) might have been, Morris considers, in turn, various candidates.  

One theory considered by Morris relates to our susceptibility to insect parasites.  

Because humans, unlike many other primates, return to a home base to sleep most nights, we are, Morris reports, afflicted with fleas as well as lice (p28-9). Yet fur, Morris observes, is a good breeding ground for such parasites (p38-9). 

Perhaps, then, Morris imagines, we might have evolved hairlessness in order to minimize the problems posed by such parasites. 

However, Morris rejects this as an adequate explanation, since, he observes: 

“Few other den dwelling mammals… have taken this step” (p43). 

An alternative explanation implicates sexual selection in the evolution of human hairlessness.  

Substantial sex differences in hairiness, as well as the retention of pubic hairs around the genitalia, suggest that sexual selection may indeed have played a role in the evolution of our relative hairlessness as compared to other mammals.

Interestingly, this was Darwin’s own proposed explanation for the loss of body hair during the course of our evolution, Darwin himself writing in The Descent of Man that:

“No one supposes that the nakedness of the skin is any direct advantage to man; his body therefore cannot have been divested of hair through natural selection” (The Descent of Man).

Darwin instead proposes:

“Since in all parts of the world women are less hairy than men… we may reasonably suspect that this character has been gained through sexual selection” (The Descent of Man).

Morris, however, rejects this explanation on the grounds that: 

“The loss of bodily insulation would be a high price to pay for a sexy appearance alone” (p46). 

But other species often pay a high price for sexually selected bodily adornments. For example, the peacock sports a huge, brightly coloured and elaborate tail, which is costly to grow and maintain, impedes his mobility and is conspicuous to predators. Yet this elaborate tail is thought to have evolved through sexual selection or female choice.

Indeed, according to Amotz Zahavi’s handicap principle, it is precisely the high cost of such sexually-selected adornments that makes them reliable fitness indicators, and hence attractive to potential mates, because only a highly ‘fit’ male can afford to grow such a costly, inconvenient and otherwise useless appendage. 
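Zahavi’s logic is easy to put into toy numbers – the following is an illustrative sketch of my own, with invented values, not a model from Morris or Zahavi. If the same ornament costs a low-quality male far more survival than it costs a high-quality one, then only the high-quality male comes out ahead by growing it, and the ornament thereby becomes an honest advertisement of quality:

```python
# Toy illustration of Zahavi's handicap principle: an ornament is an
# honest signal when only high-quality males can afford its survival
# cost. All numbers are invented for illustration.
matings_plain, matings_ornamented = 1.0, 2.0  # the ornament doubles matings

def fitness(survival, matings):
    # Expected reproductive success = chance of surviving to breed x matings.
    return survival * matings

for quality, surv_plain, surv_ornamented in [
    ("high-quality", 0.9, 0.7),  # the ornament costs him little survival
    ("low-quality", 0.6, 0.2),   # the same ornament is crippling for him
]:
    plain = fitness(surv_plain, matings_plain)
    showy = fitness(surv_ornamented, matings_ornamented)
    best = "grow ornament" if showy > plain else "stay plain"
    print(f"{quality}: plain={plain:.2f}, ornamented={showy:.2f} -> {best}")

# high-quality: 0.90 vs 1.40 -> grow ornament
# low-quality:  0.60 vs 0.40 -> stay plain
# Since only high-quality males profit from the handicap, its presence
# reliably advertises quality, and female preference for it can persist.
```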

Morris also gives unusually respectful consideration to the highly-controversial aquatic ape theory as an explanation for human hairlessness. 

Thus, if humans did indeed pass through an aquatic, or at least amphibious, stage during our evolution, then, Morris agrees, this may indeed explain our hairlessness, since it is indeed true that other aquatic or semiaquatic mammals, such as whales, dolphins and seals, also seem to have jettisoned most of their fur over the course of their evolution. 

This is presumably because fur increases frictional drag in the water and hence impedes swimming ability – which is also among the reasons that elite swimmers remove their body-hair before competition. 

Indeed, our loss of body hair is among the human anatomical peculiarities that are most often cited by champions of aquatic ape theory in favor of the theory that humans did indeed pass through an aquatic phase during our evolution. 

However, aquatic ape theory is highly controversial, and is rejected by almost all mainstream evolutionists and biological anthropologists.  

As I have said, Morris, for his part, gives respectful consideration to the theory, and, unlike many other anthropologists and evolutionists, does not dismiss it out of hand as entirely preposterous and unworthy even of further consideration.[8]

On the contrary, Morris credits the theory as “ingenious”, acknowledging that, if true, it might explain many otherwise odd features of human anatomy, including not just our relative hairlessness, but also the retention of hairs on our head, the direction of the hairs on our backs, our upright posture, ‘streamlined’ bodies, dexterity of our hands and the thick extra layer of sub-cutaneous fat beneath our skin that is lacking in other primates. 

However, while acknowledging that the theory explains many curious anomalies of human physiology, Morris ultimately rejects ‘aquatic ape theory’ as altogether too speculative given the complete lack of fossil evidence in support of the theory – the same reason that most other evolutionists also reject the theory. 

Thus, he concludes: 

“It demands… the acceptance of a hypothetical major evolutionary phase for which there is no direct evidence” (p45-6). 

Morris also rejects the theory that was, according to Morris himself, the most widely accepted explanation for our hairlessness among other evolutionists at the time he was writing – namely, the theory that our hairlessness evolved as a cooling mechanism when our ancestors left the shaded forests for the open African savannah.

The problem with this theory, as Morris explains it, is that:  

“Exposure of the naked skin to the air certainly increases the chances of heat loss, but it also increases heat gain at the same time and risks damage from the sun’s rays” (p47). 

Thus, it is not at all clear that moving into the open savannah would indeed select for hairlessness. Otherwise, as Morris points out, we might expect other carnivorous, predatory mammals such as lions and jackals, who also inhabit the savannah, to have similarly jettisoned most of their fur. 

Ultimately, however, Morris accepts instead a variant on this idea – namely that hairlessness evolved to prevent overheating while chasing prey when hunting. 

However, this fails to explain why it is men’s bodies that are generally much hairier than those of women, even though, cross-culturally, in most foraging societies, it is men who do most, if not all, of the hunting.

It also raises the question as to why other mammalian carnivores, including some, such as lions and jackals, that also inhabit the African savannah and other similar environments, have not similarly shed their body hair – especially since the latter rely more on their speed to catch prey, whereas humans, armed with arrows and javelins as well as hunting dogs, do not always have to catch prey by hand in order to kill it. 

I would tentatively venture an alternative theory, one which evidently did not occur to Morris – namely, that perhaps our hairlessness evolved in concert with our invention and use of clothing (e.g. animal hides) – i.e. a case of gene-culture coevolution.

Clothing would provide an alternative means of protection from both sun and cold alike, but one that has the advantage that, unlike bodily fur, it can be discarded (and put back on) on demand. 

This explanation suggests that, paradoxically, we became naked apes at the same time as, and indeed precisely because, we had also become clothed apes. 
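For what it is worth, the feedback loop I am proposing can be captured in a toy gene-culture coevolution model – entirely my own illustrative sketch, with invented parameters. Clothing use spreads culturally, and the direction of selection on a hypothetical ‘hairless’ allele flips once clothing becomes sufficiently common:

```python
# Toy gene-culture coevolution sketch of the clothing hypothesis floated
# above. Clothing adoption spreads by imitation, while the fitness of a
# 'hairless' allele depends on how common clothing is: hairlessness is
# mildly costly when unclothed, but mildly beneficial (e.g. fewer
# ectoparasites) once clothing substitutes for fur. Numbers invented.
clothing = 0.05   # fraction of the population using clothing
hairless = 0.01   # frequency of the hairless allele

for generation in range(300):
    # Culture: simple logistic spread of clothing by imitation.
    clothing += 0.05 * clothing * (1 - clothing)
    # Genes: hairlessness pays off only insofar as clothing is common.
    w_hairless = 1.0 + 0.05 * clothing - 0.04 * (1 - clothing)
    w_hairy = 1.0
    mean_w = hairless * w_hairless + (1 - hairless) * w_hairy
    hairless *= w_hairless / mean_w  # replicator update

print(f"Clothing use: {clothing:.2f}, hairless allele frequency: {hairless:.2f}")
# While clothing is rare the hairless allele is weakly selected against;
# once clothing passes a threshold frequency the sign of selection flips,
# and hairlessness can sweep: naked apes because, and when, clothed apes.
```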

The Sexiest Primate? 

One factor said to have contributed to the book’s commercial success was the extent to which its thesis chimed with the prevailing spirit of the age during which it was first published, namely the 1960s. 

Thus, as already alluded to, it presented, in many ways, an idealized and romantic version of human nature, with its crude group-selectionism and emphasis on cooperation within groups without a concomitant emphasis on conflict between groups, and its depiction of humans as a naturally monogamous pair-bonding species, without a concomitant emphasis on the prevalence of infidelity, desertion, polygamy, Machiavellian mating strategies and even rape.  

Another element that jibed with the zeitgeist of the sixties was Morris’s emphasis on human sexuality, with Morris famously declaring: 

“The naked ape is the sexiest primate alive” (p64). 

Are humans indeed the ‘sexiest’ of primates? How can we assess this claim? It depends, of course, on precisely how we define ‘sexiness’. 

Obviously, if beauty is in the eye of the beholder, then sexiness is located in a rather different part of the male anatomy, but equally subjective in nature. 

Thus, humans like ourselves find other humans more sexy than other primates (or most of us do) because we have evolved to do so. A male chimpanzee, however, would likely disagree and regard a female chimpanzee as sexier. 

However, Morris presumably has something else in mind when he describes humans as the “sexiest” of primates. 

What he seems to mean is that sexuality and sexual behavior permeates the life of humans to a greater degree than for other primates. Thus, for example, he cites as evidence the extended or continuous sexual receptivity of human females, writing: 

“There is much more intense sexual activity in our own species than in any other primates” (p56). 

However, this claim is difficult to maintain once one has read about the behavior of some of our primate cousins. Thus, for example, both chimpanzees and especially bonobos, our closest relatives among extant non-human primates, are far more promiscuous than all but the sluttiest of humans.

Indeed, one might cynically suggest that what Morris had most in mind when he described humans as “the sexiest primate alive” was simply a catchy marketing soundbite that very much tapped into the zeitgeist of the era (i.e. the 1960s) and might help boost sales for his book. 

Penis Size

As further evidence for our species’ alleged “sexiness”, Morris also cites the supposedly unusually large size of the human penis, reporting: 

“The [human] male has the largest penis of any primate. It is not only extremely long when fully erect, but also very thick when compared with the penises of other species” (p80). 

This claim, namely that the human male has an unusually large penis, may originate with Morris, and has certainly enjoyed wide currency in subsequent decades. 

Thus, competing theories have been formulated to account for the (supposedly) unusual size of our penes.

One idea is that our large penes evolved through sexual selection, more specifically female choice, with females preferring either the appearance, or the internal ‘feel’, of a large penis during coitus, and hence selecting for increased penis size among men (e.g. Mautz et al 2013; The Mating Mind: p234-6).

Of course, one might argue that the internal ‘feel’ of a large penis during intercourse comes a bit late for mate choice to operate, since, by this time, the choice in question has already been made. Indeed, in cultures where, prior to the immediate initiation of sexual intercourse, the genitalia are usually covered with clothing, even exercising mate choice on the basis of the external appearance of the penis, especially of an erect penis, might prove difficult or, at the very least, socially awkward.

However, given that, in humans, most sexual intercourse is non-reproductive (i.e. does not result in conception, let alone in offspring), the idea is not entirely implausible.

This idea, namely that our large penes evolved through sexual selection, dovetails neatly with Richard Dawkins’ tentative suggestion, in an endnote appended to later editions of The Selfish Gene (reviewed here), that the capacity to maintain an erection (presumably especially a large erection) without a penis bone (since most other primates do possess a penis bone) may function as an honest signal of health, in accordance with Zahavi’s handicap principle, an idea I have previously discussed here (The Selfish Gene: p307-8).

An alternative explanation for the relatively large size of our penes implicates sperm competition. On this view, human penes are designed to remove sperm deposited by rival males in the female reproductive tract by functioning as a “suction piston” during intercourse, as I discuss below (Human Sperm Competition: p170-171; Gallup & Burch 2004; Gallup et al 2004; Goetz et al 2005; Goetz et al 2007). 

Yet, in fact, according to Alan F Dixson, the human penis is not unusually long by primate standards, being roughly the same length as that of the chimpanzee (Sexual Selection and the Origins of Human Mating Systems: p64). 

Instead, Dixson reports: 

“The erect human penis is comparable in length to those of other primates, in relation to body size. Only its circumference is unusual when compared to the penes of other hominids” (Sexual Selection and the Origins of Human Mating Systems: p65). 

The human penis is unusual, then, only in its width or girth. 
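Comparisons ‘in relation to body size’, of the kind Dixson is making here, are conventionally performed on a log-log scale: one regresses log organ size on log body mass across species, and then asks whether a given species sits above or below the fitted scaling line (i.e. examines its residual). The sketch below illustrates the method with invented numbers, not real primate data:

```python
# Minimal sketch of an allometric comparison: fit log(organ size) against
# log(body mass) across species by least squares, then read off each
# species' residual (>0 means large for its body size). The data below
# are invented for illustration, not real primate measurements.
import math

# (species, body mass in kg, erect penis length in cm) -- hypothetical
data = [("A", 5, 4), ("B", 10, 5), ("C", 45, 8), ("D", 70, 14), ("E", 120, 11)]

xs = [math.log(mass) for _, mass, _ in data]
ys = [math.log(length) for _, _, length in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx  # ordinary least-squares fit of the scaling line

for (name, _, _), x, y in zip(data, xs, ys):
    residual = y - (intercept + slope * x)
    print(f"species {name}: residual = {residual:+.2f}")
```

It is in precisely this residual sense that, on Dixson’s figures, human penis length is unremarkable while human penis girth is not.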

As to why our penes are so wide, the answer is quite straightforward, and has little to do with the alleged ‘sexiness’ of the human species, whatever that means. 

Instead, it is a simple, if indirect, reflection of our increased brain-size.

Increased brain-size first selected for changes in the size and shape of female reproductive anatomy. This, in turn, led to changes in male reproductive anatomy.

Thus, Bowman suggests: 

“As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008). 

Similarly, Robin Baker and Mark Bellis report: 

“The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: Copulation, Masturbation and Infidelity: p171). 

In turn, larger vaginas select for larger penises in order to fill this larger vagina (Bowman 2008).  

Interestingly, this theory directly contradicts the alleged claim of infamous race scientist Philippe Rushton (whose work I have reviewed here) that there is an inverse correlation between brain-size and penis-size, a relationship that supposedly explains race differences in brain and genital size. Thus, Rushton was infamously quoted as observing: 

“It’s a trade off, more brains or more penis. You can’t have everything.”[9]

On the contrary, this analysis suggests that, at least as between species (and presumably as between sub-species, i.e. races, as well), there is a positive correlation between brain-size and penis-size.[10]

According to Baker and Bellis, one reason male penis size tracks female vagina size (both being relatively large, and especially wide, in humans) is that the penis functions as, in Baker and Bellis’s words, a “suction piston” during intercourse, the repeated thrusting functioning to remove any sperm previously deposited by rival males – a form of sperm competition.

Thus, they report:

“In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171). 

Interestingly, this theory – namely, that the human penis functions as a sperm displacement device – although seemingly fanciful, actually explains some otherwise puzzling aspects of human coitus (and presumably of coitus in some other species too), such as its relatively extended duration, the male refractory period and the related Coolidge effect – i.e. why a male cannot immediately recommence intercourse after orgasm, unless perhaps with a new female (though this exception has yet to be experimentally demonstrated in humans), since to do so would maladaptively remove his own sperm from the female reproductive tract. 

Indeed, the theory even has some empirical support (Gallup & Burch 2004; Goetz et al 2005; Goetz et al 2007), including some delightful experiments involving sex toys of various shapes and sizes (Gallup et al 2004). 

Morris writes:

“[Man] is proud that he has the biggest brain of all the primates, but attempts to conceal the fact that he also has the biggest penis, preferring to accord this honor falsely to the mighty gorilla” (p9). 

Actually, the gorilla, mighty though he indeed may be, has relatively small genitalia. This is on account of his polygynous, but non-polyandrous, mating system, which involves minimal sperm competition.[11]

Moreover, the largeness of our brains, in which, according to Morris, we take such pride, may actually be the cause of the largeness of our penes, for which, according to Morris, we have such shame (here, he speaks for few men). 

Thus, large brains required larger heads which, in turn, required larger vaginas in order to successfully birth larger-headed babies. This in turn selected for larger penises to fill the larger vagina. 

In short, the large size, or rather large girth/width, of our penes has less to do with our being the “sexiest primate” and more to do with our being the brainiest.

Female Breasts

In addition to his discussion of human penis size, Morris also argues that various other features of human anatomy that are not usually associated with sex nevertheless evolved, in part, due to their role in sexual signaling. These include our earlobes (p66-7), everted lips (p68-70) and, tentatively and rather bizarrely, perhaps even our large fleshy noses (p67). 

He makes the most developed and persuasive case, however, in respect of another physiological peculiarity of the human species, and of human females in particular, namely the female breasts.

Thus, Morris argues: 

“For our species, breast design is primarily sexual rather than maternal in function” (p106). 

“The evolution of protruding breasts of a characteristic shape appears to be yet another example of sexual signalling” (p70). 

As evidence, he cites the differences in shape between women’s breasts and both the breasts of other primates and the design of baby bottles (p93). In short, the shape of the human breast does not seem ideally conducive to nursing alone. 

The notion that breasts have a secondary function as sexual advertisements is indeed compelling. In most other mammals, large breasts develop only during pregnancy, but human breasts are permanent, developing at puberty, and, except during pregnancy and lactation, composed predominantly of fat not milk (see Møller et al 1995; Manning et al 1997; Havlíček et al 2016). 

On the other hand, it is difficult to envisage how breasts ever first became co-opted as a sexually-selected ornament. 

After all, the presence of developed breasts on a female would originally, as among other primates, have indicated that the female in question was pregnant, and hence infertile. There would therefore initially have been strong selection pressure among males against ever finding breasts sexually attractive, since it would lead to their pursuing infertile women whom they could not possibly impregnate. As a consequence, there would be strong selection against a female ever developing permanent breasts, since it would result in her being perceived as currently infertile and hence unattractive to males.

How then did breasts ever make the switch to a sexually attractive, sexually-selected ornament? This is what George Francis, at his blog, ‘Anglo Reaction’, terms the breast paradox.[12]
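The paradox can be restated as a simple expected-fitness calculation – my own toy formulation, with invented numbers. So long as developed breasts signalled pregnancy, any heritable male attraction to them diverted courtship toward females who could not conceive, and hence reduced male fitness:

```python
# Toy restatement of the 'breast paradox': if developed breasts initially
# signalled pregnancy (as in other apes), a male attracted to them courts
# females who cannot currently conceive. All numbers are invented.
p_conceive_not_pregnant = 0.3   # chance that courtship yields conception
p_conceive_pregnant = 0.0       # a pregnant female cannot conceive again

def expected_conceptions(share_toward_breasted):
    # Before permanent breasts evolved, a 'breasted' female was pregnant.
    return (share_toward_breasted * p_conceive_pregnant
            + (1 - share_toward_breasted) * p_conceive_not_pregnant)

print(expected_conceptions(0.0))  # 0.30 -- ignores breasted females entirely
print(expected_conceptions(0.5))  # 0.15 -- half of his courtship is wasted
# Any heritable attraction to (pregnancy-signalling) breasts lowers male
# fitness and should be selected out -- which is what makes the later
# switch of breasts into a sexual ornament genuinely puzzling.
```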

Morris does not address, nor even draw attention to, or seemingly recognise, this not insignificant problem. However, he does suggest that two other human traits that are, among primates, unique to humans may have facilitated the process. 

Our so-called nakedness (i.e. relative hairlessness as compared to other mammals), the trait that furnished Morris’s book with its title, and Morris himself with his preferred name for our species, is the first of these traits. 

“Swollen breast-patches in a shaggy-coated female would be far less conspicuous as signalling devices, but once the hair has vanished they would stand out clearly” (p70-1). 

Secondly, Morris argues that our bipedalism (i.e. the fact we walk on two legs) and resulting vertical posture, necessarily put the female reproductive organs out of sight underneath a woman when she adopts a standing position, and hence generally out of the sight of potential mates. There was therefore, Morris suggests, a need for some frontal sexual-signaling. 

This, he argues, was further necessitated by what he argues is our species’ natural preference for ventro-ventral (i.e. missionary position) intercourse. 

In particular, Morris argues that human female breasts evolved in order to mimic the appearance of the female buttocks, a form of what he terms ‘self-mimicry’. 

“The protuberant, hemispherical breasts of the female must surely be copies of the fleshy buttocks” (p76). 

Everted Lips 

Interestingly, he makes a similar argument in respect of another trait of humans not shared by other extant primates – namely, our everted lips.

The word ‘everted’ refers to the fact that our lips are turned outwards, as is easily perceived by comparing human lips with the much thinner-appearing lips of our closest non-human relatives

Again, this seems intuitively plausible, since, like female breasts, lips do indeed seem to be a much-sexualized part of the human anatomy, at least in western societies, and in at least some non-western cultures as well, if erotic art is to be taken as evidence.[13]

These everted lips, he argues, evolved to mimic the appearance of the female labia. Again, as with breasts, this was supposedly required because our bipedalism and resulting posture put the female genitals out of sight of most males.

As with Morris’s idea that female breasts evolved to mimic the appearance of the female buttocks, the idea that our lips, and women’s use of lipstick, are designed to imitate the appearance of the female sexual organs has been much mocked.[14]

However, the similarity in appearance of the labia and the human lips can hardly be doubted. After all, it is even attested to in the very etymology of the word ‘labia’, which derives from the Latin word for the lips. 

Of course, everted lips reach their most extreme form, among extant sub-species of human, in black Africans. This, Morris argues, is because: 

“If climatic conditions demand a darker skin, then this will work against the visual signalling capacity of the lips by reducing their colour contrast. If they really are important as visual signals, then some kind of compensating development might be expected, and this is precisely what seems to have occurred, the negroid lips maintaining their conspicuousness by becoming larger and more protuberant. What they have lost in colour contrast, they have made up for in size and shape” (p69-70).

Unfortunately, however, if we look at other relatively dark-skinned, but non-Negroid, populations of humans, the theory receives, at best, only partial support.

On the one hand, Australian Aboriginals, another dark-skinned but unrelated group, do indeed tend to have quite large lips. However, these lips are not especially everted.

On the other hand, however, the dark-skinned Dravidian peoples of South India are not generally especially large-lipped, but are rather quite Caucasoid in facial morphology. Indeed, they, like the generally lighter-complexioned, Indo-European-speaking, ‘Aryan’ populations of North India, were generally (but not always) classified as ‘Caucasoid’ by most early-twentieth-century racial anthropologists, though some suggested otherwise.

At any rate, rejecting the politically-incorrect notion that black Africans are, as a race, somehow more primitive than other humans, Morris instead emphasizes the fact that, in respect of this trait (namely, everted lips), they are actually the most differentiated from non-human primates.  

Thus, all humans, compared to non-human primates, have everted lips, but black African lips are the most everted. Therefore, Morris concludes, using the word ‘primitive’ in its special phylogenetic sense: 

“Anatomically, these negroid characters do not appear to be primitive, but rather represent a positive advance in the specialization of the lip region” (p70).

In other words, whereas whites and Asians may be more advanced than blacks when it comes to intelligence, brain-size, science, technology and building civilizations, when it comes to everted lips, black Africans have us all beaten! 

Female Orgasm

Morris also discusses the function of the female orgasm, a topic which has subsequently been the subject of much speculation and no little controversy among evolutionists.  

Again, Morris suggests that humans’ unusual vertical posture, brought on by our bipedal means of locomotion, may have been central to the evolution of this trait. 

Thus, if a female were to walk off immediately after sexual intercourse had occurred, then: 

“Under the simple influence of gravity the seminal fluid would flow back down the vaginal tract and much of it would be lost” (p79).  

This obviously makes successful impregnation less likely. As a result, Morris concludes: 

“There is therefore a great advantage in any reaction that tends to keep the female horizontal when the male ejaculates and stops copulating” (p79). 

The chief adaptive function of the female orgasm therefore, according to Morris, is the tiredness, and perhaps post-coital tristesse, that immediately follows orgasm, and motivates the female experiencing these emotions to remain in a horizontal position even after intercourse has ended, and hence retain the male ejaculate within her reproductive tract. 

“The violent response of female orgasm, leaving the female sexually satiated and exhausted, has precisely this effect” (p79).[15]

However, there are several problems with Morris’s theory, the first being that it predicts that female orgasm should be confined to humans, since, at least among extant primates, we represent the only bipedal ape.

Morris does indeed argue that the female orgasm is, like our nakedness, bipedal locomotion and large brains, an exclusively human trait, describing how, among most, if not all, non-human primates: 

“At the end of a copulation, when the male ejaculates and dismounts, the female monkey shows little sign of emotional upheaval and usually wanders off as if nothing had happened” (p79). 

Unfortunately for Morris’s theory, however, evidence has subsequently accumulated that some non-human (and non-bipedal) female primates do indeed seem to sometimes experience responses seemingly akin to orgasm during copulation. 

As professor of philosophy Elisabeth Lloyd relates in her book The Case of the Female Orgasm:

“There is robust evidence—developed since Morris wrote—that some nonhuman primate females do have orgasm. The best evidence comes from experiments in which stumptail macaques were wired up so that their heart and respiration rates and the muscle contractions in their uteruses or vaginas could be measured electronically… previous observations by Suzanne Chevalier-Skolnikoff… showed a ‘naturally occurring complete orgasmic behavioral pattern for female stumptails’. She documented three occasions on which a female mounting another female (rubbing her genitals against the back of the mounted female) displayed all the behavioral manifestations of male stumptail orgasm and ejaculation” (The Case of the Female Orgasm: p54-5).

Thus, Alan Dixson reports: 

“Female orgasm is not confined to Homo sapiens. Putatively homologous responses [have] been reported in a number of non-human primates, including stump-tail and Japanese Macaques, rhesus monkeys and chimpanzees… Pre-human ancestors of Homo sapiens, such as the australopithecines, probably possessed a capacity to exhibit female orgasm, as do various extant ape and monkey species. The best documented example concerns the stump tailed macaque (Macaca arctoides), in which orgasmic uterine contractions have been recorded during female-female mounts… as well as during copulation… De Waal… estimates that female stump-tails show their distinctive ‘climax face’ (which correlates with the occurrence of uterine contractions) once in every six copulations. Vaginal spasms were noted in two female rhesus monkeys as a result of extended periods of stimulation (using an artificial penis) by an experimenter… Likewise, a female chimpanzee exhibited rhythmical vaginal contractions, clitoral erection, limb spasms, and body tension in response to manual stimulation of its genitalia… Masturbatory behaviour, accompanied by behavioural and physiological responses indicative of orgasm, has also been noted in Japanese macaques… and chimpanzees” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Thus, in relation to Morris’s theory, Dixson concludes that the theory lacks “comparative depth” because: 

“Monkeys and apes exhibit female orgasm in association with dorso-ventral copulatory postures and an absence of post-mating rest periods” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Certainly, female orgasm, unlike male orgasm, is hardly a prerequisite for successful impregnation. 

Thus, the American physician Robert Latou Dickinson, in his book Human Sex Anatomy (1933), reports that, in a study of a thousand women who attended his medical practice afflicted with so-called ‘frigidity’ (i.e. they were incapable of orgasmic response during intercourse): 

“The frigid were not notably infertile, having the expected quota of living children, and somewhat less than the average incidence of sterility” (Human Sex Anatomy: p92). 

Further problems with Morris’s theory are identified by Elisabeth Lloyd in The Case of the Female Orgasm: Bias in the Science of Evolution, the only book-length treatment of the topic of the evolution of the female orgasm.

In particular, unlike men, women do not, in general, appear to experience sensations of tiredness immediately following orgasm. On the contrary, she reports:

“States of sleepiness and exhaustion [experienced following orgasm] are, in fact, predominantly true for men but not for women” (The Case of the Female Orgasm: p52).

Indeed, she quotes feminist sexologist Shere Hite as reporting, in her famous Hite Report, that the most common post-orgasmic sensations reported by women were “wanting to be close, and ‘feeling strong and wide awake, energetic and alive’”, both of which reactions “represent continued arousal” (Ibid.).

Thus, she reports:

“A sizable proportion of women are not ‘satiated and exhausted’ by orgasm but, rather, energized and aroused. An ‘energized’ woman seems less rather than more likely to lie down” (The Case of the Female Orgasm: p57).

This, in turn, suggests that a female who had experienced orgasm during intercourse would be likely to lose more semen through the force of gravity than a woman who had not experienced orgasm, since, if a person is “energized and aroused”, they are supposedly more, not less, likely to stand up and move around.

Finally, Lloyd repeats the familiar feminist factoid (based on the Kinsey data) that women are actually most likely to achieve orgasm during intercourse using sexual positions where the female partner is on top, where, again, gravitational forces would presumably work against successful conception, writing:

“Given that a relatively low percentage of women have orgasms during intercourse, and that of those who do, a high percentage have them in the superior position, it seems more likely that the occurrence of female orgasm would have the reverse gravitational effect from the one that Morris describes” (The Case of the Female Orgasm: p57).

In conclusion, therefore, the bulk of the evidence seems incompatible with Morris’s superficially plausible gravitational theory of the evolution of the female orgasm, and it must be rejected.

Why then did the female orgasm evolve, not only in humans, but apparently also in other species of primate, if not in other mammals?

In the years since the first publication of Morris’s book, various other theories for the evolution of the female orgasm have been developed by evolutionists.

However, as argued by Donald Symons in his groundbreaking The Evolution of Human Sexuality (which I have reviewed here), the most parsimonious theory of the evolution of the female orgasm remains that it represents simply a non-adaptive byproduct of the male orgasm, which is, of course, itself adaptive (see Sherman 1989; The Case of the Female Orgasm: Bias in the Science of Evolution; see also my discussion here).

The female orgasm and clitoris thus represents, if you like, the female equivalent of male nipples – only more fun.

Hymen

Interestingly, Morris also hypothesizes regarding the evolutionary function of another peculiarity of human female reproductive anatomy which, in contrast to the controversy regarding the evolutionary function, if any, of the female orgasm and clitoris (and of the female breasts), has received surprisingly scant attention from evolutionists – namely, the hymen.

In most mammals, Morris reports, “it occurs as an embryonic stage in the development of the urogenital system” (p82). However, only in humans, he reports, is it, when not ruptured, retained into adulthood. 

As to the means by which it evolved, the trait is, Morris concludes, like our large brains, upright posture and hairlessness, “part of the naked ape’s neoteny” (p82).

However, as with our hairlessness, neoteny is only the means by which this trait was retained into adulthood among humans, not the evolutionary reason for its retention.

In other words, he suggests, the hymen, like other traits retained into adulthood among humans, must serve some evolutionary function. 

What is this evolutionary function? 

Morris suggests that, by making first intercourse painful for females, it deters young women from engaging in intercourse too early, and hence risking pregnancy, without first entering a relationship (‘pair-bond’) of sufficient stability to ensure that male parental investment, and provisioning, will be forthcoming (p73). 

However, the problem with the theory is that the pain experienced during intercourse obviously occurs rather too late to deter first intercourse, because, by the time this pain is experienced, intercourse has already occurred. 

Of course, given our species’ unique capacity for speech and communication, the pain experienced during first intercourse could be communicated to young virginal women through conversation with non-virginal women who had already experienced it.

However, this would be an unreliable method of inducing fear and avoidance regarding first intercourse, especially given the sort of taboos regarding discussion of sexual activities which are common in many cultures. 

At any rate, why would natural, or sexual, selection not instead simply directly select for fear and anxiety regarding first intercourse – i.e. a psychological rather than a physiological adaptation?

After all, as evolutionary psychologists and sociobiologists have convincingly demonstrated, our psychology is no less subject to natural selection than is our physiology. 

Although, as already noted, the evolutionary function, if any, of the female hymen has received surprisingly little attention from evolutionists, I can myself independently formulate at least three alternative hypotheses regarding the evolutionary significance of the hymen. 

First, it may have evolved among humans as a means of advertising to prospective suitors a prospective bride’s chastity, and hence reassuring the suitor of the paternity of offspring.  

This would, in turn, increase the perceived attractiveness of the female in question, and help secure her a better match with a higher-status male, who would then also be more willing to invest in offspring whose paternity is not in doubt, and hence increase her own reproductive success.

Thus, it is notable that, in many cultures, prospective brides are inspected for virginity, a so-called virginity test, sometimes by the prospective mother-in-law or another older woman, before being considered marriageable and accepted as brides. 

Alternatively, and more prosaically, the hymen may simply function to protect against infection, by preventing dirt and germs from entering a woman’s body by this route. 

This, of course, would raise the question as to why, at least according to Morris, the trait is retained into sexual maturity only among humans.

Actually, however, as with his claim that the female orgasm is unique to humans, Morris’s claim that only humans retain the hymen into sexual maturity is disputed by other sources. Thus, for example, Catherine Blackledge reports: 

“Hymens, or vaginal closure membranes or vaginal constrictions, as they are often referred to, are found in a number of mammals, including llamas, guinea-pigs, elephants, rats, toothed whales, seals, dugongs, and some primates, including some species of galagos, or bushbabys, and the ruffed lemur” (The Story of V: p145).

Finally, perhaps even more prosaically, the hymen may simply represent a nonadaptive vestige of the developmental process, or a nonadaptive by-product of our species’ neoteny.

This would be consistent with the apparent variation with which the trait presents itself, suggesting that it has not been subject to strong selection pressure that would have weeded out suboptimal variations.

This then would appear to be the most parsimonious explanation. 

Zoological Nomenclature 

The works on human ethology of both Robert Ardrey and Konrad Lorenz attracted much attention and no little controversy in their day. Indeed, they perhaps attracted even more controversy than Morris’s own ‘The Naked Ape’, not least because they tended to place greater emphasis on humankind’s capacity, and alleged innate proclivity, towards violence.

In contrast, Morris’s own work, placing less emphasis on violence, and more on sex, perhaps jibed better with the zeitgeist of the era, namely the 1960s, with its hippy exhortations to ‘make love not war’. 

Yet, although all these works were first published at around the same time, the mid- to late-sixties (though Ardrey continued publishing books on this subject into the 1970s), Morris’s ‘The Naked Ape’ seems to be the only one of these books that remains widely read, widely known and still in print to this day.

Partly, I suspect, this reflects its brilliant and provocative title, which works on several levels, scientific and literary.  

Morris, as we have seen, justifies referring to humans by this perhaps unflattering moniker on zoological grounds.  

Certainly, he acknowledges that humans possess many other exceptional traits that distinguish us from all other extant apes, and indeed all other extant mammals. 

Thus, we walk on two legs, use and make tools, have large brains and communicate via a spoken language. Thus, the zoologist could refer to us by any number of descriptors – “the vertical ape, the tool-making ape, the brainy ape” are a few of Morris’s own suggestions (p41).  

But, he continues, adopting the disinterested detachment of the proverbial alien zoologist: 

“These were not the first things we noticed. Regarded simply as a zoological specimen in a museum, it is the nakedness that has the immediate impact” (p41).

This name has, Morris observes, several advantages, including “bringing [humans] into line with other zoological studies”, emphasizing the zoological approach, and hence challenging human vanity. 

Thus, he cautions: 

“The naked ape is in danger of being dazzled by [his own achievements] and forgetting that beneath the surface gloss he is still very much a primate. (‘An ape’s an ape, a varlet’s a varlet, though they be clad in silk or scarlet’). Even a space ape must urinate” (p23).

Thus, the title also works on another, metaphorical level, which further contributes to its power.

The title ‘The Naked Ape’ promises, if you like, to strip humanity down in order to reveal the ‘naked’ truth that lies beneath the façade and finery.

Morris’s title reduces us to a specimen, stripped naked on the laboratory table for the purposes of zoological classification and dissection.

Interestingly, humans have historically liked to regard ourselves as superior to other animals in part precisely because we are the only animal that clothes itself.

Thus, besides Adam and Eve, it was only primitive tropical savages who went around in nothing but a loincloth, and they were disparaged as uncivilized precisely on this account.

Yet even tropical savages wore loincloths. Indeed, clothing, in some form, is sometimes claimed to be a human universal.

Animals, on the other hand, go completely unclothed – or so we formerly believed.

But Morris turns this reasoning on its head. In the zoological sense, it is humans who are the naked ones, being largely bereft of hair sufficient to cover our bodies.

Stripping humanity down in this way, Morris reveals the naked truth that, beneath the finery and façade of civilization, we are indeed an animal, an ape, and a naked one at that.

The power of Morris’s chosen title ensures that, even though, like all science, his book has quickly dated, the title itself has stood the test of time and will, I suspect, be remembered, and employed as a descriptor of the human species, long after Morris himself, and the books he authored, are forgotten and cease to be read.

Endnotes

[1] In fact, as I discuss in a later section of this review, it is possible that the female hymen evolved through just such a process, namely as a means of advertising female virginity and premarital chastity (and perhaps implying post-marital fidelity), and hence as a paternity assurance mechanism, which benefited the female by helping secure male parental investment, provisioning and hypergamy.

[2] Morris is certainly right that anthropologists have overemphasized the exotic and unfamiliar (“bizarre mating customs, strange kinship systems, or weird ritual procedures”, as Morris puts it). Partly, this is simply because, when first encountering an alien culture, it is the unfamiliar differences that invariably stand out, whereas the similarities are often the very things which we tend to take for granted.
Thus, for example, on arriving in a foreign country, we are often struck by the fact that everyone speaks an unintelligible foreign language. However, we often take for granted the more remarkable fact that all cultures around the world do indeed have a spoken language, and also that all languages supposedly even share in common a universal grammar.
However, anthropologists have also emphasized the alien and bizarre for other reasons, not least to support theories of radical cultural malleability, sometimes almost to the verge of outright fabrication (e.g. Margaret Mead’s studies in Samoa).

[3] It is true that there has been some significant human evolution since the dawn of agriculture, notably the evolution of lactase persistence in populations with a history of dairy agriculture. Indeed, as Cochran and Harpending emphasize in their book The 10,000 Year Explosion, far from evolution having stopped at the dawn of agriculture or the rise of ‘civilization’, it has in fact sped up, as a natural reflection of the rapid change in environmental conditions that resulted. Thus, as Nicholas Wade concludes in A Troublesome Inheritance, much human evolution has been “recent, copious and regional”, leading to substantial differentiation between populations (i.e. race differences), including in psychological traits such as intelligence. Nevertheless, despite such tinkering, the core adaptations that identify us as a species were undoubtedly molded in ancient prehistory, and are universal across the human species.

[4] However, it is indeed important to recognize that the lifestyle of our own ancestors was not necessarily identical to that of those few extant hunter-gatherer groups that have survived into modern times, not least because the latter tend to be concentrated in marginal and arid environments (e.g. the San people of the Kalahari Desert, Eskimos of the Arctic region, Aboriginal Australians of the Australian outback), with those formerly inhabiting more favorable environments having either themselves transitioned to agriculture or else been displaced or absorbed by more advanced invading agriculturalists with higher population densities and superior weapons and other technologies.

[5] This passage is, of course, sure to annoy feminists (always a good thing), and is likely to be disavowed even by many modern evolutionary psychologists since it relies on a rather crude analogy. However, Morris acknowledges that, since “‘hunting’… has now been replaced by ‘working’”:

“The males who set off on their daily working trips are liable to find themselves in heterosexual groups instead of the old all-male parties. All too often it [the pair bond] collapses under the strain” (p81).

This factor, Morris suggests, explains the prevalence of marital infidelity. It may also explain the recent hysteria, and accompanying witch-hunts, regarding so-called ‘sexual harassment’ in the workplace.
Relatedly, and also likely to annoy feminists, Morris champions the then-popular man the hunter theory of hominid evolution, which posited that the key development in human evolution, and the development of human intelligence in particular, was the switch from a largely, if not wholly, herbivorous diet and lifestyle to one based largely on hunting and the consumption of meat. On this view, it was the cognitive demands that hunting placed on our ancestors that selected for increased intelligence, and the nutritional value of meat that made possible increases in highly metabolically expensive brain tissue.
This theory has since fallen into disfavor. This seems to be primarily because it gives the starring role in human evolution to men, since men do most of the hunting, and relegates women to a mere supporting role. It hence runs counter to the prevailing feminist zeitgeist.
The main substantive argument given against the ‘man the hunter theory’ is that other carnivorous mammals (e.g. lions, wolves) adapted to carnivory without any obvious similar increase in brain-size or intelligence. Yet Morris actually has an answer to this objection.
Our ancestors, fresh from the forests, were relative latecomers to carnivory. Therefore, Morris contends, had we sought to compete with tigers and wolves by mimicking them (i.e. growing our fangs and claws instead of our brains) we would inevitably have been playing a losing game of evolutionary catch-up. 

“Instead, an entirely new approach was made, using artificial weapons instead of natural ones, and it worked” (p22).

However, this theory fails to explain how female intelligence evolved. One possibility is that increases in female intelligence are an epiphenomenal byproduct of selection for male intelligence, rather like the female equivalent of male nipples.
On this view, men would be expected to have higher intelligence than women, just as male nipples (and breasts) are smaller than female nipples, and the male penis is bigger than the female clitoris. That adult men have greater intelligence than adult women is indeed the conclusion of a recent controversial theory (Lynn 1999). However, the difference, if it even exists (which remains unclear), is very small in magnitude, certainly much smaller than the relative difference in size between male and female breasts. There is also evidence that the sexual division of labour between hunting and gathering led to sex differences in spatio-visual intelligence (Eals & Silverman 1994).

[6] Another difference from modern evolutionary psychologists derives from Morris’s ethological approach, which involves a focus on human-typical behaviour patterns. For example, he discusses the significance of body language and facial expressions, such as smiling, which is supposedly homologous with an appeasement gesture (baring clenched teeth, aka a ‘fear grin’) common to many primates, and staring, which represents a form of threat across many species.

[7] Interestingly, however, he acknowledges that this statement does not apply to all human races. Thus, he observes: 

“Negroes have undergone a real as well as an apparent hair loss” (p42).

Thus, it seems blacks, unlike Caucasians, have fewer hairs on their bodies than do chimpanzees. This is further evidence that, contrary to the politically correct orthodoxy, race differences are real and important, though this fact is, of course, played down by Morris and other popular science writers.

[8] Edward O Wilson, for example, in Sociobiology: The New Synthesis (which I have reviewed here) dismisses aquatic ape theory, as then championed by Elaine Morgan in The Descent of Woman, as feminist-inspired pop-science “contain[ing] numerous errors” and as being “far less critical in its handling of the evidence than the earlier popular books”, including, incidentally, that of Morris, who is mentioned by name in the same paragraph (Sociobiology: The New Synthesis: p29).

[9] Actually, I suspect this infamous quotation may be apocryphal, or at best a misconstrued joke. Certainly, while I think Rushton’s theory of race differences (which he calls ‘differential K theory’) is flawed, as I explain in my review of his work, there is nothing in it to suggest a direct trade-off between penis-size and brain-size. Indeed, one problem with Rushton’s theory, or at least his presentation of it, is that he never directly explains how traits such as penis-size actually relate to r/K selection in the first place.
The quotation is usually traced to a hit piece in Rolling Stone, a leftist hippie rag with a notorious reputation for low editorial standards, misinformation and so-called ‘fake news’. However, Jon Entine, in his book on race differences in athletic ability, instead traces it to a supposed interview between Rushton and Geraldo Rivera broadcast on the ‘Geraldo’ show in 1989 (Taboo: Why Black Athletes Dominate Sports: p74).
Interestingly, one study has indeed reported that there is a “demonstrated negative evolutionary relationship”, not between brain-size and penis-size, but rather between brain-size and testicle-size, if only on account of the fact that each contains “metabolically expensive tissues” (Pitnick et al 2006).

[10] Interestingly, Baker and Bellis attribute race differences in penis-size, not to race differences in brain-size, but rather to race differences in birth weight. Thus, they conclude:

“Racial differences in size of penis (Mongoloid < Caucasoid < Negroid…) reflects racial differences in birth weight… and hence presumably, racial differences in size of vagina” (Human Sperm Competition: p171).

[11] In other words, a male silverback gorilla may mate with the multiple females in his harem, but each of the females in his harem likely have sex with only one male, namely that silverback. This means that sperm from rival males are rarely simultaneously present in the same female’s oviduct, resulting in minimal levels of sperm competition, which is known to select for larger testicles in particular, and also often more elaborate penes as well.

[12] An alternative theory for the evolution of permanent fatty breasts in women is that they function analogously to camel humps, i.e. as a storehouse of nutrients to guard against, and provide reserves in the event of, future scarcity or famine. On this view, the sexually dimorphic presentation (i.e. the fact that fatty breasts are largely restricted to women) might reflect the caloric demands of pregnancy. Indeed, this might explain why women have higher levels of fat throughout their bodies. (For a recent review of rival theories of human breast evolution, see Pawłowski & Żelaźniewicz 2021.)

[13] However, to be pedantic, this phraseology is perhaps problematic, since to say that breasts and lips are ‘sexualized’ in western, and at least some non-western, cultures implicitly presupposes that they are not already inherently sexual parts of our anatomy by virtue of biology, which is, of course, precisely what Morris is arguing.

[14] For example, if I recall correctly, extremely annoying, left-wing 1980s-era British comedian Ben Elton once commented in one of his stand-up routines that the male anthropologist (i.e. Morris) who came up with this idea (namely, that lips and lipstick mimicked the appearance of the labia) had obviously never seen a vagina in his life. He also, if I recall correctly, attributed this theory to the supposed male-dominated, androcentric nature of the field of anthropology – an odd notion, given that Morris is not an anthropologist by training, and that cultural anthropology is, in fact, one of the most leftist-dominated, feminist-infested, politically correct fields in the whole of academia, this side of ‘gender studies’, which, in the present, politically-correct world of academia, is saying a great deal.

[15] This theory is rather simpler, and has hence always struck me as more plausible, than the more elaborate, but also more widely championed so-called ‘uterine upsuck hypothesis’, whereby uterine contractions experienced by women during orgasm are envisaged as somehow functioning to aid the transfer of semen deeper into the cervix. This idea is largely based on a single study involving two experiments on a single human female subject (Fox et al 1970). However, two other studies failed to produce any empirical support for the theory (Grafenberg 1950; Masters & Johnson 1966). Baker and Bellis’s methodologically problematic work on what they call ‘flowback’ provides, at best, ambivalent evidence (Baker & Bellis 1993). For detailed critique, see Dixson’s Sexual Selection and the Origins of Human Mating Systems: p74-6.

References 

Baker & Bellis (1993) Human sperm competition: ejaculate manipulation by females and a function for the female orgasm. Animal Behaviour 46:887–909. 
Bowman EA (2008) Why the human penis is larger than in the great apes. Archives of Sexual Behavior 37(3): 361. 
Eals & Silverman (1994) The Hunter-Gatherer theory of spatial sex differences: Proximate factors mediating the female advantage in recall of object arrays. Ethology and Sociobiology 15(2): 95-105.
Fox et al (1970) Measurement of intra-vaginal and intra-uterine pressures during human coitus by radio-telemetry. Journal of Reproduction and Fertility 22: 243–251.
Gallup et al (2004) The human penis as a semen displacement device. Evolution and Human Behavior 24: 277–289.
Gallup & Burch (2004). Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology 2:12-23. 
Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: A preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences 38:749-763 
Goetz et al (2007) Sperm Competition in Humans: Implications for Male Sexual Psychology, Physiology, Anatomy, and Behavior. Annual Review of Sex Research 18:1. 
Grafenberg (1950) The role of urethra in female orgasm. International Journal of Sexology 3:145–148. 
Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures, Evolution and Human Behavior 38(2): 217–226. 
Lynn (1999) Sex differences in intelligence and brain size: A developmental theory. Intelligence 27(1):1-12.
Manning et al (1997) Breast asymmetry and phenotypic quality in women, Ethology and Sociobiology 18(4): 223–236. 
Masters & Johnson (1966) Human Sexual Response (Boston: Little, Brown, 1966).
Mautz et al (2013) Penis size interacts with body shape and height to influence male attractiveness, Proceedings of the National Academy of Sciences 110(17): 6925–30.
Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success, Ethology and Sociobiology 16(3): 207-219. 
Pawłowski & Żelaźniewicz (2021) The evolution of perennially enlarged breasts in women: a critical review and a novel hypothesis. Biological Reviews of the Cambridge Philosophical Society 96(6): 2794-2809.
Pitnick et al (2006) Mating system and brain size in bats. Proceedings of the Royal Society B: Biological Sciences 273(1587): 719-24. 

Pierre van den Berghe’s ‘The Ethnic Phenomenon’: Ethnocentrism and Racism as Nepotism Among Extended Kin

Pierre van den Berghe, The Ethnic Phenomenon (Westport: Praeger, 1987)

Ethnocentrism is a pan-human universal. Thus, a tendency to prefer one’s own ethnic group over and above other ethnic groups is, ironically, one thing that all ethnic groups share in common. 

In ‘The Ethnic Phenomenon’, pioneering sociologist-turned-sociobiologist Pierre van den Berghe attempts to explain this universal phenomenon. 

In the process, he not only provides a persuasive ultimate evolutionary explanation for the universality of ethnocentrism, but also produces a remarkable synthesis of scholarship that succeeds in incorporating virtually every aspect of ethnic relations as they have manifested themselves throughout history and across the world, from colonialism, caste and slavery to integration and assimilation, within this theoretical and explanatory framework. 

Ethnocentrism as Nepotism? 

At the core of Pierre van den Berghe’s theory of ethnocentrism and ethnic conflict is the sociobiological theory of kin selection. According to van den Berghe, racism, xenophobia, nationalism and other forms of ethnocentrism can ultimately be understood as kin-selected nepotism, in accordance with biologist William D Hamilton’s theory of inclusive fitness (Hamilton 1964a; 1964b). 

According to inclusive fitness theory (also known as kin selection), organisms evolved to behave altruistically towards their close biological kin, even at a cost to themselves, because close biological kin share genes in common with one another by virtue of their kinship, and altruism towards close biological kin therefore promotes the survival and spread of these genes. 

Van den Berghe extends this idea, arguing that humans have evolved to behave altruistically, not only towards their close biological relatives, but sometimes also towards their distant biological relatives – namely, members of the same ethnic group as themselves.

Thus, van den Berghe contends: 

“Racial and ethnic sentiments are an extension of kinship sentiments [and] ethnocentrism and racism are… extended forms of nepotism” (p18).

Thus, while social scientists, and social psychologists in particular, rightly emphasize the ubiquity, if not universality, of in-group preference – namely, a preference for and favouring of individuals of the same social group as oneself – they also, in my view, rather underplay the extent to which the group identities that have led to the most conflict, animosity, division and discrimination throughout history and across the world, and that are most impervious to resolution, are ethnic identities.

Thus, divisions such as those between social classes, the sexes, different generations, members of different political factions, youth subcultures (e.g. between ‘mods’ and ‘rockers’) or supporters of different sports teams may indeed lead to substantial conflict, at least in the short-term, and are often cited as quintessential exemplars of ‘tribal’ identity and conflict.

Indeed, social psychologists emphasize that individuals evince an in-group preference even in what they refer to as the minimal group situation – namely, where experimental subjects have been assigned to one group or another on the basis of wholly arbitrary, trivial or even entirely fictitious criteria.

However, in the real world, the most violent and intransigent of group conflicts almost invariably seem to be those between ethnic groups – namely groups to which a person is assigned at birth, and where this group membership is passed down in families, from parent to offspring, in a quasi-biological fashion, and where group identity is based on a perception of shared kinship.

In contrast, aspects of group identity that vary even between individuals within a single family, including those that are freely chosen by individuals, tend to be somewhat muted in intensity, perhaps precisely because most people share bonds with close family members of a different group identity.

Thus, there has never, to my knowledge, been a civil war arising from conflict between the sexes, or between supporters of one or another football team.[1]

Ethnic Groups as Kin Groups?

Before reading van den Berghe’s book, I was skeptical regarding whether the degree of kinship shared among co-ethnics would ever be sufficient to satisfy Hamilton’s rule, whereby, for altruism to evolve, the cost of the altruistic act to the altruist, measured in terms of reproductive success, must be outweighed by the benefit to the recipient, also measured in terms of reproductive success, multiplied by the degree of relatedness of the two parties (Brigandt 2001; cf. Salter 2008; see also On Genetic Interests). 
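In symbols – a standard formulation of Hamilton’s rule, rather than van den Berghe’s own notation – the condition for altruism to evolve is:

$$ rB > C $$

where $r$ is the coefficient of relatedness between altruist and recipient, $B$ is the benefit to the recipient and $C$ is the cost to the altruist, both measured in units of reproductive success. Thus, for full siblings ($r = \tfrac{1}{2}$), self-sacrifice pays, in evolutionary terms, only where it confers on the recipient more than twice the reproductive benefit that it costs the altruist.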

Thus, Brigandt (2001) takes van den Berghe to task for his formulation of what the latter catchily christens “the biological golden rule”, namely: 

“Give unto others as they are related unto you” (p20).[2]

However, contrary to both critics of his theory (e.g. Brigandt 2001) and others developing similar ideas (e.g. Rushton 2005; Salter 2000), van den Berghe is actually agnostic on the question of whether ethnocentrism is ever actually adaptive in modern societies, where the shared kinship of large nations or ethnic groups is, as van den Berghe himself readily acknowledges, “extremely tenuous at best” (p243). Thus, he concedes: 

“Clearly, for 50 million Frenchmen or 100 million Japanese, any common kinship that they may share is highly diluted … [and] when 25 million African-Americans call each other ‘brothers’ and ‘sisters’, they know that they are greatly extending the meaning of these terms” (p27).[3]
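Some simple pedigree arithmetic – a standard result of kinship theory, not a calculation van den Berghe himself performs – illustrates just how rapid this dilution is:

$$ r_{\text{full siblings}} = \tfrac{1}{2}, \qquad r_{n\text{th cousins}} = \tfrac{1}{2}\left(\tfrac{1}{4}\right)^{n} $$

Thus, first cousins share $r = \tfrac{1}{8}$, second cousins $r = \tfrac{1}{32}$, and by the time two co-ethnics are mere tenth cousins, $r$ is of the order of $10^{-7}$ – far too small, on its face, ever to satisfy Hamilton’s rule for any realistic ratio of benefit to cost.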

Instead, van den Berghe suggests that nationalism and racism may reflect the misfiring of a mechanism that evolved when our ancestors still lived in small kin-based groups of hunter-gatherers that represented little more than extended families (p35; see also Tooby and Cosmides 1989; Johnson 1986).

Thus, van den Berghe explains: 

“Until the last few thousand years, hominids interacted in relatively small groups of a few score to a couple of hundred individuals who tended to mate with each other and, therefore, to form rather tightly knit groups of close and distant kin” (p35).

Therefore, in what evolutionary psychologists now call the environment of evolutionary adaptedness or EEA:

“The natural ethny [i.e. ethnic group] in which hominids evolved for several thousand millennia probably did not exceed a couple of hundred individuals at most” (p24).

Thus, van den Berghe concludes: 

“The primordial ethny is thus an extended family: indeed, the ethny represents the outer limits of that inbred group of near or distant kinsmen whom one knows as intimates and whom therefore one can trust” (p25).

On this view, ethnocentrism was adaptive when we still resided in such groups, where members of our own clan or tribe were indeed closely biologically related to us, but is often maladaptive in contemporary environments, where our ethnic group may include literally millions of people. 

Another, not dissimilar, theory has it that racism in particular might reflect the misfiring of an adaptation that uses phenotype matching, especially physical resemblance, as a form of kin recognition.

Thus, Richard Dawkins, in his seminal The Selfish Gene (which I have reviewed here), cautiously and tentatively speculates:

“Conceivably, racial prejudice could be interpreted as an irrational generalization of a kin-selected tendency to identify with individuals physically resembling oneself, and to be nasty to individuals different in appearance” (The Selfish Gene: p100).

Certainly, van den Berghe takes pains to emphasize that ethnic sentiments are vulnerable to manipulation – not least by exploitative elites who co-opt kinship terms such as ‘motherland’, ‘fatherland’ and ‘brothers-in-arms’ to encourage self-sacrifice, especially during wartime (p35; see also Johnson 1987; Johnson et al 1987; Salmon 1998).

However, van den Berghe cautions, “Kinship can be manipulated but not manufactured [emphasis in original]” (p27). Thus, he observes how: 

“Queen Victoria could cut a motherly figure in England; she even managed to proclaim her son the Prince of Wales; but she could never hope to become anything except a foreign ruler of India; [while] the fiction that the Emperor of Japan is the head of the most senior lineage descended from the common ancestor of all Japanese might convince the Japanese peasant that the Emperor is an exalted cousin of his, but the myth lacks credibility in Korea or Taiwan” (p62-3).

This suggests that the European Union, while it may prove successful as a customs union, a single market and even an economic union, and while integration in other non-economic spheres may also prove a success, will likely never command the sort of loyalty and allegiance that a nation-state holds over its people, including, sometimes, the willingness of men to fight and lay down their lives for its sake. This is because its members come from many different cultures and ethnicities, and indeed speak many different languages.

For van den Berghe, national identity cannot be rooted in anything other than a perception of shared ancestry or kinship. Thus, he observes: 

“Many attempts to adopt universalistic criteria of ethnicity based on legal citizenship or acquisition of educational qualifications… failed. Such was the French assimilation policy in her colonies. No amount of proclamation of Algérie française could make it so” (p27).

Thus, so-called civic nationalism, whereby national identity is based, not on ethnicity, but rather, supposedly, on a shared commitment to certain common values and ideals (democracy, the ‘rule of law’ etc.), as encapsulated by the notion of America as a ‘proposition nation’, is, for van den Berghe, a complete non-starter.

Yet this is today regarded as the sole basis for national identity and patriotic feeling that is recognised as legitimate, not only in the USA, but also in all other contemporary western polities, where any assertion of racial nationalism or a racially-based or ethnically-based national identity is, at least for white people, anathema and beyond the pale.

Moreover, due to the immigration policies of previous generations of western political leaders, policies that largely continue today, all contemporary western polities are now heavily multi-ethnic and multi-racial, such that any sense of national identity based on race or ethnicity is arguably untenable, as it would necessarily exclude a large proportion of their populations.

On the other hand, however, van den Berghe’s reasoning also suggests that the efforts of some white nationalists to construct a pan-white, or pan-European, ethnic identity are also, like the earlier efforts of Japanese imperialist propagandists to create a pan-Asian identity, and of Marcus Garvey’s UNIA to construct a pan-African identity, likely to end in failure.[4]

Racism vs Ethnocentrism 

Whereas ethnocentrism is therefore universal, adaptive and natural, van den Berghe denies that the same can be said for racism:

“There is no evidence that racism is inborn, but there is considerable evidence that ethnocentrism is” (p240).

Thus, van den Berghe concludes:

“The genetic propensity is to favor kin, not those who look alike” (p240).[5]

As evidence, he cites:

“The ease with which parental feelings take precedence over racial feeling in cases of racial admixture” (p240).

In other words, fathers who sire mixed-race offspring with women of other races, and the mothers of such offspring, often seemingly love and care for the resulting offspring just as intensely as do parents whose offspring are of the same race as themselves.[6]

Thus, cultural, rather than racial, markers are typically adopted to distinguish ethnic groups (p35). These include: 

  • Clothing (e.g. hijabs, turbans, skullcaps);
  • Bodily modification (e.g. tattoos, circumcision); and 
  • Behavioural criteria, especially language and dialect (p33).

Bodily modification and language represent particularly useful markers because they are difficult to fake, bodily modification because it is permanent and hence represents a costly commitment to the group (in accordance with Zahavi’s handicap principle), and language/dialect, because this is usually acquirable only during a critical period during childhood, after which it is generally not possible to achieve fluency in a second language without retaining a noticeable accent. 

In contrast, racial criteria, as a basis for group affiliation, are, van den Berghe reports, actually quite rare:

“Racism is the exception rather than the rule in intergroup relations” (p33).

Racism is also a decidedly modern phenomenon. 

This is because, prior to recent technological advances in transportation (e.g. ocean-going ships, aeroplanes), members of different races (i.e. groups distinguishable on the basis of biologically inherited physiological traits such as skin colour, nose shape, hair texture etc.) were largely separated from one another by the very geographic barriers (e.g. deserts, oceans, mountain ranges) that reproductively isolated them from one another and hence permitted their evolution into distinguishable races in the first place. 

Moreover, when different races did make contact, then, in the absence of strict barriers to exogamy and miscegenation (e.g. the Indian caste system), racial groups typically interbred with one another and hence became phenotypically indistinguishable from one another within just a few generations.

This, van den Berghe explains, is because: 

“Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109).

Therefore, in the ancestral environment for which our psychological adaptations are designed (i.e. before the development of ships, aeroplanes and other methods of long-distance intercontinental transportation), different races did not generally coexist in the same locale. As a result, van den Berghe concludes: 

“We have not been genetically selected to use phenotype as an ethnic marker, because, until quite recently, such a test would have been an extremely inaccurate one” (p240).

Humans, then, have simply not had sufficient time to have evolved a domain-specific ‘racism module’ as suggested by some researchers.[7]

Racism is therefore, unlike ethnocentrism, not an innate instinct, but rather “a cultural invention” (p240). 

However, van den Berghe rejects the fashionable, politically correct notion that racism is “a western, much less a capitalist monopoly” (p32). 

On the contrary, racism, while not innate, is not a unique western invention but rather a recurrent reinvention, one that almost invariably arises where phenotypically distinguishable groups come into contact with one another, if only because:

“Genetically inherited phenotypes are the easiest, most visible and most reliable predictors of group membership” (p32).

For example, van den Berghe describes the relations between the Tutsi, Hutu and Pygmy Twa of Rwanda and neighbouring regions as “a genuine brand of indigenous racism” which, according to van den Berghe, developed quite independently of any western colonial influence (p73).[8]

Moreover, where racial differences are the basis for ethnic identity, the result is, van den Berghe claims, ethnic hierarchies that are particularly rigid, intransigent and impermeable.

For van den Berghe, this then explains the failure of African-Americans to wholly assimilate into the US melting pot in stark contrast to successive waves of more recently-arrived European immigrants. 

Thus, van den Berghe observes: 

“Blacks who have been English-speaking for several generations have been much less readily assimilated in both England… and the United States than European immigrants who spoke no English on arrival” (p219).

Thus, language barriers often break down within a generation. 

As Judith Harris emphasizes in support of peer group socialization theory, the children of immigrants whose parents are not at all conversant in the language of their host culture nevertheless typically grow up to speak the language of their host culture rather better than they do the first language of their parents, even though the latter was the cradle tongue to which they were first exposed, and first learnt to speak, inside the family home (see The Nurture Assumption: which I have reviewed here). 

As van den Berghe observes: 

“It has been the distressing experience of millions of immigrant parents that, as soon as their children enter school in the host country, the children begin to resist speaking their mother tongue” (p258).

While displeasing to those parents who wish to pass on their language, culture and traditions to their offspring, this response is wholly adaptive from the perspective of the offspring themselves:  

“Children quickly discover that their home language is a restricted medium that [is] not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (p258).[9]

However, whereas one can learn a new language, it is not usually possible to change one’s race – the efforts of Rachel Dolezal, Elizabeth Warren, Jessica Krug and Michael Jackson notwithstanding. That said, due to the one-drop rule and the history of miscegenation in America, passing is sometimes possible (see below).

Instead, phenotypic (i.e. racial) differences can be eradicated only after many generations of miscegenation, and sometimes, as in countries like the USA and Brazil, not even then.

Meanwhile, van den Berghe observes, culinary differences are often the last aspect of immigrant culture to resist assimilation. However, increasingly even these become only ‘ceremonial’ differences reserved for family gatherings (p260).

Thus, van den Berghe surmises, Italian-Americans probably eat hamburgers as often as Americans of any other ethnic background, but at family gatherings they still revert to pasta and other traditional Italian cuisine.

Yet even culinary differences eventually disappear. Thus, in both Britain and America, sausage has almost completely ceased to be thought of as a distinctively German dish (as have hamburgers, originally thought to have been named in reference to the city of Hamburg) and now pizza is perhaps on the verge of losing any residual association with Italians. 

Is Racism Always Worse than Ethnocentrism? 

Yet if racially-based ethnic hierarchies are particularly intransigent and impermeable, they are also, van den Berghe claims, “peculiarly conflict-ridden and unstable” (p33).

Thus, van den Berghe seems to believe that racial prejudice and animosity tends to be more extreme and malevolent in nature than mere ethnocentrism as exists between different ethnic groups of the same race (i.e. not distinguishable from one another on the basis of inherited phenotypic traits such as skin colour). 

For example, van den Berghe claims that, during World War Two: 

“There was a blatant difference in the level of ferociousness of American soldiers in the Pacific and European theaters… The Germans were misguided relatives (however distant), while the ‘Japs’ or the ‘Nips’ were an entirely different breed of inscrutable, treacherous, ‘yellow little bastards.’ This was reflected in differential behavior in such things as the taking (versus killing) of prisoners, the rhetoric of war propaganda (President Roosevelt in his wartime speeches repeatedly referred to his enemies as ‘the Nazis, the Fascists, and the Japanese’), the internment in ‘relocation camps’ of American citizens of Japanese extraction, and in the use of atomic weapons” (p57).[10]

Similarly, in his chapter on ‘Colonial Empires’, by which he means “imperialism over distant peoples who usually live in noncontiguous territories and who therefore look quite different from their conquerors, speak unrelated languages, and are so culturally alien to their colonial masters as to provide little basis for mutual understanding”, van den Berghe writes: 

“Colonialism is… imperialism without the restraints of common bonds of history, culture, religion, marriage and blood that often exist when conquest takes place between neighbors” (p85).

Thus, he claims: 

“What makes for the special character of the colonial situation is the perception by the conqueror that he is dealing with totally unrelated, alien and, therefore, inferior people. Colonials are treated as people totally beyond the pale of kin selection” (p85).

However, I am unpersuaded by van den Berghe’s claim that conflict between more distantly related ethnic groups is always, or even typically, more brutal than that among biologically and culturally more closely related groups. 

After all, even conquests of neighbouring peoples, identical in race, if not always in culture, to the conquering group, are often highly brutal, for example the British in Ireland or the Japanese in Korea and China during the first half of the twentieth century. 

Indeed, many of the most intense and intractable ethnic conflicts are those between neighbours and ethnic kin, who are racially (and culturally) very similar to one another. 

Thus, for example, Catholics and Protestants in Northern Ireland, Greeks and Turks in Cyprus, Bosnians, Croats, Serbs and Albanians in the Balkans, and even Jews and Palestinians in the Middle East are all racially and genetically quite similar to one another, and also share many aspects of their culture. (The same is true, to give a topical example at the time of writing, of Ukrainians and Russians.) However, this has not noticeably ameliorated the nasty, intransigent and bloody conflicts that have been, and continue to be, waged among them.

Of course, the main reason that most ethnic conflict occurs between close neighbours is because neighbouring groups are much more likely to come into contact, and hence into conflict, with one another, especially over competing claims to land.[11]

Yet these same neighbouring groups are also likely to be related to one another, both culturally and genetically, owing both to shared origins and to the illicit intermarriage (or miscegenation) and cultural borrowing that inevitably occur even among the most hostile of neighbours.[12]

Nevertheless, the continuation of intense ethnic animosity between ethnic groups who are genetically close to one another seems to pose a theoretical problem, not only for van den Berghe’s theory, but also, to an even greater degree, for Philippe Rushton’s so-called genetic similarity theory (which I have written about here), which argues that conflict between different ethnic groups is related to their relative degree of genetic differentiation from one another (Rushton 1998a; 1998b; 2005).

It also poses a problem for the argument of political scientist Frank K Salter, who argues that populations should resist immigration by alien immigrants proportionally to the degree to which the alien immigrants are genetically distant from themselves (On Genetic Interests; see also Salter 2002). 

Assimilation, Acculturation and the American Melting Pot 

Since racially-based hierarchies result in ethnic boundaries that are both “peculiarly conflict-ridden and unstable” and also peculiarly rigid and impermeable, van den Berghe controversially concludes:

“There has never been a successful multiracial democracy” (p189).[13]

Of course, in assessing this claim, we must recognize that ‘success’ is not only a matter of degree, but can also be measured on several different dimensions.

Thus, many people would regard the USA as the quintessential “successful… democracy”, even though the US has been multiracial, to some degree, for the entirety of its existence as a nation. 

Certainly, the USA has been successful economically, and indeed militarily.

However, the US has also long been plagued by interethnic conflict, and has yet to find a way to manage this conflict, especially that between blacks and whites.

The USA is also afflicted with a relatively high rate of homicide and gun crime as compared to other developed economies, as well as low levels of literacy and numeracy and educational attainment. Although it is politically incorrect to acknowledge as much, these problems also likely reflect the USA’s ethnic diversity, in particular its large black underclass.

Indeed, as van den Berghe acknowledges, even societies divided by mere ethnicity rather than race seem highly conflict-prone (p186). 

Thus, assimilation, when it does occur, occurs only gradually, and only under certain conditions, namely when the group which is to be assimilated is “similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (p219). 

Thus, van den Berghe observes: 

“People tend to assimilate and acculturate when their ethny [i.e. ethnic group] is geographically dispersed (often through migration), when they constitute a numerical minority living among strangers, when they are in a subordinate position and when they are allowed to assimilate by the dominant group” (p185).

Moreover, van den Berghe is careful to distinguish what he calls assimilation from mere acculturation.

The latter, acculturation, involves a subordinate group gradually adopting the norms, values, language, cultural traditions and folkways of the dominant culture into whom they aspire to assimilate. It is therefore largely a unilateral process.[14]

In contrast, however, assimilation goes beyond this and involves members of the dominant host culture also actually welcoming, or at least accepting, the acculturated newcomers as a part of their own community.  

Thus, van den Berghe argues that host populations sometimes resist the assimilation of even wholly acculturated and hence culturally indistinguishable out-groups. Examples of groups excluded in this way include, according to van den Berghe, pariah castes, such as the untouchable dalits of the Indian subcontinent, the Burakumin of Japan and blacks in the USA.[15]

In other words, assimilation, unlike acculturation, is a two-way street. Thus, just as it ‘takes two to tango’, so assimilation is very much a bilateral process:

“It takes two to assimilate” (p217).

On the one hand, minority groups may sometimes themselves resist assimilation, or even acculturation, if they perceive themselves as better off maintaining their distinct identity. This is especially true of groups who perceive themselves as being, in some respects, better off than the host outgroup into whom they refuse to be absorbed.

Thus, middleman minorities, or market-dominant minorities, such as Jews in the West, the overseas Chinese in contemporary South-East Asia, the Lebanese in West Africa and South Asians in East Africa, being, on average, much wealthier than the bulk of the host populations among whom they live, often perceive no social or economic advantage to either assimilation or acculturation and hence resist the process, instead stubbornly maintaining their own language and traditions and marrying only among themselves.

The same is also true, more obviously, of alien ruling elites, such as the colonial administrators, and settlers, in European colonial empires in Africa, India and elsewhere, for whom assimilation into native populations would have been anathema.

‘Passing’, ‘Pretendians’ and ‘Blackfishing’

Interestingly, just as market-dominant minorities, middleman minorities, and European colonial rulers usually felt no need to assimilate into the host society in whose midst they lived, because to do so would have endangered their privileged position within this host society, so recent immigrants to America may no longer perceive any advantage to assimilation. 

On the contrary, there may now be an economic disincentive operating against assimilation, at least if assimilation means forgoing the right to benefit from affirmative action in employment and college admissions.

Thus, in the nineteenth and early twentieth centuries, the phenomenon of passing, at least in America, typically involved non-whites, especially light-skinned mixed-race African-Americans, attempting to pass as white or, if this were not realistic, sometimes as Native American.  

Some non-whites, such as Bhagat Singh Thind and Takao Ozawa, even brought legal actions in order to be racially reclassified as ‘white’ in order to benefit from America’s then overtly racialist naturalization law.

Contemporary cases of passing, however, though rarely referred to by this term, typically involve whites themselves attempting to somehow pass themselves off as some variety of non-white (see Hannam 2021). 

Recent high-profile examples have included Rachel Dolezal, Elizabeth Warren and Jessica Krug.

Interestingly, all three of these women were both employed in academia and involved in leftist politics – two spheres in which adopting a non-white identity is likely to be especially advantageous, given the widespread adoption of affirmative action in college admissions and appointments, and the rampant anti-white animus that infuses so much of academia and the cultural Marxist left.[16]

Indeed, the phenomenon is now so common that it even has its own associated set of neologisms, such as Pretendian, ‘blackfishing’ and, in Australia, box-ticker.[17]

Indeed, one remarkable recent survey purported to uncover that fully 34% of white college applicants in the United States admitted to lying about their ethnicity on their applications, in most cases either to improve their chances of admission or to qualify for financial aid

Although Rachel Dolezal, Elizabeth Warren and Jessica Krug were all women, this survey found that white male applicants were even more likely to lie about their ethnicity than were white female applicants, with only 16% of white female applicants admitting to lying, as compared to nearly half (48%) of white males.[18]
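As a rough consistency check – assuming, purely for illustration, that male and female applicants were about equally represented in the sample, a breakdown the survey write-up may not specify – the headline figure is roughly recoverable from the sex-specific ones:

$$ \tfrac{1}{2}(16\%) + \tfrac{1}{2}(48\%) = 32\% \approx 34\% $$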

This is, of course, consistent with the fact that it is white males who are the primary victims of affirmative action and other forms of discrimination.  

This strongly suggests that, whereas there were formerly social (and legal) benefits associated with identifying as white, today the advantages instead accrue to those able to assume a non-white identity.

For all the talk of so-called ‘white privilege’, when whites and mixed-race people, together with others of ambiguous racial identity, preferentially choose to pose as non-white in order to take advantage of the perceived benefits of assuming such an identity, they are voting with their feet and thereby demonstrating what economists call revealed preferences

This, of course, means that recent immigrants to America, such as Hispanics, will have rather less incentive to integrate into the American mainstream than did earlier waves of European immigrants, such as the Irish, Poles, Jews and Italians, the latter having been, primarily, the victims of discrimination rather than its beneficiaries.

After all, who would want to be another boring, white ‘Anglo’ or unhyphenated American when to do so would presumably mean relinquishing any right to benefit from affirmative action in job recruitment or college admissions, not to mention becoming a part of the hated white ‘oppressor’ class.

In short, ‘white privilege’ isn’t all it’s cracked up to be. 

This perverse incentive against assimilation obviously ought to be worrying to anyone concerned with the future of America as a stable, unified polity.

Ethnostates – or Consociationalism

Given the ubiquity of ethnic conflict, and the fact that assimilation occurs, if at all, only gradually and, even then, only under certain conditions, a pessimist (or indeed a racial separatist) might conclude that the only way to prevent ethnic conflict is for different ethnic groups to be given separate territories with complete independence and territorial sovereignty. 

This would involve the partition of the world into separate ethnically homogenous ethnostates, as advocated by racial separatists and many in the alt-right. 

Yet, quite apart from the practical difficulties such an arrangement would entail, not least the need for large-scale forcible displacements of populations, this ‘universal nationalism’, as championed by political scientist Frank K Salter among others, would arguably only shift the locus of ethnic conflict from within the borders of a single multi-ethnic state to between those of separate ethnostates – and conflict between states can be just as destructive as conflict within states, as countless wars between states throughout history have amply proven.  

In the absence of assimilation, then, perhaps the fairest and least conflictual solution is what van den Berghe terms consociationalism. This term refers to a form of ethnic power-sharing, whereby elites from both groups agree to share power, each usually retaining a veto regarding major decisions, and there is proportionate representation for each group in all important positions of power.

This seems to be roughly the basis of the power-sharing agreement imposed on Northern Ireland in the Good Friday Agreement, which was largely successful in bringing an end to the ethnic conflict known as ‘the Troubles’.[19]

On the other hand, power-sharing was explicitly rejected by both the ANC and the international anti-apartheid movement as a solution in another ethnically-divided polity, namely South Africa, in favour of majority rule, even though the result has been a situation very similar to that which prevailed in Northern Ireland before the Troubles, namely an effective one-party state, with a single party in power for successive decades and institutionalized discrimination against minorities.[20]

Consociationalism, or ethnic power-sharing, is also arguably the model towards which the USA and other western polities are increasingly moving, with quotas and so-called ‘affirmative action’ increasingly replacing the earlier ideals of appointment by merit, color blindness and freedom of association, and multiculturalism and cultural pluralism replacing the earlier ideal of assimilation.

Perhaps the model consociationalist democracy is van den Berghe’s own native Belgium, where, he reports: 

“All the linguistic, class, religious and party-political quarrels and street demonstrations have yet to produce a single fatality” (p199).[21]

Belgium is, however, very much the exception rather than the rule and, at any rate, though peaceful, remains a deeply divided society.

Indeed, power-sharing institutions, in giving official, institutional recognition to the existing ethnic divide, serve only to institutionalize, and hence reinforce and ossify, that divide, making successful integration and assimilation almost impossible – and certainly even less likely to occur than it would have been in the absence of such institutional arrangements.

Moreover, consociationalism can be maintained, van den Berghe emphasizes, only in a limited range of circumstances, the key criterion being that the groups in question are equal, or almost equal, to one another in status, and not organized into an ethnic hierarchy. 

However, even when the necessary conditions are met, it invariably involves a precarious balancing act. 

Just how precarious is illustrated by the fate of other formerly stable consociationalist states. Thus, van den Berghe notes the irony that earlier writers on the topic had cited Lebanon as “a model [consociationalist democracy] in the Third World” just a few years before the Lebanese Civil War broke out in the 1970s (p191).

His point is, ironically, only strengthened by the fact that, in the three decades since his book was first published, two of his own examples of consociationalism, namely the USSR and Yugoslavia, have themselves descended into civil war and fragmented along ethnic lines.

Slavery and Other Recurrent Situations  

In the central section of the book, van den Berghe discusses such historically recurrent racial relationships as “slavery”, “middleman minorities”, “caste” and “colonialism”.

In large part, his analyses of these institutions and phenomena do not depend on his sociobiological theory of ethnocentrism, and are worth reading even for readers unconvinced by this theory – or even for readers skeptical of sociobiology and evolutionary psychology altogether.

Nevertheless, the sociobiological model continues to guide his analysis. 

Take, for example, his chapter on slavery. 

Although the overtly racial slavery of the New World was quite exceptional, slavery often has an ethnic dimension, since slaves are often captured during warfare from among enemy groups.

Indeed, the very word slave is derived from the ethnonym, Slav, owing to the frequency with which the latter were captured as slaves, by both Christians and Muslims.[22]

In particular, van den Berghe argues that: 

“An essential feature of slave status is being torn out of one’s network of kin selection. This condition generally results from forcible removal of the slave from his home group by capture and purchase” (p120).

This then partly explains, for example, why European settlers were far less successful in enslaving the native inhabitants of the Americas than they were in exploiting the labour of African slaves who had been shipped across the Atlantic, far from their original kin groups, precisely for this purpose.[23]

Thus, for van den Berghe, the quintessential slave is: 

“Not only involuntarily among ethnic strangers in a strange land: he is there alone, without his support group of kinsmen and fellow ethnics” (p115).

Here van den Berghe seemingly anticipates the central insight of Jamaican sociologist Orlando Patterson, who, in his comparative study of slavery, Slavery and Social Death, terms this defining characteristic of slavery natal alienation.[24]

This, however, is likely to be only a temporary condition since, if allowed to reproduce, slaves would gradually, over time, put down roots, produce new families and, indeed, whole communities of slaves.[25]

When this occurs, however, slaves gradually, over generations, cease to be true slaves. The result is that: 

“Slavery can long endure as an institution in a given society, but the slave status of individuals is typically only semipermanent and nonhereditary… Unless a constantly renewed supply of slaves enters a society, slavery, as an institution, tends to disappear and transform itself into something else” (p120).

This then explains the gradual transformation of slavery during the medieval period into serfdom in much of Europe, and perhaps also the emergence of some pariah castes such as the untouchables of India. 

Paradoxically, van den Berghe argues that racism became particularly virulent in the West precisely because of Western societies’ ostensible commitment to notions of liberty and the rights of man, notions obviously incompatible with slavery. 

Thus, whereas most civilizations simply took the institution of slavery for granted, feeling no especial need to justify the practice, western civilization, given its ostensible commitment to such lofty notions as individual liberty and the equality of man, was always on the defensive, feeling a constant need to justify and defend slavery. 

The main justification hit upon was racialism and theories of racial superiority:

“If it was immoral to enslave people, but if at the same time it was vastly profitable to do so, then a simple solution to the dilemma presented itself: slavery became acceptable if slaves could somehow be defined as somewhat less than fully human” (p115).

This then explains much of the virulence of western racialism during the eighteenth, nineteenth and even early-twentieth centuries.[26]

Another important, and related, ideological justification for slavery was what van den Berghe refers to as ‘paternalism’. Thus, he observes that:

“All chattel slave regimes developed a legitimating ideology of paternalism” (p131).

Thus, in the American South, the “benevolent master” was portrayed as a protective “father figure”, while slaves were portrayed as childlike and incapable of living an independent existence, and hence as benefiting from their own enslavement (p131).

This, of course, was a nonsense. As van den Berghe cynically observes: 

“Where the parentage was fictive, so, we may assume, is the benevolence” (p131).

Thus, exploitation was, in sociobiological terms, disguised as kin-selected parental benevolence.

However, despite the dehumanization of slaves, the imbalance of power between slave and master, together with men’s innate and evolved desire for promiscuity, made the sexual exploitation of female slaves by male masters all but inevitable.[27]

As van den Berghe observes: 

“Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109).

Thus, he notes the hypocrisy whereby: 

“Dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women” (p33).

The result was that the fictive ideology of ‘paternalism’ that served to justify slavery often gave way to literal paternity of the next generation of the slave population. 

This created two problems. First, it made the racial justification for slavery, namely the ostensible inferiority of black people, ring increasingly hollow, as ostensibly ‘black’ slaves acquired greater European ancestry, lighter skins and more Caucasoid features with each successive generation of miscegenation.

Second, and more important, it also meant that the exploitation of this next generation of slaves by their owners potentially violated the logic of kin selection, because: 

“If slaves become kinsmen, you cannot exploit them without indirectly exploiting yourself” (p134).[28]

This, van den Berghe surmises, led many slave owners to free those among the offspring of slave women whom they themselves, or their male relatives, had fathered. As evidence, he observes:  

“In all [European colonial] slave regimes, there was a close association between manumission and European ancestry. In 1850 in the United States, for example, an estimated 37% of free ‘negroes’ had white ancestry, compared to about 10% of the slave population” (p132).

This leads van den Berghe to conclude that many such free people of color – who were referred to as people of color precisely because their substantial degree of white ancestry precluded any simple identification as black or negro – had been freed by their owners precisely because their owners were now also their kinsmen. Indeed, many may have been freed by the very slave-master who had fathered them.

Thus, to give a famous example, Thomas Jefferson is thought to have fathered six offspring, four of whom survived to adulthood, with his slave, Sally Hemings – who was herself already three-quarters white, and indeed Jefferson’s wife’s own half-sister, on account of miscegenation in previous generations. 

Of these four surviving offspring, two were allowed to escape, probably with Jefferson’s tacit permission or at least acquiescence, while the remaining two were freed upon his death in his will.[29]

This seems to have been a common pattern. Thus, van den Berghe reports: 

“Only about one tenth of the ‘negro’ population of the United States was free in 1860. A greatly disproportionate number of them were mulattoes, and, thus, presumably often blood relatives of the master who emancipated them or their ancestors. The only other slaves who were regularly [freed] were old people past productive and reproductive age, so as to avoid the cost of feeding the aged and infirm” (p129).

Yet this made the continuance of slavery almost impossible, because, with each new generation, more and more slaves would be freed.

Other slave systems got around this problem by continually capturing or importing new slaves in order to replenish the slave population. However, this option was denied to American slaveholders by the abolition of the slave trade in 1807.

Instead, the Americans were unique in attempting to ‘breed’ slaves. This leads van den Berghe to conclude that: 

“By making the slave woman widely available to her master…Western slavery thus literally contained the genetic seeds of its own destruction” (p134).[30]

Synthesising Marxism and Sociobiology 

Given the potential appeal of his theory to nationalists, and even to racialists, it is perhaps surprising that van den Berghe draws heavily on Marxist theory. Although Marxists were almost unanimously hostile to sociobiology, sociobiologists frequently emphasized the potential compatibility of Marxist theory and sociobiology (e.g. The Evolution of Human Sociality). 

However, van den Berghe remains, to my knowledge, the only figure (except myself) to have successfully synthesized sociobiology and Marxism so as to produce novel theory.

Thus, for example, he argues that, in almost every society in existence, class exploitation is legitimated by an ideology (in the Marxist sense) that disguises exploitation as either:

1) Kin-selected nepotistic altruism – e.g. the king or dictator is portrayed as benevolent ‘father’ of the nation; or
2) Mutually beneficial reciprocity – i.e. social contract theory or democracy (p60). 

However, contrary to orthodox Marxist theory, van den Berghe regards ethnic sentiments as more fundamental than class loyalty since, whereas the latter is “dependent on a commonality of interests”, the former is often “irrational” (p243). 

“Nationalist conflicts are among the most intractable and unamenable to reason and compromise… It seems a great many people care passionately whether they are ruled and exploited by members of their own ethny or foreigners” (p62).

In short, van den Berghe concludes: 

“Blood runs thicker than money” (p243).

Another difference is that, whereas Marxists view control over the so-called means of production (i.e. the means necessary to produce goods for sale) as the ultimate factor determining exploitation and conflict in human societies, Darwinians instead focus on conflict over access to what I have termed the means of reproduction – in other words, the means necessary to produce offspring (i.e. fertile females, their wombs and vaginas etc.). 

This is because, from a Darwinian perspective: 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (p165).

Thus, unlike his contemporary Darwin, Karl Marx was, for all his ostensible radicalism, in his emphasis on economics rather than sex, just another Victorian sexual prude.[31]

Mating, Miscegenation and Intermarriage 

Given that reproduction, not production, is the ultimate focus of individual and societal conflict and competition, van den Berghe argues that, ultimately, questions of equality, inequality and assimilation must also be determined by reproductive, not economic, criteria.

Thus, he concludes, intermarriage, especially if it occurs, not only frequently, but also in both directions (i.e. involves both males and females of both ethnicities, rather than always involving males of one ethnic group, usually the dominant ethnic group, taking females of the other ethnic group, usually the subordinate group, as wives), is the ultimate measure of racial equality and assimilation: 

“Marriage, especially if it happens in both directions, that is with both men and women of both groups marrying out, is probably the best measure of assimilation” (p218).

In contrast, however, he also emphasizes that mere “concubinage is frequent [even] in the absence of assimilation” (p218). 

Moreover, such concubinage invariably involves males of the dominant-group taking females from the subordinate-group as concubines, whereas dominant-group females are invariably off-limits as sexual partners for subordinate group males. 

Thus, van den Berghe observes, although “dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women”, they nevertheless are jealously protective of their own women and enforce strict double-standards (p33). 

For example, historian Wyn Craig Wade, in his history of the Ku Klux Klan (which I have reviewed here), writes:

“In [antebellum] Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation.” (The Fiery Cross: p20).

The result, van den Berghe reports, is that: 

“The subordinate group in an ethnic hierarchy invariably ‘loses’ more women to males of the dominant group than vice versa” (p75).

Indeed, this same pattern is even apparent in the DNA of contemporary populations. Thus, geneticist James Watson reports that, whereas the mitochondrial DNA of contemporary Colombians, which is passed down the female line, shows a “range of Amerindian MtDNA types”, the Y-chromosomes of these same Colombians are 94% European. This leads him to conclude:

“The virtual absence of Amerindian Y chromosome types reveals the tragic story of colonial genocide: indigenous men were eliminated while local women were sexually ‘assimilated’ by the conquistadors” (DNA: The Secret of Life: p257).

As van den Berghe himself observes: 

“It is no accident that military conquest is so often accompanied by the killing, enslavement and castration of males, and the raping and capturing of females” (p75).

This, of course, reflects the fact that, in Darwinian terms, the ultimate purpose of power is to maximize reproductive success.

However, while the subjugated ethnic group as a whole inevitably suffers a diminution in its fitness, there is a decided gender imbalance in who bears the brunt of this loss.

“The men of the subordinate group are always the losers and therefore always have a reproductive interest in overthrowing the system. The women of the subordinate group, however, frequently have the option of being reproductively successful with dominant-group males” (p27).

Indeed, subordinate-group females are not only able, and sometimes forced, to mate with dominant-group males, but, in purely fitness terms, they may even benefit from such an arrangement.  

“Hypergamy (mating upward for women) is a fitness enhancing strategy for women, and, therefore, subordinate-group women do not always resist being ‘taken over’ by dominant-group men” (p75).

This is because, by so doing, they thereby obtain access both to the greater resources that dominant-group males are able to provide, whether in return for sexual access or as provisioning for their offspring, and to the ‘superior’ genes which facilitated the conquest in the first place.

Thus, throughout history, women and girls have been altogether too willing to consort and intermarry with their conquerors. 

The result of this gender imbalance in the consequences of conquest and subjugation is a lack of solidarity between the men and women of the subjugated group.

“This sex asymmetry in fitness strategies in ethnically stratified societies often creates tension between the sexes within subordinate groups. The female option of fitness maximization through hypergamy is deeply resented by subordinate group males” (p76).

Indeed, even captured females who were enslaved by their conquerors sometimes did surprisingly well out of this arrangement, at least if they were young and beautiful, and hence lucky enough to be recruited into the harem of a king, emperor or other powerful male.

One slave captured in Eastern Europe even went on to become effective queen of the Ottoman Empire at the height of its power. Hurrem Sultan, as she came to be known, was, of course, exceptional, but only in degree. Members of royal harems may have been secluded, but they also lived in some luxury.

Indeed, even in puritanical North America, where concubinage was very much frowned upon, van den Berghe reports that “slavery was much tougher on men than on women”, since:

“Slavery drastically reduced the fitness of male slaves; it had little or no such adverse effect on the fitness of female slaves whose masters had a double interest – financial and genetic – in having them reproduce at maximum capacity” (p133).

Van den Berghe even tentatively ventures: 

“It is perhaps not far-fetched to suggest that, even today, much of the ambivalence in relations between black men and women in America… has its roots in the highly asymmetrical mating system of the slave plantation” (p133).[32]

Miscegenation and Intermarriage in Modern America 

Yet, curiously, patterns of interracial dating in contemporary America are anomalous – at least if we believe the pervasive myth that America is a ‘systemically racist’ society where black people are still oppressed and discriminated against.

On the one hand, genetic data confirms that, historically, matings between white men and black women were more frequent than the reverse, since African-American mitochondrial DNA, passed down the female line, is overwhelmingly African in origin, whereas their Y chromosomes, passed down the male line, are often European in origin (Lind et al 2007). 

However, recent census data suggests that this pattern is now reversed. Thus, black men are now about two and a half times as likely to marry white women as black women are to marry white men (Fryer 2007; see also Sailer 1997). 

This seemingly suggests white American males are actually losing out in reproductive competition to black males. 

This observation led controversial behavioural geneticist Glayde Whitney to claim: 

“By many traditional anthropological criteria African-Americans are now one of the dominant social groups in America – at least they are dominant over whites. There is a tremendous and continuing transfer of property, land and women from the subordinate race to the dominant race” (Whitney 1999: p95).

However, this conclusion is difficult to square with the continued disproportionate economic deprivation of much of black America. In short, African-Americans may be reproductively successful, and perhaps even, in some respects, socially privileged, but, despite benefiting from systematic discrimination in their favour in employment and in admission to institutions of higher education, they clearly also remain, on average, economically much worse-off than whites and Asians in modern America.

Instead, perhaps the beginnings of an explanation for this paradox can be sought in van den Berghe’s own later collaboration with anthropologist, and HBD blogger, Peter Frost.

Here, in a co-authored paper, van den Berghe and Frost argue that, across cultures, there is a general sexual preference for females with somewhat lighter complexion than the group average (van den Berghe and Frost 1986). 

However, as Frost explains in a more recent work, Fair Women, Dark Men: The Forgotten Roots of Racial Prejudice, preferences with regard to male complexion are more ambivalent (see also Feinman & Gill 1977). 

Thus, whereas, according to the title of a novel, two films and a hit Broadway musical, ‘Gentlemen Prefer Blondes’ (who also reputedly, and perhaps as a consequence, have more fun), the idealized male romantic partner is instead tall, dark and handsome.

In subsequent work, Frost argues that ecological conditions in sub-Saharan Africa permitted high levels of polygyny, because women were economically self-supporting, and this increased the intensity of selection for traits (e.g. increased muscularity, masculinity, athleticism and perhaps outgoing, sexually-aggressive personalities) which enhance the ability of African-descended males to compete for mates and attract females (Frost 2008). 

In contrast, Frost argues that there was greater selection for female attractiveness (and perhaps female chastity) in areas such as Northern Europe and Northeast Asia, where, to successfully reproduce, women were required to attract a male willing to provision them during cold winters throughout their gestation, lactation and beyond (Frost 2008). 

This then suggests that African males have simply evolved to be, on average, more attractive to women, whereas European and Asian females have evolved to be more attractive to men.

This speculation is supported by a couple of recent studies of facial attractiveness, which found that black male faces were rated as most attractive to members of the opposite sex, but that, for female faces, the pattern was reversed (Lewis 2011; Lewis 2012). 

These findings could also go some way towards explaining patterns of interracial dating in the contemporary west (Lewis 2012). 

“The Most Explosive Aspect of Interethnic Relations”

However, such an explanation is likely to be popular neither with racialists, for whom miscegenation is anathema, nor with racial egalitarians, for whom, as a matter of sacrosanct dogma, all races must be equal in all things, even aesthetics and sex appeal.[33]

Thus, when evolutionary psychologist Satoshi Kanazawa made a similar claim in a 2011 blog post, outrage predictably ensued: the post was swiftly deleted, his blog dropped by its host, Psychology Today, and the author himself reprimanded by his employer, the London School of Economics, and forbidden from writing any blog or non-scholarly publications for a whole year.

Yet all of this occurred within a year of the publication of the two papers cited above that largely corroborated Kanazawa’s finding (Lewis 2011; Lewis 2012). 

Yet such a reaction is, in fact, little surprise. As van den Berghe points out: 

“It is no accident that the most explosive aspect of interethnic relations is sexual contact across ethnic (or racial) lines” (p75).

After all, from a sociobiological perspective, competition over reproductive access to fertile females is Darwinian conflict in its most direct and primordial form.

Van den Berghe’s claim that interethnic sexual contact is “the most explosive aspect” of interethnic relations also has support from the history of racial conflict in the USA and elsewhere.

The spectre of interracial sexual contact, real or imagined, has motivated several of the most notorious racially-motivated ‘hate crimes’ of American history, from the torture-murder of Emmett Till for allegedly propositioning a white woman, to the various atrocities of the Reconstruction-era Ku Klux Klan in defence of the ostensible virtue of ‘white womanhood’, to the recent Charleston church shooting, ostensibly committed in revenge for the allegedly disproportionate rate of rape of white women by black men.[34]

Meanwhile, interracial sexual relations are also implicated in some of American history’s most infamous alleged miscarriages of justice, from the Scottsboro Boys and Groveland Four cases, and the more recent Central Park jogger case, all of which involved allegations of interracial rape, to the comparatively trivial conduct alleged, but by no means trivial punishment imposed, in the so-called Monroe ‘kissing case’.

Allegations of interracial rape also seem to be the most common precursor of full-blown race riots.

Thus, in early-twentieth century America, the race riots in Springfield, Illinois in 1908, in Omaha, Nebraska in 1919, in Tulsa, Oklahoma in 1921 and in Rosewood, Florida in 1923 were all ignited, at least in part, by allegations of interracial rape or sexual assault.

Meanwhile, on the other side of the Atlantic, multi-racial Britain’s first modern post-war race riot, the Notting Hill riot in London in 1958, began with a public argument between an interracial couple, when white passers-by joined in on the side of the white woman against her black Jamaican husband (and pimp) before turning on them both.

Meanwhile, Britain’s most recent unambiguous race riot, the 2005 Birmingham riot, an entirely non-white affair, was ignited by the allegation that a black girl had been gang-raped by South Asians.

[Edit: Interestingly, Britain’s latest race riot, which occurred in Kirkby, Merseyside, and took place some months after this piece was first posted, also follows the same pattern, having been provoked by the allegation that local underage girls were being sexually propositioned and harassed by asylum seekers who were being housed in a local hotel.]

Meanwhile, at least in the west, whites no longer seem to participate in race riots, save as victims. However, an exception was the 2005 Cronulla riots in Sydney, Australia, which were ignited by the allegation that Middle Eastern males were sexually harassing white Australian girls on Sydney beaches.

Similarly, in Britain, though riots have yet to result, the spectre of so-called Muslim grooming gangs, preying on, and pimping out, underage white British girls in towns across the north of England, has arguably done more to ignite anti-Muslim sentiment among whites in the UK than a whole series of Jihadist terrorist attacks on British civilian targets.

Thus, in Race: The Reality of Human Differences (which I have reviewed here), Sarich and Miele caution that miscegenation, often touted as the universal panacea for racism simply because, if practiced sufficiently widely, it would eventually eliminate all racial differences, or at least blur the lines between racial groups, may actually, at least in the short-term, incite racist attacks.

This, they argue, is because: 

“Viewed from the racial solidarist perspective, intermarriage is an act of race war. Every ovum that is impregnated by the sperm of a member of a different race is one less of that precious commodity to be impregnated by a member of its own race and thereby ensure its survival” (Race: The Reality of Human Differences: p256).

This “racial solidarist perspective” is, of course, a crudely group selectionist view of Darwinian competition, and it leads Sarich and Miele to hypothesize: 

“Paradoxically, intermarriage, particularly of females of the majority group with males of a minority group, is the factor most likely to cause some extremist terrorist group to feel the need to launch such an attack” (Race: The Reality of Human Differences: p255).

In other words, in sociobiological terms, ‘Robert’, a character from one of Michel Houellebecq’s novels, has it right when he claims: 

“What is really at stake in racial struggles… is neither economic nor cultural, it is brutal and biological: It is competition for the cunts of young women” (Platform: p82).

Endnotes

[1] Admittedly, the Croatian War of Independence is indeed sometimes said to have been triggered, or at least precipitated, by a football match between Dinamo Zagreb and Red Star Belgrade, and the riot that occurred at the ground on that day. However, this war was, of course, ethnic in origin, fought between Croats and Serbs, and the football match served as a triggering event only because the two teams were overwhelmingly supported by Croats and Serbs respectively.
This leads to an interesting observation – namely that rivalries such as those between supporters of different football teams tend to become especially malignant and acrimonious when support for one team or the other comes to be inextricably linked to ethnic identity.
Thus it is surely no accident that, in the UK, the most intense rivalry between groups of football supporters is that between supporters of Rangers and Celtic in Glasgow, at least in part because the rivalry has become linked to religion, which was, at least until recently, a marker for ancestry and ethnicity, while an apparently even more intense rivalry was that between Linfield and Belfast Celtic in Northern Ireland, which was also based on a parallel religious and ethnic divide, and ultimately became so acrimonious that one of the two teams had to withdraw from domestic football and eventually ceased to exist.

[2] Actually, however, contrary to Brigandt’s critique, it is clear that van den Berghe intended his “biological golden rule” only as a catchy and memorable aphorism, crudely summarizing Hamilton’s rule, rather than a quantitative scientific law akin to, or rivalling, Hamilton’s Rule itself. Therefore, this aspect of Brigandt’s critique is, in my view, misplaced. Indeed, it is difficult to see how this supposed rule could be applied as a quantitative scientific law, since relatedness, on the one hand, and altruism, on the other, are measured in different currencies. 
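
For readers unfamiliar with it, Hamilton’s rule itself can be stated, in its simplest form, as the inequality:

rb > c

where r is the coefficient of relatedness between actor and recipient (a dimensionless proportion), while b, the fitness benefit conferred on the recipient, and c, the fitness cost incurred by the actor, are both measured in the same currency of expected offspring. It is precisely because r enters only as a dimensionless weighting of like-for-like fitness units that Hamilton’s rule works as a quantitative law, whereas a purely verbal ‘golden rule’, setting relatedness against altruism directly, cannot.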

[3] Thus, van den Berghe concedes that: 

“In many cases, the common descent ascribed to an ethny is fictive. In fact, in most cases, it is partly fictive” (p27).

[4] The question of racial nationalism (i.e. encompassing all members of a given race, not just those of a single ethnicity or language group) is actually more complex. Certainly, members of the same race do indeed share some degree of kinship, in so far as they are indeed (almost by definition) on average more closely biologically related to one another than to members of other races – and indeed that relatedness is obviously apparent in their phenotypic resemblance to one another. This suggests that racial nationalist movements such as that of, say, UNIA or of the Japanese imperialists, might have more potential as a viable form of nationalism than do attempts to unite racially disparate ethnicities, such as civic nationalism in the contemporary USA. The same may also be true of Oswald Mosley’s Europe a Nation campaign, at least while Europe remained primarily monoracial (i.e. white). However, any such racial nationalism would incorporate a far larger and more culturally, linguistically and genetically disparate group than any form of nationalism that has previously proven capable of mobilizing support.
Thus, Marcus Garvey’s attempt to create a kind of pan-African ethnic identity enjoyed little success and was largely restricted to North America, where African-Americans do indeed share a common language and culture in addition to their race. Similarly, the efforts of Japanese nationalists to mobilize a kind of pan-Asian nationalism in support of their imperial aspirations during the first half of the twentieth century were an unmitigated failure, though this was partly because of the brutality with which they conquered and suppressed the other Asian nationalities whose support for pan-Asianism they intermittently and half-heartedly sought to enlist.
On the other hand, it is sometimes suggested that, in the early twentieth century, a white supremacist ideology was largely taken for granted among whites. However, while to some extent true, this shared ideology of white supremacism did not prevent the untold devastation wrought by the European wars of the early twentieth century, namely World Wars I and II, which Patrick Buchanan has collectively termed The Great Civil War of the West.
Indeed, European nationalisms usually defined themselves by opposition to other European peoples and powers. Thus, just as Irish nationalism is defined largely by opposition to Britain, and Scottish nationalism by opposition to England, so English (and British) nationalism has itself traditionally been directed against rival European powers such as France and Germany (and formerly Spain), while French nationalism seems to have defined itself primarily in opposition to the Germans and the British, and German nationalism in opposition to the French and Slavs, etc.
It is true that, in the USA, a kind of pan-white American nationalism did seem to prevail in the early twentieth century, albeit initially limited to white protestants, and excluding at least some recent European immigrants (e.g. Italians, Jews). This is, however, a consequence of the so-called melting pot, and really only amounts to yet another parochial nationalism, namely that of a newly-formed ethnic group – white Americans.
At any rate, today white American nationalism is, at most, decidedly muted in form – a kind of implicit white racial consciousness, or, to coin a phrase, the nationalism that dare not speak its name. Thus, van den Berghe observes:

“In the United States, the whites are an overwhelming majority, so much so that they cannot be meaningfully conceived of as a ruling group at all. The label ‘white’ in the United States does not correspond to a well-defined ethnic or racial group with a high degree of social organization or even self-consciousness, except regionally in the south” (p183).

Van den Berghe wrote this in 1981. Today, of course, whites are no longer such an “overwhelming majority” of the US population. On the contrary, they are already well on the way to becoming a minority in America, a milestone that is likely to be reached over the coming decades.
Yet, curiously, white ‘racial consciousness’ is seemingly even more muted and implicit today than it was back when van den Berghe authored his book – and this is seen even in the South, which van den Berghe cited as an exception and lone bastion of white identity politics.
True, White Southerners may vote as solidly for Republican candidates as they once did for the Democrats. However, overt appeals to white racial interests are now as anathema in the South as elsewhere.
Thus, as recently as 1990, a more or less open white racialist like David Duke was able to win a majority of the white vote in Louisiana in his run for the Senate. Today, this is unimaginable.
If the reason that whites lack any ‘racial consciousness’ is indeed, as van den Berghe claims, because they represent such an “overwhelming majority” of the American population, then it is interesting to speculate if and when, during the ongoing process of white demographic displacement, this will cease to be the case.
One thing seems certain: If and when it does ever occur, it will be too late to make any difference to the ongoing process of demographic displacement that some have termed ‘The Great Replacement’ or a third demographic transition.

[5] Of course, a preference for those who look similar to oneself (or one’s other relatives) may itself function as a form of kin recognition (i.e. of recognizing who is kin and who is not). This is referred to in biology as phenotype matching. Moreover, as Richard Dawkins has speculated in The Selfish Gene (reviewed here), racial feeling could conceivably have evolved through a misfiring of such a crude heuristic (The Selfish Gene: p100).

[6] Actually, I suspect that, at least historically, both mothers and fathers may indeed, on average, have provided rather less care for their mixed-race offspring than for offspring of the same race as themselves, simply because mixed-race offspring were more likely to be born out of wedlock, not least because interracial marriage was, until recently, strongly frowned upon and, in some jurisdictions, either not legally permitted or even outright criminalized. Both mothers and fathers tended to provide less care for illegitimate offspring: fathers, because they often refused to acknowledge their illegitimate offspring, had little or no contact with them, and may not even have been aware of their existence; mothers, because, lacking paternal support, they usually had no means of raising their illegitimate offspring alone and hence often gave them up for adoption or fostering.

[7] On the other hand, in his paper, An integrated evolutionary perspective on ethnicity, the controversial anti-Semitic evolutionary psychologist Kevin MacDonald disagrees with this conclusion, citing personal communication from geneticist and anthropologist Henry Harpending for the argument that:

“Long distance migrations have easily occurred on foot and over several generations, bringing people who look different for genetic reasons into contact with each other. Examples include the Bantu in South Africa living close to the Khoisans, or the pygmies living close to non-pygmies. The various groups in Rwanda and Burundi look quite different and came into contact with each other on foot. Harpending notes that it is ‘very likely’ that such encounters between peoples who look different for genetic reasons have been common for the last 40,000 years of human history; the view that humans were mostly sessile and living at a static carrying capacity is contradicted by history and by archaeology. Harpending points instead to ‘starbursts of population expansion’. For example, the Inuits settled in the arctic and exterminated the Dorsets within a few hundred years; the Bantu expansion into central and southern Africa happened in a millennium or less, prior to which Africa was mostly the yellow (i.e., Khoisan) continent, not the black continent. Other examples include the Han expansion in China, the Numic expansion in northern America, the Zulu expansion in southern Africa during the last few centuries, and the present day expansion of the Yanomamo in South America. There has also been a long history of invasions of Europe from the east. ‘In the starburst world people would have had plenty of contact with very different looking people’” (Macdonald 2001: p70).

[8] Others have argued that the differences between Tutsi and Hutu are indeed largely a western creation, part of the divide and rule strategy supposedly deliberately employed by European colonialists, as well as a theory of Tutsi racial superiority promulgated by European racial anthropologists known as the Hamitic theory of Tutsi origins, which suggested that the Tutsi had migrated from the Horn of Africa, and had benefited from Caucasoid ancestry, as reflected in their supposed physiological differences from the indigenous Hutu (e.g. lighter complexions, greater height, narrower noses).
On this view, the distinction between Hutu and Tutsi was originally primarily socioeconomic rather than racial, and, at least formerly, the boundaries between the two groups were quite fluid.
I suspect this view is nonsense, reflecting political correctness and the leftist tendency to excuse any evidence of dysfunction or oppression in non-Western cultures as necessarily a product of the malign influence of western colonizers. (Most preposterously, even the Indian caste system has been blamed on British colonizers, although it actually predated them, in one form or another, by several thousand years.)
With respect to the division between Tutsi and Hutu, there are not only morphological differences between the two groups in average stature, nose width and complexion, but also substantial differences in the prevalence of genes for lactose tolerance and the sickle-cell trait. These differences do indeed seem to suggest that, as predicted by the reviled ‘Hamitic theory’, the Tutsi have affinities with populations from the Horn of Africa and East Africa. Modern genome analysis tends to confirm this conclusion.

[9] Exceptions, where immigrant groups retain their distinctive language for multiple generations, occur where immigrants speaking a particular language arrive in sufficient numbers, and are sufficiently isolated in ethnic enclaves and ghettos, that they mix primarily or exclusively with people speaking the same language as themselves. A related exception is in respect of economically, politically or socially dominant minorities, such as alien colonizers, as well as market-dominant or middleman minorities, who often resist assimilation into the mainstream culture precisely so as to maintain their cultural separateness and hence their privileged position within society, and who also, partly for this reason, take steps to socialize, and ensure their offspring socialize, primarily among their own group. 

[10] Some German-Americans were also interned during World War II. However, far fewer were interned than among Japanese-Americans, especially on a per capita basis.
Nevertheless, some German-Americans were treated very badly indeed, yet they, unlike the Japanese, have yet to receive a government apology or compensation. Moreover, there was perhaps justification for the differing treatment accorded Japanese- and German-Americans, since the latter were generally longer established and, being white, more successfully integrated into mainstream American society, whereas, in the case of Japanese-Americans, there was perceived to be a real threat of enemy sabotage.
Also, with regard to van den Berghe’s observation that atomic weapons were used only against Japan, this is rather misleading. Nuclear weapons could not have been used against Germany, since, by the time of the first test detonation of a nuclear device, Germany had already surrendered. Yet, in fact, the Manhattan Project seems to have been begun with the Germans very much in mind as a prospective target. (Many of the scientists involved were Jewish, many having fled Nazi-occupied Europe for America, and hence their hostility towards the Nazis, and perhaps Germans in general, is easy to understand.)
Whether it is true that, as van den Berghe claims, atomic bombs were never actually likely to be “dropped over, say, Stuttgart or Dortmund” is a matter of supposition. Certainly, there was great animosity towards the Germans in America, as illustrated by the Morgenthau Plan, which, although ultimately never put into practice, was initially highly influential in directing US policy in Europe and was even supported by President Roosevelt.
On the other hand, Roosevelt’s references to ‘the Nazis, the Fascists, and the Japanese’ might simply reflect the fact that there was no obvious name for the faction or regime in control of Japan during the Second World War, since, unlike in Germany and Italy, no named political party had seized power. I am therefore unconvinced that a great deal can necessarily be read into this.

[11] This was especially so in historical times, before the development of improved technologies of long-distance transportation (ships, aeroplanes) enabled more distantly related populations to come into contact, and hence conflict, with one another (e.g. blacks and whites in the USA and South Africa, South Asians and English in the UK or under the British Raj). Thus, the ancient Indian treatise on statecraft and strategy, Arthashastra, observed that a ruler’s natural enemies are his immediate neighbours, whereas his next-but-one neighbours, being immediate neighbours of his own immediate neighbours, are his natural allies. This is sometimes credited as the origin of the famous aphorism, ‘The enemy of my enemy is my friend’.

[12] The idea that neighbouring groups tend to be in conflict with one another precisely because, being neighbours, they are also in close contact, and hence competition, with one another, ironically posits almost the exact opposite relationship between ‘contact’ and intergroup relations to that posited by the famous contact theory of mid-twentieth-century psychology, which held that increased contact between members of different racial and ethnic groups would lead to reduced prejudice and animosity.
This, of course, depends, at least partly, on the nature of the ‘contact’ in question. Contact that involves territorial rivalry, economic competition and war, obviously exacerbates conflict and animosity. In contrast, proponents of contact theory typically had in mind personal contact, rather than, say, the sort of impersonal, but often deadly, contact that occurs between rival belligerent combatants in wartime.
In fact, however, even at the personal level, contact can take many different forms, and often functions to increase inter-ethnic animosity. Hence the famous proverb, ‘familiarity breeds contempt’.
Indeed, social psychologists now concede that only ‘positive’ interactions with members of other groups (e.g. friendship, cooperation, acts of altruism, mutually beneficial trade) reduce animosity and conflict.
In contrast, negative interactions (e.g. being robbed, mugged or attacked by members of another group) only serve to reinforce, exacerbate, or indeed create intergroup animosity. This, of course, reduces the contact hypothesis to little more than common sense – positive experiences with a given group lead to positive perceptions of that group; negative interactions, to negative perceptions.
This in turn suggests that stereotypes are often based on real experiences and therefore tend to be true – if not of all individuals, then at least at the statistical, aggregate group level.
I would add that, anecdotally, even positive interactions with members of disdained outgroups do not always shift perceptions regarding the disdained outgroup as a whole. Instead, the individuals with whom one enjoys positive interactions, and even friendships, are often seen as exceptions to the rule (‘one of the good ones’), rather than as representative of the demographic to which they belong. Hence the familiar phenomenon of even virulent racists having friendships, and sometimes even heroes, among members of races whom they otherwise generally disdain.

[13] However, van den Berghe acknowledges that racially diverse societies have lived in “relative harmony” in places such as Latin America, where government gives no formal political recognition to racial groups (e.g. racial preferences and quotas for members of certain races) and where the latter do not organize on a racial basis, such that government is, in van den Berghe’s terminology, “non-racial” rather than “multiracial” (p190). However, this is perhaps a naïvely benign view of race relations in Latin American countries such as Brazil, which, despite the fluidity of racial identity and the lack of clear dividing lines between races, is now viewed by most social scientists not so much as a model racial democracy as a racially-stratified pigmentocracy, where skin tone correlates with social status. It is also arguably an outdated view of race relations in Latin America because, perhaps due to indirect cultural and political influence emanating from the USA, ethnic groups in much of Latin America (e.g. blacks in Brazil, indigenous populations in Bolivia) increasingly do organize and agitate on a racial basis.

[14] I am careful here not to refer to the dominant culture as that of either a ‘host population’ or a ‘majority population’, or to the subordinate group as a ‘minority group’ or an incoming group of migrants. This is because sometimes newly-arrived settlers successfully assimilate the indigenous populations among whom they settle, and sometimes it is the majority group who ultimately assimilate to the norms and culture of the minority. Thus, for example, the Anglo-Saxons imposed their Germanic language on the indigenous inhabitants of what is today England, and indeed ultimately on most of the inhabitants of Scotland, Wales and Ireland as well, even though they likely never represented a majority of the population even in England, and may have made only a comparatively modest contribution to the ancestry of the people whom we today call ‘English’.

[15] Interestingly, and no doubt controversially, van den Berghe argues that blacks in the USA do not have any distinctive cultural traits that distinguish them from the white American mainstream, and that their successful assimilation has been prevented only by the fact that, until very recently, whites have refused to ‘assimilate’ them. He is particularly skeptical regarding the notion of any cultural inheritances from Africa, dismissing “the romantic search for survivals of African Culture” as “elusive” (p177).
Indeed, for van den Berghe, the whole notion of a distinct African-American culture is “largely ideological and romantic” (p177). “Afro-Americans are,” he argues, “culturally ‘Anglo-Saxon’” and hence paradoxically “as Anglo as anyone… in America” (p177). He concludes:

“The case for ‘black culture’ rests… largely on the northern ghetto lumpenproletariat, a class which has no direct counterpart. Even in that group, however, much of the distinctiveness is traceable to their southern, rural origins” (p177).

This reference to “southern rural origins” anticipates Thomas Sowell’s later black redneck hypothesis. Certainly, many aspects of black culture, such as dialect (e.g. the use of terms such as y’all and ain’t and the pronunciation of ‘whores’ as ‘hoes’) and stereotypical fondness for fried chicken, are obvious inheritances from Southern culture rather than distinctively black, let alone an inheritance from Africa. Thus, van den Berghe observes:

“Ghetto lumpenproletariat blacks in Chicago, Detroit and New York may seem to have a distinct subculture of their own compared collectively to their white neighbors, but the black Mississippi sharecropper is not very different, except for his skin pigment, from his white counterparts” (p177).

Any remaining differences not attributable to their Southern origins are, van den Berghe claims, not “African survivals, but adaptation to stigma” (p177). Here, van den Berghe perhaps has in mind the inverse morality, celebration of criminality, and ‘bad nigger’ archetype prevalent in, for example, gangsta rap music. Thus, van den Berghe concludes that:

“Afro-Americans owe their distinctiveness overwhelmingly to the fact that they have been first enslaved and then stigmatized as a pariah group. They lack a territorial base, the necessary economic and political resources, and the cultural and linguistic pluralism ever to constitute a successful nation. Their pluralism is strictly a structural pluralism inflicted on them by racism. A stigma is hardly an adequate basis for successful nationalism” (p184).

[16] Thus, Elizabeth Warren was a law professor who became a Democratic Party Senator and Presidential candidate, and had described herself as ‘American Indian’, and been cited by her University employers as an ethnic minority, in order to benefit from informal affirmative action, despite having only a very small amount of Native American ancestry. Krug and Dolezal, meanwhile, both identified as African-American, taking advantage of the one drop rule: Krug, a history professor and leftist activist, exploited her Middle-Eastern appearance, itself likely a reflection of her Jewish ancestry, while Dolezal, formerly a white, blonde girl, through the simple expedient of getting a perm and a tan, managed to become an adjunct professor of black studies at a local university and local chapter president of the NAACP in an overwhelmingly white town and state. Whoever said blondes have more fun?

[17] It has even given rise to a popular new hairstyle among young white males attempting to escape the stigma of whiteness by adopting a racially ambiguous appearance – the ‘mulatto perm’.

[18] Interestingly, the examples cited by Paddy Hannam in his piece on the phenomenon, ‘The rise of the race fakers’, also seem to have been female (Hannam 2021). Steve Sailer wisely counsels caution with regard to the findings of this study, noting that anyone willing to lie about their ethnicity on their college application is likely even more willing to lie in an anonymous survey (Sailer 2021; see also Hood 2007).

[19] Actually, the Northern Ireland settlement is often classed as centripetalist rather than consociationalist. However, the distinction is minimal, with the former arrangement representing a modification of the latter designed to encourage cross-community cooperation, and to prevent, or at least mitigate, the institutionalization and ossification of the ethnic divide that is perceived to occur under consociationalism, where constitutional recognition is accorded to the divide between the two (or more) communities. There is, however, little evidence that centripetalism has ever actually been successful in encouraging cross-community cooperation, beyond what is necessitated by the constitutional system, let alone in encouraging the assimilation of the rival communities and the depoliticization of ethnic identity.

[20] The reason for the difference in the attitudes of leftists and liberals towards majority-rule in Northern Ireland and South Africa respectively seems to reflect the fact that, whereas in Northern Ireland the majority Protestant population were perceived as the dominant ‘oppressor’ group, the black majority in South Africa were perceived as oppressed.
However, it is hard to see why this would mean black majority-rule in South Africa would be any less oppressive of South Africa’s white, coloured, and Asian minorities than Protestant majority rule had been of Catholics in Ulster. On the contrary, precisely because the black majority in South Africa perceive themselves as having been ‘oppressed’ in the past, they are likely to be especially vengeful and feel justified in seeking recompense for their earlier perceived oppression. This indeed seems to be what is occurring in South Africa, and Zimbabwe, today. 
Interestingly, van den Berghe, writing in 1981, was remarkably prescient regarding the long-term prospects both for apartheid and for white South Africans. Thus, on the one hand, he predicted:

“Past experience with decolonization elsewhere in Africa, especially in Zimbabwe (which is in almost every respect a miniature version of South Africa) seems to indicate that the end of white domination is in sight. The only question is whether it will take the form of a prolonged civil war, a negotiated partition or a frantic white exodus. The odds favor, I think, a long escalating war of attrition accompanied by a gradual economic winddown and a growing white emigration” (p174).

Thus, van den Berghe was right in so far as he predicted the looming end of the apartheid system – though hardly unique in making this prediction. However, he was wrong in his predictions as to how this end would come about. On the other hand, however, with ongoing farm murders and the overtly genocidal rhetoric of populist politicians like Julius Malema, van den Berghe was probably right regarding the long-term prognosis of the white community in South Africa when he observed: 

“Five million whites perched precariously at the tip of a continent inhabited by 400 million blacks, with no friends in sight. No matter what happens whites will lose heavily, perhaps their very lives, or at least their place in the African sun that they love so much” (p172). 

However, perhaps surprisingly, van den Berghe denies that apartheid was entirely a failure: 

“Although apartheid failed in the end, it was a rational course for the Afrikaners to take, given their collective aims, and probably did postpone the day of reckoning by about 30 years” (p174).

[21] The only other polity that perhaps has a competing claim to representing the world’s model consociationalist democracy is Switzerland. However, van den Berghe emphasizes that Switzerland is very much a special case, the secret of its success being that:

“Switzerland is one of those rare multiethnic states that did not originate either in conquest or in the breakdown of multinational empires” (p194).

It managed to avoid conquest by its richer and more powerful neighbours simply because:

“The Swiss had the dual advantage in resisting outside conquest: favorable terrain and lack of natural resources” (p194).

Also, it provided valuable services to these neighbours, first providing mercenaries to fight in their armed forces and later specialising in the manufacture of watches and what van den Berghe terms “the management of shady foreigners’ ill-gotten capital” (p194).
In reality, however, although divided linguistically and religiously, Switzerland does not, in van den Berghe’s view, constitute true consociationalism, since the country, which originated as a confederation of formerly independent hill tribes, remains highly decentralized, and power is shared, not by ethnic groups, but rather between regional cantons. Therefore, van den Berghe concludes:

“The ethnic diversity of Switzerland is only incidental to the federalism, it does not constitute the basis for it” (p196-7).

In addition, most cantons, where much of the real power lies, are themselves relatively monoethnic and monolinguistic, at least as compared to the country as a whole.

[22] Indeed, since the Slavs of Eastern Europe were the last group in Europe to be converted to Christianity, and since it was forbidden by Papal decree to enslave fellow-Christians or sell Christian slaves to non-Christians (i.e. Muslims, among whom there was great demand for European slaves), Slavs were preferentially targeted by Christians for enslavement, and even non-Slavic people who were enslaved or sold into bondage were often falsely described as Slavs in order to justify their enslavement and sale to Muslim slaveholders. The Slavs, for geographic reasons, were also vulnerable to capture and enslavement directly by the Muslims themselves.

[23] Another reason that it proved difficult to enslave the indigenous inhabitants of the Americas, according to van den Berghe, is the lifestyle of the latter prior to colonization. Thus, prior to the arrival of European colonists, the indigenous people in many parts of the Americas were still relatively primitive, many subsisting, in whole or in part, as nomadic or semi-nomadic hunter-gatherers. This meant, not only that they had low population densities and were hence few in number and vulnerable to infectious diseases introduced by European colonizers, but also that:

“Such aborigines as existed were mobile, elusive and difficult to control. They typically had a vast hinterland into which they could escape labor exploitation” (p93).

Thus, van den Berghe reports, when, in what is today Brazil, Portuguese colonists led raiding expeditions in an attempt to capture and enslave natives, so many of the latter “escaped, committed suicide or died of disease” that the attempt was soon abandoned (p93).
Perhaps more interestingly, van den Berghe also argues that another reason that it proved difficult to enslave nomadic peoples was that:

“Nomads typically are unused to being exploited since their own societies are often relatively egalitarian, ill-adapted to steady hard labor and lacking in the skills useful to colonial exploiters (as cultivators, for example). They are, in short, lovers of freedom and make very poor colonial underlings… They are regarded by their conquerors as lazy, shiftless and unreliable, as an obstacle to development and as a nuisance to be displaced” (p93).

In contrast, whereas sub-Saharan Africans are usually stereotyped, not entirely inaccurately, as technologically backward compared to other cultures, and this very backwardness is assumed to have facilitated their enslavement, in fact, van den Berghe explains, it was the relatively advanced social organization of West African societies that permitted the transatlantic slave trade to operate so successfully.

“Contrary to general opinion, Africans were so successfully enslaved, not because they belonged to primitive cultures, but because they had a complex enough technology and social organization to sustain heavy losses of manpower without appreciable depopulation. Even the heavy slaving of the 18th century made only a slight impact on the demography of West Africa. The most heavily raided areas are still today among the most densely populated” (p126).

[24] Although this review is based on the 1987 edition, The Ethnic Phenomenon was first published in 1981, whereas Orlando Patterson’s Slavery and Social Death came out just a year later in 1982.

[25] In discussions of slavery in the antebellum American South, much is made of the practice of slave-owners selling the spouses and offspring of their slaves to other masters, thereby breaking up families. On the basis of van den Berghe’s arguments, this might actually have represented an effective means of preventing slaves from putting down roots and developing families and slave communities, and might therefore have helped perpetuate the institution of slavery.
However, even assuming that such practices would indeed have had this effect, it is doubtful that there was any such deliberate long-term policy among slaveholders to break up families in this way. On the contrary, van den Berghe reports:

“It is not true that slave owners systematically broke up slave couples… On the contrary, it was in their interest to foster stable slave families for the sake of morale, and to discourage escape” (p133). 

Thus, though the break-up of slave families certainly occurred, and was no doubt tragic where it did, slaveholders generally preferred to keep slave families intact, precisely because, in forming families, slaves would indeed ‘put down roots’ and hence be less likely to try to escape, lest they leave other family members behind to face the vengeance of their former owners alone, without any protection and support they might otherwise have been in a position to offer. The threat of breaking up families, however, surely remained a useful tool in the arsenal of slaveholders for maintaining control over their slaves. 

[26] While acknowledging, and indeed emphasizing, the virulence of western racialism, van den Berghe, bemoaning the intrusion of “moralism” (and, by extension, ethnomasochism) into scholarship, has little time for the notion that western slavery was intrinsically more malign than forms of slavery practised in other parts of the world or at other times in history (p116). This, he dismisses as “the guilt ascription game: whose slavery was worse?” (p128).
Whereas today, when discussing slavery, white liberal ethnomasochists focus almost exclusively on black slaves in the American South, forms of slavery practised concurrently in other parts of the world were, in many respects, even more brutal. For example, male slaves in the Islamic world were routinely castrated before being sold (p117). 
Given the dangers of this procedure, and the unsterile conditions under which it was performed, Thomas Sowell, in his excellent essay ‘The Real History of Slavery’, reports that “the great majority of those operated on died as a result” (Black Rednecks and White Liberals: p126). Indeed, van den Berghe himself reports that as many as “80 to 90% died of the operation” (p117).
In contrast, while it is true that slaves in the American South had unusually low rates of manumission (i.e. the granting of freedom to slaves), they also had surprisingly high standards of living, being well-fed and long-lived. Indeed, not only did slaves in the American South enjoy standards of living superior to those of most other slave populations, they even enjoyed, by some measures, living standards comparable to those of many non-slave populations, including industrial workers in Europe and the Northern United States, and poor white Southerners, during the same time period (The End of Racism: p88-91; see also Time on the Cross: The Economics of American Slavery). 
Ironically, living standards were so high for the very same reason that rates of manumission were so low – namely, slaves, especially after the abolition and suppression of the transatlantic slave trade (but even before then, owing to the costs of transportation during the Middle Passage), were an expensive commodity. Masters therefore fully intended to get their money’s worth out of their slaves, not only by rarely granting them their freedom, but also by ensuring that they lived long and healthy lives.
In this endeavour, they were surprisingly successful. Thus, van den Berghe reports, in the fifty years that followed the prohibition on the import of new slaves into the USA in 1808, the black population of the USA nevertheless more than tripled (p128). In short, slaves may have been property, but they were valuable property – and slaveholders made every effort to protect their investment.
Ironically, therefore, indentured servants (themselves, in America, often white, and later, in Africa, usually South or East Asian) were, during the period of their indenture, often worked harder, and forced to live in worse conditions, than were actual slaves. This was because, since they were indentured for only a set number of years before they would be free, there was less incentive on the part of their owners to ensure that they lived a long and healthy life.
For example, Thomas Sowell reports how, in the antebellum American South, the most dangerous work on cotton plantations was often reserved for Irish labourers, not slaves, precisely because slaves were too valuable to be risked by employing them in such work (Applied Economics: p37-38).
Van den Berghe concludes: 

“The blanket ascription of collective racial guilt for slavery to ‘whites’ that is so dear to many liberal social scientists is itself a product of the racist mentality produced by slavery. It takes a racist to ascribe causality and guilt to racial categories” (p130). 

Indeed, as Dinesh D’Souza in The End of Racism and Thomas Sowell in his essay ‘The Real History of Slavery’, included in the collection Black Rednecks and White Liberals, both emphasize, whereas all civilizations have practised slavery, what was unique about western civilization was that it was the first civilization ever known to have abolished slavery (at, as it ultimately turned out, no little economic cost to itself).
Therefore, even if liberals and leftists do insist that we play what van den Berghe disparagingly calls “the guilt ascription game”, then white westerners actually come out rather well in the comparison.
As Thomas Sowell observes in this context:

“Often it is those who are most critical of a ‘Eurocentric’ view of the world who are most Eurocentric when it comes to the evils and failings of the human race” (Black Rednecks and White Liberals: p111).

[27] Indeed, in most cultures and throughout most of history, the use of female slaves as concubines was, not only widespread, but also perfectly socially acceptable. For example, in the Islamic world, the use of female slaves as concubines was entirely open and accepted, not only attracting literally no censure or criticism in the wider society or culture, but also receiving explicit sanction in the Quran. For this reason, female slaves in the Islamic world tended to be in greater demand than males, and usually commanded a higher price.
In contrast, most slaves transported to the Americas were male, since males were more useful for hard, intensive agricultural labour and, in puritanical North America, sexual contact between slaveholder and slave was very much frowned upon, even though it certainly occurred. Thus, van den Berghe cynically observes: 

“Concubinage with slaves was somewhat more clandestine and hypocritical in the English and Dutch colonies than in the Spanish, Portuguese and French colonies where it was brazen, but there is no evidence that the actual incidence of interbreeding was any higher in the Catholic countries” (p132). 

Partial corroboration for this claim is provided by historian Eugene Genovese, who, in his book Roll, Jordan, Roll: The World the Slaves Made, reports that, in New Orleans slave markets:

“First-class blacksmiths were being sold for $2,500 and prime field hands for about $1,800, but a particularly beautiful girl or young woman might bring $5,000” (Roll, Jordan, Roll: p416).

[28] Actually, exploitation can still be an adaptive strategy, even in respect of close biological relatives. This depends on the precise relative gains and losses in fitness to both the exploiter (the slave owner) and his victim (the slave), and on their coefficient of relatedness, in accordance with Hamilton’s rule. Thus, it is possible that a slaveholder’s genes may benefit more from his continuing to exploit his slaves as slaves than from freeing them, even if the latter are also his kin. Possibly the best strategy will often be a compromise: say, keeping your slave-kin in bondage but treating them rather better than other, non-related slaves, or freeing them in your will upon your death. 
Of course, this is not to suggest that individual slaveholders consciously (or subconsciously) perform such a calculation, nor even that their actual behaviour is usually adaptive (see the Sahlins fallacy, discussed here). Slaveholding is likely an ‘environmental novelty’ to which we are yet to evolve adaptive responses.
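For readers unfamiliar with Hamilton’s rule, the underlying arithmetic can be sketched as follows (a minimal illustration; the example figures are mine, chosen purely for exposition, and not drawn from van den Berghe):

\[
rB > C
\]

where \(r\) is the coefficient of relatedness between slaveholder and slave, \(B\) is the fitness benefit to the slave of being freed, and \(C\) is the fitness cost to the slaveholder of forgoing the slave’s labour. On this logic, manumission of a slaveholder’s own child (\(r = \tfrac{1}{2}\)) is favoured whenever \(B > 2C\), but, for a great-grandchild (\(r = \tfrac{1}{8}\)), only when \(B > 8C\) – hence the plausibility of the compromise strategies suggested above.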

[29] Others suggest that Thomas Jefferson himself did not father any offspring with Sally Hemings and that the more likely father is Jefferson’s wayward younger brother Randolph, who would, of course, have shared the same Y chromosome as his elder brother. For present purposes, this is not especially important, since, either way, Hemings’s offspring would have been blood relatives of Jefferson to some degree, hence likely influencing his decision to free them or permit them to escape.

[30] Quite how this destruction can be expected to have manifested itself is not spelt out by van den Berghe. Perhaps, with each passing generation, as slaves became more and more closely biologically related to their masters, more and more slaves would have been freed, until there were simply none left. Alternatively, perhaps, as slaves and slaveowners increasingly became biological kin to one another, the institution of slavery would gradually have become less oppressive and exploitative until ultimately it ceased to constitute true slavery at all. At any rate, in the Southern United States this (supposed) process was forestalled by the American Civil War and the Emancipation Proclamation; nor does it appear to have occurred in Latin America.  

[31] Another area of conflict between Marxism and Darwinism is the assumption of the former that somehow all conflict and exploitation will end in a posited future communist utopia. Curiously, although healthily cynical about exploitation under Soviet-style communism (p60), van den Berghe describes himself as an anarchist (van den Berghe 2005). However, anarchism seems even more hopelessly utopian than communism, given humanity’s innate sociality and desire to exploit reproductive competitors. In short, a Hobbesian state of nature is surely no one’s utopia (except perhaps Ragnar Redbeard’s). 

[32] The idea that there is “ambivalence in relations between black men and women in America” seems anecdotally plausible, given, for example, the delightfully misogynistic lyrics found in much African-American rap music. However, it is difficult to see how this could be a legacy of the plantation era, when everyone alive today is several generations removed from that era and living in a very different sexual and racial milieu. Today, black men do rather better in the mating marketplace than do black women, with black men being much more likely to marry non-black women than black women are to marry non-black men, suggesting that black men have a larger dating pool from which to choose (Sailer 1997; Fryer 2007).
Moreover, black men and women in America today are, of course, the descendants of both men and women. Therefore, even if black women did have a better time of it than black men in the plantation era, how would black male resentment be passed down the generations to black men today, especially given that most black men are today raised primarily by their mothers in single-parent homes and often have little or no contact with their fathers?

[33] Indeed, being perceived as attractive, or at least not as ugly, seems to be rather more important to most women than being perceived as intelligent. Therefore, the question of race differences in attractiveness is seemingly almost as controversial as that of race differences in intelligence. This, then, leads to the delightfully sexist Sailer’s first law of female journalism, which posits that: 

“The most heartfelt articles by female journalists tend to be demands that social values be overturned in order that, Come the Revolution, the journalist herself will be considered hotter-looking.” 

[34] A popular alt-right meme has it that there are literally no white-on-black rapes. This is, of course, untrue, and reflects a misreading of a table in a US Department of Justice report that was actually based on only a small sample. In fact, the government does not currently release data on the prevalence of interracial rape. Nevertheless, the US Department of Justice report (mis)cited by some white nationalists does indeed suggest that black-on-white rape is much more common than white-on-black rape in the contemporary USA, a conclusion corroborated by copious other data (e.g. Lebeau 1985). 
Thus, in his book Paved with Good Intentions, Jared Taylor reports:

“In a 1974 study in Denver, 40 percent of all rapes were of whites by blacks, and not one case of white-on-black-rape was found. In general, through the 1970s, black-on-white rape was at least ten times more common than white-on-black rape… In 1988 there were 9,406 cases of black-on-white rape and fewer than ten cases of white-on-black rape. Another researcher concludes that in 1989, blacks were three or four times more likely to commit rape than whites and that black men raped white women thirty times as often as white men raped black women” (Paved with Good Intentions: p93). 

Indeed, the authors of one recent textbook on criminology even claim that: 

“Some researchers have suggested, because of the frequency with which African Americans select white victims (about 55 percent of the time), it [rape] could be considered an interracial crime” (Criminology: A Global Perspective: p544). 

Similarly, in the US prison system, where male-male rape is endemic, such assaults disproportionately involve non-white assaults on white inmates, as discussed by the Human Rights Watch report, No Escape: Male Rape in US Prisons.

References

Brigandt (2001) The homeopathy of kin selection: an evaluation of van den Berghe’s sociobiological approach to ethnicity. Politics and the Life Sciences 20: 203-215. 
Feinman & Gill (1977) Sex differences in physical attractiveness preferences. Journal of Social Psychology 105(1): 43-52. 
Frost (2008) Sexual selection and human geographic variation. Special Issue: Proceedings of the 2nd Annual Meeting of the Northeastern Evolutionary Psychology Society. Journal of Social, Evolutionary, and Cultural Psychology 2(4): 169-191. 
Fryer (2007) Guess Who’s Been Coming to Dinner? Trends in Interracial Marriage over the 20th Century. Journal of Economic Perspectives 21(2): 71-90. 
Hamilton (1964) The genetical evolution of social behaviour I and II. Journal of Theoretical Biology 7: 1-16, 17-52. 
Hannam (2021) The rise of the race fakers. Spiked-Online.com, 5 November. 
Hood (2017) The privilege no one wants. American Renaissance, December 11.
Johnson (1986) Kin selection, socialization and patriotism. Politics and the Life Sciences 4(2): 127-154. 
Johnson (1987) In the Name of the Fatherland: An Analysis of Kin Term Usage in Patriotic Speech and Literature. International Political Science Review 8(2): 165-174.
Johnson, Ratwik and Sawyer (1987) The evocative significance of kin terms in patriotic speech. In Reynolds, Falger and Vine (eds) The Sociobiology of Ethnocentrism: Evolutionary Dimensions of Xenophobia, Discrimination, Racism, and Nationalism (London: Croom Helm), pp. 157-174. 
Lebeau (1985) Rape and Racial Patterns. Journal of Offender Counseling Services Rehabilitation 9(1-2): 125-148. 
Lewis (2011) Who is the fairest of them all? Race, attractiveness and skin color sexual dimorphism. Personality & Individual Differences 50(2): 159-162. 
Lewis (2012) A Facial Attractiveness Account of Gender Asymmetries in Interracial Marriage. PLoS One 7(2): e31703. 
Lind et al (2007) Elevated male European and female African contributions to the genomes of African American individuals. Human Genetics 120(5): 713-722. 
Macdonald (2001) An integrative evolutionary perspective on ethnicity. Politics and the Life Sciences 20(1): 67-8. 
Rushton (1998a) Genetic similarity theory, ethnocentrism, and group selection. In Eibl-Eibesfeldt & Salter (eds) Indoctrinability, Ideology, and Warfare: Evolutionary Perspectives (Oxford: Berghahn Books), pp. 369-388. 
Rushton (1998b) Genetic similarity theory and the roots of ethnic conflict. Journal of Social, Political, and Economic Studies 23: 477-486. 
Rushton (2005) Ethnic Nationalism, Evolutionary Psychology and Genetic Similarity Theory. Nations and Nationalism 11(4): 489-507. 
Sailer (1997) Is love colorblind? National Review, July 14. 
Sailer (2021) Do 48% of White Male College Applicants Lie About Their Race? Interesting, if It Replicates. Unz Review, October 21. 
Salmon (1998) The Evocative Nature of Kin Terminology in Political Rhetoric. Politics & the Life Sciences 17(1): 51-57. 
Salter (2000) A Defense and Extension of Pierre van den Berghe’s Theory of Ethnic Nepotism. In James & Goetze (eds) Evolutionary Theory and Ethnic Conflict (Praeger Studies on Ethnic and National Identities in Politics) (Westport, Connecticut: Greenwood Press). 
Salter (2002) Estimating Ethnic Genetic Interests: Is It Adaptive to Resist Replacement Migration? Population & Environment 24(2): 111-140. 
Salter (2008) Misunderstandings of Kin Selection and the Delay in Quantifying Ethnic Kinship. Mankind Quarterly 48(3): 311-344. 
Tooby & Cosmides (1989) Kin selection, genic selection and information dependent strategies. Behavioral and Brain Sciences 12(3): 542-544. 
Van den Berghe (2005) Review of On Genetic Interests: Family, Ethny and Humanity in the Age of Mass Migration by Frank Salter. Nations and Nationalism 11(1): 161-177. 
Van den Berghe & Frost (1986) Skin color preference, sexual dimorphism, and sexual selection: a case of gene-culture co-evolution? Ethnic and Racial Studies 9: 87-113.
Whitney (1999) The Biological Reality of Race. American Renaissance, October 1999.

Kevin Macdonald’s ‘Culture of Critique’: A Fundamentally Flawed Theory of Twentieth Century Jewish Intellectual and Political Activism

Kevin Macdonald, The Culture of Critique: An Evolutionary Analysis of Jewish Involvement in Twentieth-Century Intellectual and Political Movements (1st Books Library, 2002). 

In A People That Shall Dwell Alone (which I have reviewed here), psychologist Kevin Macdonald conceptualized Judaism as a group evolutionary strategy that functioned to promote the survival and prospering of the Jewish people and religion in diaspora. 

In ‘Culture of Critique’, its more famous (and controversial) sequel, Macdonald purports to extend this theory to the behaviour of secular twentieth-century intellectuals of Jewish ancestry.

Here, however, he encounters an immediate and, in my view, ultimately fatal problem. 

For, in A People That Shall Dwell Alone (PTSA) (reviewed here), Macdonald was emphatic that his theory of Judaism was a theory of cultural, not biological, group selection.

In other words, it is a strategy that is encoded, not in Jewish genes, but rather in the teachings of Judaism, the religion. 

It is therefore a theory, not of genetics, but rather of memetics, in accordance with the idea of ‘memes’ as units of cultural selection analogous to genes, as first proposed by Richard Dawkins in The Selfish Gene (which I have reviewed here).[1]

Yet Macdonald envisages even secular Jews as continuing to pursue this so-called group evolutionary strategy, even though they have long since abandoned the religion in whose precepts this cultural group strategy is ostensibly contained or, in some cases, having been raised in secular homes, were never even exposed to it in the first place.[2]

Presumably Macdonald is not arguing that these intellectuals, many of them militant atheists (e.g. Marx and Freud), are actually secret practitioners of Judaism, engaging in what he somewhat conspiratorially terms crypsis. 

How then is this possible? 

Group Commitment 

Macdonald never really directly addresses, or even directly acknowledges, this fundamental problem with his theory. 

The closest he comes to addressing it is by arguing that, since Jewish collectivism and ethnocentrism are, at least according to Macdonald, partly innate, secular Jews continued to pursue ethnocentric ends even after abandoning the religion of their forebears. 

Moreover, just as Jewish ethnocentrism is innate, so, Macdonald argues, is Jewish intelligence and other aspects of the typical Jewish personality profile. Thus, Macdonald claims that the ethnic Jews drawn to movements such as psychoanalysis and Marxism:

“Retained their high IQ, their ambitiousness, their persistence, their work ethic, and their ability to organize and participate in cohesive highly committed groups” (p4). 

These traits, he argues, gave them a key advantage in competition with other intellectual currents. 

The success of these intellectual movements (i.e. Freudianism, Boasian anthropology, Marxism, the Frankfurt School) reflected, then, not their (decidedly modest) explanatory power, but rather the intense commitment and dedication of their adherents to the movement and ideology. 

Thus, just as Macdonald attributes the economic success of Jews to their collectivism and hence their tendency to operate price-fixing trade cartels and favour their co-ethnics in commercial operations, so, he argues, the success of Jewish intellectual movements reflects the commitment and solidarity of their members: 

“Cohesive groups outcompete individualist strategies. The fundamental truth of this axiom has been central to the success of Judaism throughout its history whether in business alliances and trading monopolies or in the intellectual and political movements discussed here” (p5-6; see also p209-10). 

Thus, Macdonald emphasizes the cult-like qualities of psychoanalysis, Marxism and Boasian anthropology, whose members evince a fanatical quasi-religious devotion to the movement, its ideology and leaders. 

He argues that these movements recreated the structure of traditional Jewish religious groups in Eastern European shtetlach, being grouped around a charismatic leader (a rebbe) who is the object of reverence and veneration, and against whom no dissent was tolerated on pain of excommunication from the group (p225-6).  

Thus, according to Macdonald, ideologies such as Marxism, psychoanalysis and the ‘standard social science model’ (SSSM) in psychology, sociology and anthropology take on many features of traditional religion, including the tendency to persecute heresy.

This does indeed seem to represent an accurate model of how the psychoanalytic movement operated under the dictatorial leadership of Freud. It is also an accurate model of how the Soviet Union operated under communism, with deviationism relentlessly persecuted and suppressed in successive purges.

Similarly, among social scientists, biological approaches to understanding human behaviour, such as sociobiology, evolutionary psychology and behavioural genetics, and especially theories of sex and race differences (and social class differences), for example in intelligence, have aroused an opposition among sociologists and anthropologists that often borders on persecution and witch-hunts.

However, such quasi-religious political cults are hardly exclusive to Jews.

On the contrary, National Socialism in Germany evinced a very similar structure, being organized around a charismatic leader (Hitler), who elicited reverence and whose word was law (the so-called Führerprinzip). 

But Nazism was, of course, a movement very much composed of and led by white European Gentiles. 

To this, Macdonald would, I suspect, respond by quoting from the previous installment in the Culture of Critique series, where he argued: 

“Powerful group strategies tend to beget opposing group strategies that in many ways provide a mirror image of the group which they combat” (Separation and Its Discontents: pxxxvii). 

Thus, in Separation and its Discontents, Macdonald provocatively contends: 

“National Socialist ideology was a mirror image of traditional Jewish ideology… [Both shared] a strong emphasis on racial purity and on the primacy of group ethnic interests rather than individual interests… [and] were greatly concerned with eugenics” (Separation and Its Discontents: p194). 

On this view, Judaism provided, if not necessarily the conscious model for Nazism, then at least its ultimate catalyst. Nazism was, on this view, ultimately a defensive, or at least reactive, strategy.[3]

In other words, Macdonald suggests cult-like movements in Europe are mostly either manifestations of Judaism as a group evolutionary strategy, or reactions against Judaism as a group evolutionary strategy. 

This strikes me as doubtful, and as according the Jews an importance in determining the course of European history which, for all their gargantuan and vastly disproportionate contributions to European culture, science and civilization, they do not wholly warrant. 

Instead, I believe there is a pan-human tendency to form such fanatical cult-like groups led by charismatic leaders. 

Indeed, in Separation and Its Discontents, Macdonald himself acknowledges that there is a pan-human proclivity to form such groups but insists that “Jews are higher on average in this system” than are other Europeans (Separation and Its Discontents: p31). 

At any rate, Macdonald’s claim at least has the advantage that it leads to testable predictions, namely that: 

(1) That few such cult-like movements existed in Europe before the settling of Jews, or in regions where Jews were largely absent; and

(2) That all (or most) such movements were either:

(a) Jewish movements, led and dominated by Jews; or
(b) Anti-Semitic movements opposed to Jews.

As noted above, I doubt these predictions can be borne out. However, interestingly, in Separation and Its Discontents, Macdonald does cite two studies that supposedly found that Jews were indeed “overrepresented among [members of] non-Jewish religious cults” (Separation and Its Discontents: p24).[4]

At any rate, a final problem with Macdonald’s theory is that, even if the Jewish tendency towards ethnocentrism and collectivism is indeed partly innate, this surely involves a disposition towards, not a specifically Jewish ethnocentrism, but rather an ethnocentrism in respect of whatever group the person in question comes to identify with. 

Thus, since many Jews are raised in secular households, often not even especially aware of their Jewish ancestry, we would hence expect Jewish ethnocentrism to manifest itself in disproportionate numbers of Jews joining the white nationalist movement![5]

Debunking Marx, Boas and Freud 

Undoubtedly the strongest part of Macdonald’s book is his debunking of the scientific merits of such intellectual paradigms as Boasian anthropology, the standard social science model and Freudian psychoanalysis.

Macdonald fails to convince me that these ideologies and belief-systems function as part of a Jewish ‘group evolutionary strategy’ (read: Jewish conspiracy) to subvert Western culture. He does, however, amply demonstrate that they are indeed pseudo-scientific nonsense. 

Yet, for Macdonald, the very scientific weakness of such paradigms as Marxism, Freudian psychoanalysis and the Standard Social Science Model is positive evidence that they serve a group evolutionary function, as otherwise their success in attracting adherents is difficult to explain. 

Thus, he writes: 

“The scientific weakness of these movements is evidence of their group-strategic function” (pvi). 

Here, however, Macdonald goes too far. 

The scientific weakness of the theories and movements in question does indeed suggest that the reason for their popularity and success in attracting adherents must reflect something other than their explanatory power. However, he is wrong in presupposing this something is necessarily their supposed “group strategic function” in ethnic competition.[6]

Therefore, Macdonald’s critique of the theoretical and scientific merits of the intellectual movements discussed is not only the best part of his book, but also, in principle, entirely separable from his theory of the role of these movements in promoting an ostensible Jewish group evolutionary strategy. 

Take, for example, his critiques of Boasian anthropology and Freudian psychoanalysis, which are, of those discussed by Macdonald, the two intellectual movements with which I am most familiar and hence with respect to which I am most qualified to assess the merits of his critique.[7]

In assessing the scientific merits of Boasian cultural anthropology, Macdonald concludes that Boasian anthropology was not so much a science, nor even a pseudo-science, as an outright rejection of science: 

“An important technique of the Boasian school was to cast doubt on general theories of human evolution, such as those implying developmental sequences, by emphasizing the vast diversity and chaotic minutiae of human behavior, as well as by emphasizing the relativism of standards of cultural evaluation. The Boasians argued that general theories of cultural evolution must await a detailed cataloguing of cultural diversity, but in fact no general theories emerged from this body of research in the ensuing half-century of its dominance of the profession… Because of its rejection of fundamental scientific activities such as generalization and classification, Boasian anthropology may thus be characterized more as an anti-theory than as a theory” (p24). 

In other words, the Boasian paradigm involves, and seeks to make a perverse virtue out of, throwing one’s arms up in despair and declaring that human behaviour is simply too complex, and too culturally variable, to permit the formulation of any sort of general theory. 

This reminds me of David Buss’s critique of the notion that ‘culture’ is itself an adequate explanation for cultural differences, another idea very much derived from post-Boasian American anthropology. Buss writes: 

“Patterns of local within-group similarity and between-group differences are best regarded as phenomena that require explanation. Transforming these differences into an autonomous causal entity called ‘culture’ confuses the phenomena that require explanation with a proper explanation of the phenomena. Attributing such phenomena to culture provides no more explanatory power than attributing them to God, consciousness, learning, socialization, or even evolution, unless the causal processes subsumed by these labels are properly described. Labels for phenomena are not proper causal explanations for them” (Evolutionary Psychology: The New Science of the Mind: p404). 

Accepting that no society is more advanced than another, that there is no general direction to cultural change and that all differences between societies and cultures are purely random is essentially to accept the null hypothesis as true and to abandon, or rule out a priori, any attempt to construct a causal framework for explaining cultural differences. 

It is not science, but a form of obscurantism in direct opposition to science. 

Jews and the Left 

Another interesting element of Macdonald’s work is his summary of just how Jewish-dominated some of these ostensibly Jewish intellectual movements really were. 

This is something of a revelation, precisely because the topic is politely passed over in most mainstream histories of, say, revolutionary communism in Eastern Europe and America, or of the psychoanalytic movement, whether those histories are sympathetic or hostile to the movements under discussion. 

The topic of Jewish involvement in the Bolshevik revolution in Russia is one of great controversy, not least on account of Nazi propaganda regarding so-called Judeo-Bolshevism. However, Macdonald does not address this topic in any great depth in ‘Culture of Critique’, and readers interested in Macdonald’s take on this subject might instead seek out his essay Stalin’s Willing Executioners, a review of Yuri Slezkine’s critically acclaimed The Jewish Century, a book which itself addresses this fraught topic in one of its chapters.[8]

Instead, in his chapter on “Jews and the Left”, Macdonald focusses primarily on Jewish involvement in radical leftist movements in Poland and the United States.

In the USA, the Jewish overrepresentation among radical leftists is especially striking, probably because of both the relatively high numbers of Jews resident in the USA and the very low levels of support for socialism among non-Jewish Americans throughout most of the twentieth century.[9]  

Thus, Macdonald reports that: 

“From 1921 to 1961, Jews constituted 33.5 percent of the Central Committee members [of the Communist Party USA] and the representation of Jews was often above 40 percent (Klehr 1978, 46). Jews were the only native-born ethnic group from which the party was able to recruit. Glazer (1969, 129) states that at least half of the CPUSA membership of around 50,000 were Jews into the 1950s” (p72). 

Similarly, Macdonald reports: 

“In the 1930s Jews ‘constituted a substantial majority of known members of the Soviet underground in the United States’ and almost half the individuals prosecuted under the Smith Act of 1947 (Rothman & Lichter 1982)” (p74).

Likewise, with respect to the so-called new left and 1960s student radicalism, Macdonald reports: 

“Flacks (1967: 64) found that 45% of students involved in a protest at the University of Chicago were Jewish… Jews constituted 80% of the students signing a petition to end the ROTC at Harvard and 30-50% of the Students for a Democratic Society – the central organization for radical students. Adelson (1972) found that 90 percent of his sample of radical students at the University of Michigan were Jewish… Braungart (1979) found that 43% of the SDS had at least one Jewish parent and an additional 20 percent had no religious affiliation. The latter are most likely to be predominantly Jewish: Rothman and Lichter (1982: 82) found that the ‘overwhelming majority of radical students who claimed that their parents were atheists had Jewish backgrounds’” (p76-7).  

In short, it appears not unreasonable to claim that the radical left in twentieth century America, which never gained significant electoral support but nevertheless had a substantial social, cultural, academic and indirect political influence on American society, would scarcely have existed were it not for the presence of Jewish radicals.

However, in this respect, the USA was quite exceptional, due both to the relatively large numbers of Jews resident in the country and to the almost complete lack of support for radical leftism among non-Jewish Americans until very recently.[10]

Jewish-Dominated Sciences – and Pseudo-Sciences

Just as Jews numerically dominated the American radical left, so, Macdonald reveals, they dominated the psychoanalytic movement. Thus, we learn from Macdonald’s account that, not only were the leaders of the psychoanalytic movement, and individual psychoanalysts, disproportionately Jewish, so were their clients: 

“Jews have been vastly overrepresented as patients seeking psychoanalytic treatments, accounting for 60 percent of the applicants to psychoanalytic clinics in the 1960s” (p133). 

Indeed, Macdonald reports that there was: 

“A Jewish subculture in New York in mid-twentieth-century America in which psychoanalysis was a central cultural institution that filled some of the same functions as traditional religious affiliation” (p133). 

This was that odd, and now fast-disappearing, New York subculture, familiar to most of us only through Woody Allen movies, in which visiting a psychoanalyst was a regular weekly ritual analogous to attending a church or synagogue. 

Yet, as noted above, the overrepresentation of Jews in the psychoanalytic movement is an aspect of Freudianism that is usually downplayed in most discussions or histories of the psychoanalytic movement, including those hostile to psychoanalysis. 

For example, Hans Eysenck, in his Decline and Fall of the Freudian Empire, mentions the allegation that psychoanalysis was a ‘Jewish science’, only to dismiss it as irrelevant to the question of the substantive merits of psychoanalysis as a theoretical paradigm or method of treatment (Decline and Fall of the Freudian Empire: p12).  

Yet, here, Eysenck is right. Whether an intellectual movement is Jewish-dominated, or even part of a ‘Jewish group evolutionary strategy’, is ultimately irrelevant to whether its claims are true and represent a useful and empirically-productive way of viewing the world.[11]

For example, many German National Socialists dismissed theoretical physics as a ‘Jewish science’, and, given the overrepresentation of Jews among leading theoretical physicists in Germany and elsewhere, it was indeed a disproportionately Jewish-dominated field. 

However, whereas psychoanalysis was indeed a pseudoscience, theoretical physics certainly was not. 

Indeed, the fact that so many leading theoretical physicists were forced to flee Germany and German-occupied territories in the mid-twentieth century on account of their Jewishness, together with the National Socialist regime’s a priori dismissal of theoretical physics as a discredited Jewish science, has even been implicated as a key factor in the Nazis’ ultimate defeat, as it arguably led to their failure to develop an atom bomb.

Cofnas’s Default Hypothesis 

In a recent critique of Macdonald’s work, Nathan Cofnas (2018) argues that Jews are in fact overrepresented, not only in the political and intellectual movements discussed by Macdonald, but indeed in all intellectual and political movements that are not overtly antisemitic.

Here, Cofnas is surely right. Whatever your politics (short of Nazism), you are likely to count Jews among your intellectual heroes. 

For example, Karl Popper was ethnically Jewish, yet was also a leading critic of both psychoanalysis and Marxism, dismissing both as quintessentially unfalsifiable pseudo-sciences. Likewise, Robert Trivers and David Barash were pioneering early sociobiologists, but also of Jewish ethnicity. 

Indeed, Macdonald, to his credit, himself helpfully lists several prominent Jewish sociobiologists and behavior geneticists, acknowledging: 

“Several Jews have been prominent contributors to evolutionary thinking as it applies to humans as well as human behavioral genetics, including Daniel G Freedman, Richard Herrnstein, Seymour Itzkoff, Irwin Silverman, Nancy Segal, Lionel Tiger and Glenn Weisfeld” (p39). 

Indeed, ethnic Jews are even seemingly overrepresented among race theorists.

These include Richard Herrnstein, co-author of The Bell Curve (which I have reviewed here); Stanley Garn, the author of Human Races and co-author, with Carleton Coon, of Races: A Study of the Problems of Race Formation in Man; Nathaniel Weyl, the author of, among other racialist works, The Geography of Intellect; Daniel Freedman, the author of some controversial and, among racialists, seminal, studies on race differences in behaviour among newborn babies; and philosopher Michael Levin, author of Why Race Matters.[12]

Likewise, the most prominent champions of hereditarianism with regard to race differences in intelligence in the mid-to-late twentieth century, namely Hans Eysenck and Arthur Jensen, were half-Jewish and quarter-Jewish respectively.[13]

Meanwhile the most prominent contemporary populariser and champion of hereditarianism, including with respect to race differences, is Steven Pinker, who is also ethnically Jewish.[14]

Indeed, Nathan Cofnas is himself Jewish and likewise a staunch hereditarian

Also, although not a racial theorist as such, it is perhaps also worth noting that the infamous nineteenth-century ‘positivist criminologist’, Cesare Lombroso, a bête noire of radical environmental determinists, who infamously argued that criminals were an atavistic throwback to an earlier stage in human evolution, was also of Jewish background, albeit Sephardic rather than Ashkenazi. 

On the other hand, however, the first five opponents of sociobiology I could name offhand when writing this review (namely, Stephen Jay Gould, Richard Lewontin, Leon Kamin, Steven Rose and Marshall Sahlins) were all ethnic Jews to a man.[15]

In short, if ethnic Jews are vastly overrepresented among malignly influential purveyors of obscurantist pseudoscience, they are also vastly overrepresented among important contributors to real science, including in controversial areas such as the study of innate sex differences and race differences in intelligence and behaviour.

Indeed, if there is a national or ethnic group disproportionately responsible for obscurantist, faddish, anti-scientific and just plain bad (but nevertheless highly influential) ideas in philosophy, social science, and the humanities, then I would say that it is not Jewish intellectuals, but rather French intellectuals.[16]

Are we then to posit that these intellectuals were somehow secretly pursuing a ‘Group Evolutionary Strategy’ to advance the interests of France? 

Why Are Jews Overrepresented Among Leading Intellectuals? 

Cofnas (2018), for his part, attributes the overrepresentation of Jews among leading intellectuals to: 

1) The higher average IQ of Jews; and
2) The disproportionate concentration of Jews in urban areas.

In explaining the overrepresentation of Jews by reference to just two factors, Cofnas’s theory is certainly more parsimonious than Macdonald’s theory of partly unconscious group strategizing, which comes close to being a conspiracy theory. 

Indeed, if one were to go through passages of Macdonald’s work replacing the words “Jewish Group Evolutionary Strategy” with “Jewish conspiracy”, it would read much like a traditional antisemitic conspiracy theory. 

However, I suspect Macdonald is right that a further factor is the tendency of Jews to promote the work of their co-ethnics. Thus, he cites one interesting study which used surname analysis to suggest that academic researchers with stereotypically Jewish surnames were more likely to both collaborate with, and cite the work of, other academic researchers with stereotypically Jewish surnames, as compared to those with non-Jewish surnames (p210; Greenwald & Schuh 1994). 

This, of course, reflects an ethnocentric preference. However, to admit as much is not necessarily to agree with Macdonald that Jews are any more ethnocentric than Gentile Europeans, but rather to recognize that ethnocentrism is a pan-human psychological trait and Jews are no more exempt from this tendency than are other groups (see The Ethnic Phenomenon: which I have reviewed here). 

Leftism and Iconoclasm 

But there is one thing that Cofnas’s default hypothesis cannot explain – namely, why, if Jews are overrepresented in leadership positions in all political and intellectual movements, they are nevertheless especially overrepresented on the Left (see here for data confirming this pattern). 

This overrepresentation on the left is paradoxical, since Jews are disproportionately wealthy, and leftism is hence against their economic interests. 

Moreover, Macdonald himself argues in A People That Shall Dwell Alone that Jews traditionally acted as agents and accessories of governmental oppression (e.g. as tax farmers), resented by the poor, but typically protected by their elite patrons.[17]

Why, then, were Jews, throughout most of the twentieth century, especially overrepresented on the left?

Cofnas (2018) suggests that Jews will be overrepresented among any political or intellectual movements that are not overtly antisemitic

However, this cannot explain the especial overrepresentation of Jews on the Left, since, at least by the middle of the twentieth century, overt antisemitism has been as anathema among mainstream conservatives as it is among leftists.[18]

Yet all the movements discussed by Macdonald are broadly leftist. 

Perhaps the only exception is Freudian psychoanalysis.  

Indeed, although Macdonald emphasizes its co-option by the Left, especially by the Frankfurt School, some leftists dismiss Freudianism as inherently reactionary, as when student radicalism is dismissed as a form of adolescent rebellion against a father-figure, and feminism as a form of penis envy.[19]

Indeed, amusingly, in this context, Rod Liddle even claims that:

“Many psychoanalysts believe that the Left’s aversion to capitalism is simply a displaced loathing of Jews” (Liddle 2005).

Nevertheless, though not intrinsically leftist, Freudianism is certainly iconoclastic. 

Thus, one almost universal feature of Jewish intellectuals has been iconoclasm.

Thus, Jews seem as overrepresented among leading libertarians as among leftists. For example, Ludwig von Mises, Ayn Rand, Milton Friedman, Robert Nozick and Murray Rothbard were all of Jewish ancestry. 

Yet libertarianism is usually classed as an extreme right-wing ideology, at least in accordance with the simplistic one-dimensional left-right axis by which most people attempt to conceptualize the political spectrum and plot people’s politics. 

However, in reality, far from being in any sense ‘conservative’, libertarian ideas, if and when put into practice, are just as destructive of traditional societal mores as is Marxism, possibly more so. It is therefore anything but ‘conservative’ in the true sense. 

In contrast, while prominent among neoliberals and, of course, so-called neoconservatives, relatively few Jews seem to be socially conservative (e.g. in relation to issues like abortion, gay rights and feminism, not to mention immigration).  

Orthodox and Conservative Jews are perhaps an exception here. However, the latter are highly insular, living very much in a closed world, like religious Jews in the pre-emancipation era.  

Therefore, although they may indeed vote predominantly for conservative candidates, beyond voting, they rarely involve themselves in politics outside their own communities, either as candidates or activists. 

Macdonald himself seeks to explain Jewish iconoclasm in terms of social identity theory.

On this view, Jews, by virtue of their alien origins, enforced separation and minority status, not to mention the discrimination and resentment often directed towards them by host populations, felt estranged and alienated from mainstream culture and hence developed a hostility towards it. 

Here, Macdonald echoes Thorstein Veblen’s theory of Jewish intellectual preeminence (Veblen 1919). 

Veblen argued that Jewish intellectual achievement reflected Jews’ only partial assimilation into western societies, which meant that they were less committed to the prevailing dogmas of those societies; this produced both a degree of scholarly detachment and objectivity, and a highly skeptical, enquiring cast of mind, ideally suiting them to careers in scholarship and science. 

At first, Macdonald reports: 

“Negative views of gentile institutions were… confined to internal consumption within the Jewish community” (p7). 

However, with emancipation and secularization, Jewish critiques of the West increasingly went mainstream and began to gain a following even among Gentiles. 

Jewish Radical Critique… of Judaism Itself? 

However, the problem with seeing Jewish iconoclasm as an attack on Gentile culture is that the ideologies espoused necessarily entail a rejection of traditional Jewish culture too. 

Thus, if Christianity was indeed delusional, repressive and patriarchal, then this critique applied equally to the religion whence Christianity derived – namely, Judaism.

Indeed, far from Judaism being a religion that, unlike Christianity and Islam, is not sexually repressive (a view Macdonald attributes to Freud), the most sexually repressive, illiberal and, from a contemporary left-liberal perspective, problematic elements of Christian doctrine almost all derive directly from Judaism and the Old Testament

Thus, explicit condemnation of homosexuality occurs, not in the teaching of Jesus, but rather in the Old Testament (Leviticus 18:22; Leviticus 20:13). Similarly, it is principally from a passage in the Old Testament that the Christian opposition to masturbation and coitus interruptus derives (Genesis 38:8-10). 

The Old Testament also, of course, contains the most racist and genocidal biblical passages (e.g. Deuteronomy 20:16-17; Joshua 10:40), as well as the only biblical commandments seemingly advocating mass rape and sexual enslavement (e.g. Deuteronomy 20:13-14; Numbers 31:17-18) – see discussion here.

Only in respect of the question of divorce and remarriage is the teaching of Jesus in the New Testament arguably less liberal than that in the Old Testament.[20]

Likewise, if the nuclear family was pathological, patriarchal and the root cause of all neurosis, then this applied also to the traditional Jewish family. 

In short, radical critique is necessarily destructive of all traditional values and institutions, Jewish values and traditions very much included. 

Neither is this radical critique of Jewish culture always merely implicit. 

True, many Jewish iconoclasts concentrated their fire on Christian and Gentile cultural traditions. However, this might be explained by the fact that it was Christian and Gentile cultural traditions that were dominant within the societies in which these intellectuals found themselves. 

However, secular Jewish intellectuals had, not least by virtue of their secularism, rejected Jewish culture and traditions too. 

Indeed, far from arbitrarily exempting Jews from their radical critique of traditional society and religion, many Jewish intellectuals were positively anti-Semitic in the degree of their criticism of Jews and of Judaism.  

A case in point is the granddaddy of Jewish Leftism, Karl Marx, who receives comparatively scant attention from Macdonald, probably for precisely this reason.[21]

Yet Marx’s writings, especially but not exclusively, in his infamous essay On the Jewish Question, are so anti-Jewish that, were it not for Marx’s own Jewish background and impeccable leftist credentials, modern readers would surely dismiss him as a raving anti-Semite, if not insist upon his cancellation for crimes against political correctness (see Whisker 1984).[22]

Although I dislike the term self-hating Jew on account of its pejorative and Freudian connotations of psychopathology, the tradition of Jewish self-criticism continues – from the anti-Zionism of radical leftists like Noam Chomsky and Norman Finkelstein, to broadly ‘alt right’ Jews like Ron Unz and David Cole.[23]

Macdonald claims that Jewish leftists envisaged an ethnically inclusive society in which Jews would continue to exist as a distinct group. 

Actually, however, in my understanding, most radical leftists envisaged all forms of religious or ethnic identity as withering away in the coming communist utopia, such that both Judaism as a religion and the Jews as a people would ultimately cease to exist in a post-revolutionary society.

Thus, Yuri Slezkine, in The Jewish Century, like Macdonald, emphasizes the hugely disproportionate role of Jews in the Bolshevik revolution, yet interprets their motivation quite differently.

“Most Jewish rebels did not fight the state in order to become free Jews; they fought the state in order to become free from Jewishness—and thus Free. Their radicalism was not strengthened by their nationality; it was strengthened by their struggle against their nationality. Latvian or Polish socialists might embrace universalism, proletarian internationalism, and the vision of a future cosmopolitan harmony without ceasing to be Latvian or Polish. For many Jewish socialists, being an internationalist meant not being Jewish at all… The Jews, as a group, were the only true Marxists because they were the only ones who truly believed that their nationality was ‘chimerical’; the only ones who—like Marx’s proletarians but unlike the real ones—had no motherland” (The Jewish Century: p152-3).

Admittedly, Macdonald does amply demonstrate that even secular Jewish leftists, in both the West and Soviet Russia, continued to socialize, and intermarry, overwhelmingly among themselves.

Yet this is hardly surprising, since ethnocentrism and in-group preference are universal phenomena, and people in general tend to marry, and socialize with, those with similar backgrounds and personal characteristics to themselves, a phenomenon referred to by biologists as assortative mating.

Also, Macdonald comes close to contradicting himself, since, in addition to emphasizing that Jewish radicals, including the Bolshevik leaders in the USSR, married overwhelmingly among themselves, he also makes great play of the fact that, among those Bolshevik leaders who were not Jewish, many had Jewish wives (p97).

Moreover, what Macdonald does not acknowledge is that, in the aftermath of the Bolshevik revolution, there was actually a massive increase in the rate of Jewish-Gentile intermarriage, Slezkine reporting:

“Between 1924 and 1936, the rate of mixed marriages for Jewish males increased from 1.9 to 12.6 percent (6.6 times) in Belorussia, from 3.7 to 15.3 percent (4.1 times) in Ukraine, and from 17.4 to 42.3 percent (2.4 times) in the Russian Republic. The proportions grew higher for both men and women as one moved up the Bolshevik hierarchy. Trotsky, Zinoviev, and Sverdlov were married to Russian women… The non-Jews Andreev, Bukharin, Dzerzhinsky, Kirov, Kosarev, Lunacharsky, Molotov, Rykov, and Voroshilov, among others, were married to Jewish women” (The Jewish Century: p179).

Indeed, it is difficult to see how Jews could remain a separate and endogamous ethnic group in the long term in the absence of a shared religion, not just in the Soviet Union, but also in the West as a whole, since, over time, the basis for their shared kinship would inevitably become increasingly remote.

It is true that some Marranos, in Iberia and elsewhere, managed to retain a Jewish identity over multiple generations by secretly continuing to practise Judaism, a strategy Macdonald and others have called crypsis.

However, this could hardly apply to Jewish leftists, since even Macdonald does not go as far as to claim that such militant secularists and anti-religionists as Marx and Freud were actually secret practitioners of Judaism.[24]

Macdonald also argues that, since the Jewish tendency towards higher IQs, high conscientiousness and high-investment parenting is (supposedly) partly innate, Jews were relatively immunized against the destructive effects of the sexual revolution on rates of divorce, illegitimacy and single-parenthood (p147-9).[25]

Likewise, if the Jewish tendency towards ethnocentrism is also innate, Jews would presumably be less vulnerable to the impact of universalist and antiracist ideologies on group cohesion.

However, even assuming that this is true, does Macdonald actually envisage that the Jewish psychoanalysts and other Jewish thinkers who (supposedly) promoted hedonism and universalism consciously foresaw and intended that their social, intellectual and political activism would have a greater effect on gentile family and culture than on that of Jews for this reason?

This is surely implausible and would amount to a conspiracy theory. 

Moreover, it might instead be argued that, since Jews were at the forefront of, and overrepresented within, these intellectual movements, Jewish culture was actually especially vulnerable to the effects of such ideologies.

Thus, perhaps Orthodox Jews were indeed relatively insulated from, and immunized against, the effects of the 1960s counterculture. But, then, so were the Amish and Christian fundamentalists.

On the other hand, many Jewish student radicals very much practised what they preached (e.g. hedonism, promiscuity, drug abuse, and terrorism).

Immigration 

Macdonald’s penultimate chapter discusses the role of Jews in reforming immigration law in the USA.[26]

Macdonald shows that Jewish individuals, networks and organizations played a central role in advocating for the opening up of America’s borders, and the passage of the 1965 Immigration Act, which exposed white America to replacement levels of non-white immigration, resulting in an ongoing, and now surely irreversible, demographic displacement.[27]

The basis of Macdonald’s thesis is that Jews perceive themselves as safer in multi-ethnic societies where they, as Jews, don’t stand out so much. The essence of this cynical logic was perhaps best distilled by Jewish comedienne Sarah Silverman, who, during one of her stand-up routines, claimed:

“The Holocaust would never have happened if black people lived in Germany in the 1930s and 40s… well, it wouldn’t have happened to Jews.”[28]

There is indeed some truth to this idea. If I walk around London and see Sikhs in turbans, Muslims in burqas and hijabs and people of all different racial phenotypes, then even the elaborate apparel of Hasidic Jews might not jump out at me as overly strange. 

As for those Jews whose only outward evidence of ethnicity is, say, a skullcap or an especially large nose, I am likely to see them as just another white person, no more exotic than, say, an Italian-American.

Thus, today, most people see Jews as white and hence fail to notice their overrepresentation in media, politics, government and big business, and, when leftist campaigners protest that the Oscars are ‘so white’, the average man in the street is perhaps to be forgiven for not enquiring too far into the precise ethnic background of all these ostensibly ‘white’ Hollywood executives and movie producers.

However, I’m not entirely convinced that mass immigration is indeed ‘good for the Jews’. 

For one thing, many such immigrants, especially in Europe, tend to be Muslim, and Muslims have their own ‘beef’ with the Jews regarding the conquest, expulsion and subsequent persecution of their coreligionists in Palestine.[29]

Thus, while stories periodically trend in the media regarding an increase in anti-Semitic hate-crimes in Europe, what is typically omitted from these news stories is that those responsible are almost invariably Muslim youths (see The Retreat of Reason, reviewed here: p107-11).[30]

In addition, some blacks, like Nation of Islam leader Louis Farrakhan, also stand accused of anti-Semitism.

In fact, however, Farrakhan’s anti-Semitism is, in one sense, overblown. His religion holds that all white people, Jew and Gentile alike, are a race of white devils invented by an evil black scientist called Yakub (the most preposterous part of which theory is arguably the idea of a black scientist inventing something that useful).  

His comments about Jews are thus no more disparaging than his beliefs about whites in general. The particular outrage that his anti-Jewish comments have garnered reflects only the greater ‘victim-status’ accorded Jews in the contemporary West as compared to other whites, despite their hugely disproportionate wealth and political power.

In contrast, anti-white rhetoric is all but ubiquitous on the political left, and indeed widespread throughout American society and culture, and hardly unique to Farrakhan. It therefore passes largely without comment. 

Yet this points to another problem for American Jews as a direct result of both increasing ethnic diversity and increasing anti-white animosity – namely that, if increasing ethnic diversity does indeed mean that Jews come to be seen as no different from other whites, then the animosity of many non-whites towards whites, an animosity often nurtured by leftist Jewish intellectuals, is, unlike the destroying angel in the Book of Exodus, unlikely to distinguish Jew from Gentile. 

Yet, given their history, Jews, more than other whites, should be all too aware of the dangers of becoming a wealthy but resented minority, as whites in America are poised to become by the middle of the current century, thanks to the immigration policy that Jews were, in Macdonald’s own telling, instrumental in moulding.

In short, if I began this section of my review with a quote from a Jewish comedienne regarding blacks, it behoves me to conclude with a quote from a black comedian concerning Jews. Chris Rock, discussing the alleged anti-Semitism of Farrakhan in one of his stand-up routines, explains:

“Black people don’t hate Jews. Black people hate white people. We don’t got time to dice white people into little groups.”

Endnotes

[1] Macdonald, however, never mentions the meme concept in PTSDA, perhaps on account of an antipathy to Richard Dawkins, whom he blames for prejudicing evolutionists against the idea that groups have any important role to play in evolution (A People That Shall Dwell Alone: pviii). He does, however, mention the meme concept on one occasion in ‘Culture of Critique’, where he acknowledges:

“The Jewish intellectual and cultural movements reviewed here may be viewed as memes designed to facilitate the continued existence of Judaism as a group evolutionary strategy” (p237).

However, Macdonald cautions:

“Their adaptedness for gentiles who adopt them is highly questionable, however, and indeed, it is unlikely that any gentile who believes that, for example, anti-Semitism is necessarily a sign of a pathological personality is behaving adaptively” (p237).

[2] Curiously, Macdonald even refers to these secular thinkers and political activists as continuing to practise what he calls “Judaism as a group evolutionary strategy”, a phrase he uses repeatedly throughout this book, even though the vast majority of the thinkers he discusses are secular in orientation. This suggests that, for Macdonald, the word “Judaism” has a rather different, and broader, meaning than it does for most other people, referring not merely to a religion, but rather to a group evolutionary strategy that is, as he purports to show in PTSDA, encapsulated in this religion, yet also somehow broader than the religion itself, and capable of being practised by, say, secular psychoanalysts, Marxists and anthropologists just as much as by devout orthodox Jews. This is a rather odd idea, and certainly a very odd definition of ‘Judaism’, that Macdonald never gets around to explaining.

[3] Indeed, Macdonald goes even further, provocatively arguing that the ultimate progenitor of Nazi race theory is not to be found among such infamously anti-Semitic proto-Nazi notables as Wagner, Chamberlain or Gobineau, let alone Eckart, Rosenberg or Hitler himself, but rather the celebrated, and ethnically Jewish, British Prime Minister Benjamin Disraeli. Despite being, at least nominally, a Christian convert and marrying a Gentile, Disraeli, according to Macdonald, not only considered the Jews a superior race vis-à-vis white Gentiles, but also attributed this superiority to their alleged “racial purity” (Separation and Its Discontents: p181).
Thus, he quotes Disraeli as observing:

“The other degraded races wear out and disappear; the Jew remains, as determined, as expert, as persevering, as full of resource and resolution as ever… All of which proves that it is in vain for man to attempt to battle the inexorable law of nature, which has decreed that a superior race shall never be destroyed or absorbed by an inferior” (Lord George Bentinck: A Political Biography: quoted in Separation and Its Discontents: p181).

Indeed, Macdonald reports, Disraeli considered Jews as being responsible for “virtually all the advances of civilization”, and, evincing black Israelite levels of delusion, apparently even considered Mozart to be Jewish. Thus, Macdonald quotes LJ Rather as concluding:

“Disraeli rather than Gobineau—still less Chamberlain—is entitled to be called the father of nineteenth-century racist ideology” (Reading Wagner: quoted in Separation and Its Discontents: p180).

[4] The studies cited by Macdonald for this claim are Marciano 1981 and Schwartz 1978.

[5] Of course, in making this claim, I am being at least semi-facetious. Jews are not overrepresented among most white nationalist groups because most such groups are also highly anti-Semitic, and hence Jews would not be welcome there. On the other hand, Jews would be welcome among more mainstream civic nationalist and anti-immigration groups, not least because they would lend such groups a defence against the charge of being anti-Semitic or ‘Nazis’. However, they do not appear to be especially well represented among these groups, or, at the very least, not as overrepresented as they are on the political left.

[6] On the contrary, other plausible explanations for why Jew and Gentile alike were drawn to the intellectual movements discussed readily present themselves. For example, wishful thinking may have motivated the Marxist belief in the coming of a communist utopia. Simply a sense of belonging, and of intellectual superiority, may also be a motivating factor in joining such movements as psychoanalysis and Marxism. Indeed, many disparate cults and religions have posited all kinds of odd religious beliefs (arguably odder even than those of Freud), such as reincarnation, miracles etc., without there being any discernible strategic advantage for the overwhelming majority of adherents, indeed sometimes at considerable cost to themselves (e.g. religiously imposed celibacy).

[7] These are also the movements with which I suspect Macdonald himself is most familiar. As an evolutionary psychologist, he is naturally familiar with Boasian anthropology and the standard social science model, to which evolutionary psychology stands largely in opposition. Also, he has a longstanding interest in Freudian psychoanalysis, having earlier written a critique of psychoanalysis as a cult in Skeptic magazine (Macdonald 1996), and also, ten years earlier, a not entirely unsympathetic assessment of Freud’s theories in the light of sociobiological theory (Macdonald 1986), both of which articles critique Freudianism without recourse to anti-Semitism or any talk of ‘Jewish group evolutionary strategies’. Also, the title of his previous book on ‘the Jewish question’, namely ‘Separation and Its Discontents’, is obviously drawn from the title of one of Freud’s own books, namely ‘Civilization and Its Discontents’.

[8] Contrary to some anti-Semitic propaganda, it seems that Jews did not constitute a particularly large proportion of the party membership as a whole. In fact, Slezkine reports that the most overrepresented ethnic group was not the Jews, but rather the Latvians (The Jewish Century: p169).
Yet, if Jews were not overrepresented among the rank-and-file party membership in Russia, they do seem to have been vastly overrepresented among the party leadership, at least prior to Stalin’s purges. Thus, Slezkine reports:

“Their overall share of Bolshevik party membership during the civil war was relatively modest (5.2 percent in 1922), but… [it is estimated that] Jews had made up about 40 percent of all top elected officials in the army… In April 1917, 10 out of 24 members (41.7 percent) of the governing bureau of the Petrograd Soviet were Jews. At the First All-Russian Congress of Soviets in June 1917, at least 31 percent of Bolshevik delegates (and 37 percent of Unified Social Democrats) were Jews. At the Bolshevik Central Committee meeting of October 23, 1917, which voted to launch an armed insurrection, 5 out of the 12 members present were Jews. Three out of seven Politbureau members charged with leading the October uprising were Jews (Trotsky, Zinoviev, and Grigory Sokolnikov [Girsh Brilliant]). The All-Russian Central Executive Committee (VtsIK) elected at the Second Congress of Soviets included 62 Bolsheviks… Among them were 23 Jews, 20 Russians, 5 Ukrainians, 5 Poles, 4 “Balts,” 3 Georgians, and 2 Armenians… [A]ll 15 speakers who debated the takeover as their parties’ official representatives were Jews” (The Jewish Century: p175).

Similarly, an article in one leading Israeli newspaper reports that, despite only ever representing a tiny proportion of the overall Soviet Russian population:

“In 1934, according to published statistics, 38.5 percent of those holding the most senior posts in the Soviet security apparatuses were of Jewish origin” (Plocker 2006).

Similarly, an article in the Jerusalem Post reports that, in the sealed train by which Germany brought Lenin and other communist revolutionaries who had been exiled under the Tsarist regime back into revolutionary Russia, in order to sow chaos and ultimately ignite a second revolution, “almost half the passengers on the train were Jewish” (Frantzman 2017).
Historian Robert Gellately gives what seems to be a balanced picture when he reports of the Jewish role in the October revolution and the Soviet regime:

“Their participation in the Bolshevik Revolution in absolute terms was not great, but five of the twelve members at the Bolshevik Central Committee meeting on October 23 1917 were Jews. The Politburo that led the revolution had seven members, three of whom were Jews. During the stormy years of 1918-21, Jews generally made up one-quarter of the Central Committee and were active in other institutions as well including the Cheka” (Lenin, Stalin & Hitler: p67-8).

Similarly, historian Albert Lindemann reports:

“It seems beyond serious debate that in the first twenty years of the Bolshevik Party the top ten to twenty leaders included close to a majority of Jews. Of the seven ‘major figures’ listed in The Makers of the Russian Revolution, four are of Jewish origin, and of the fifty-odd others included in the list, Jews constitute approximately a third, Jews and non-Russians close to a majority” (Esau’s Tears: p429-30).

In short, the myth of Judeo-Bolshevism was just that – a myth. However, the role of the Jews in both the Communist revolution and the later regime, especially in leadership positions and prior to Stalin’s purges, was nevertheless vastly disproportionate to their numbers in the population as a whole.

[9] Perhaps the only country where Jews played a comparably disproportionate role in the radical left was Hungary, where, citing the work of Jewish historian Richard Pipes, Macdonald reports, rather remarkably, that:

“In the short-lived communist government in Hungary in 1919, 95 percent of the leading figures of Bela Kun’s government were Jews” (p99).

[10] In contrast, in Britain, for example, there was an indigenous socialist tradition, which developed quite independently of any external Jewish influence (e.g. the Levellers, Robert Owen). In Britain, while Jews would certainly have been overrepresented among leftist radicals during the twentieth century, I suspect that it would not have been to anything like the same degree, not necessarily because of any lesser per capita involvement of Jews, but rather because of:

  1. The relatively lower numbers of Jews resident in the UK as a proportion of the overall population during this time frame; and
  2. The greater per capita involvement of gentiles in leftist and radical socialist movements.

Meanwhile, in Scandinavian countries, so-called Nordic social democracy surely developed without any significant Jewish influence, or at least any direct influence, if only because so few Jews were resident in these countries. In short, socialism and radical leftism cannot be credited to (or blamed on) Jews alone.

[11] Analogously, leftist critics of neoliberal economics, sociobiological theory and evolutionary psychology sometimes claim that these theories were devised within a liberal-capitalist milieu, ultimately in order to justify the capitalist system. However, even assuming this were true, it is not directly relevant to the question of whether the theories in question are true, or at least provide a productive model of how the real world operates. Thus, biologist John Maynard Smith wrote of how:

“There is a recent fashion in the history of science to throw away the baby and keep the bathwater – to ignore the science, but to describe in sordid detail the political tactics of the scientists” (The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today: px).

[12] I am aware that all these writers and researchers are Jewish either because they have mentioned their ethnicity in their own writings, or because it has been mentioned by other authors whom I regard as reliable. I have not, for example, merely relied on their having Jewish-sounding names. This is actually a very inaccurate way of determining ancestry, because, not only have many Jewish people anglicized their names, but also most surnames that Americans and British people think of as characteristically Jewish are actually German in origin, and, at most, only somewhat more common among Jews than among German gentiles. Only a few surnames (e.g. Levin, Cohen) are exclusively Jewish in origin, and even these indicate, of course, only male-line ancestry.

[13] For whatever reason, Eysenck spent most of his life denying and concealing his own Jewish ancestry, practising what Macdonald calls crypsis. Interestingly, he also favourably reviewed the first installment of Macdonald’s so-called ‘Culture of Critique trilogy’, A People That Shall Dwell Alone (which I myself have reviewed here), in the psychology journal Personality & Individual Differences, describing it as “a potentially very important contribution to the literature on eugenics, and on reproductive strategy”. Another prominent Jewish champion of hereditarian theories of racial difference was the leading libertarian economist Murray Rothbard.

[14] On his blog, Macdonald has repeatedly disparaged Pinker as occupying “the Stephen Jay Gould Chair for Politically Correct Popularization of Evolutionary Biology at Harvard”. This may be a witty (and perhaps anti-Semitic) putdown. It is also, however, grossly unfair. Pinker has not only championed IQ testing, behavioural genetics and sociobiology, but even the idea of innate differences between races in psychological traits such as intelligence (see What is Your Dangerous Idea: p13-5; Pinker 2006). 

[15] Admittedly, the first four of these very much form a clique, being closely associated with one another, having jointly authored books and articles together and frequently citing one another’s work. This may be why theirs were the first names to occur to me. It might also explain their common ethnicity, as it seems that, according to a study cited by Macdonald, Jewish scholars are more likely to collaborate with and cite fellow Jews (Greenwald & Schuh 1994). On the other hand, anthropologist Marshall Sahlins is not associated with this group, and prior to looking up his biographical details for the purpose of writing this paragraph, I was not aware he was of Jewish ancestry. Perhaps the next best-known critic of sociobiology (or at least the next one I could name offhand) is philosopher Philip Kitcher, who, despite his German-sounding surname, is not, to my knowledge, of Jewish ancestry.

[16] Admittedly, a fair few of the worst offenders among them have been both French and Jewish (e.g. Claude Lévi-Strauss and Jacques Derrida). 

[17] This explains why, despite its supposed association with the so-called ‘far right’, anti-Semitism and leftism typically go together. Thus, on the one hand, Marxists believe that society is controlled by a conspiracy of wealthy capitalists who control the mass media and exploit and oppress everyone else. On the other hand, anti-Semites believe that society is controlled by a conspiracy of wealthy Jewish capitalists who control the mass media and exploit and oppress everyone else.
Hence the famous aphorism: anti-Semitism is the socialism of fools.
Indeed, since the contemporary left in America is endlessly obsessed with the supposed ‘overrepresentation’ of white males in positions of power and influence, it ought presumably also to be concerned about the even greater per capita overrepresentation of Jews in those exact same positions of power and influence, just as the Nazis were.
In short, National Socialism is indeed a form of socialism – the clue’s in the name.

[18] Indeed, today, anti-Semitism is arguably more common on the left, as the left has increasingly made common cause with Palestinians, and indeed with Muslims more generally. Yet, in America, Jews still vote overwhelmingly for the leftist Democratic Party, even though Republicans now tend to be even more vociferously pro-Israel than the Democrats. In the UK, on the other hand, Jews are now more likely to vote for Conservative candidates than for Labour. However, I recall reading that, even in the UK, after controlling for socioeconomic status and income, Jews are still more likely to vote for leftist parties than are non-Jews.

[19] In contrast, as emphasized by Macdonald, other theorists sought to reclaim Freudianism on behalf of the left, notably the infamous (and influential) Frankfurt School, to whom Macdonald devotes a chapter in ‘Culture of Critique’. Thus, the Frankfurt School are today remembered primarily for having combined, on the one hand, Freudian psychoanalysis with, on the other, Marxist social and economic theory. Regarding this brilliant theoretical synthesis, Rod Liddle once memorably remarked:

“[This] is a bit like being remembered for having combined the theory that the sun revolves around the earth with the theory that the earth is flat” (Liddle 2008). 

[20] Thus, whereas various passages in the Old Testament envisage and provide for divorce and remarriage, in contrast Jesus’s teaching on this matter, as reported in the New Testament Gospels, is very strict in forbidding both divorce and remarriage (Matthew 19:3-9; Matthew 5:32). Moreover, precisely because these teachings go against what was common practice amongst Jews at the time of Jesus’s ministry, they are regarded as satisfying the criterion of dissimilarity and hence as historically reliable teachings of the historical Jesus.

[21] Thus, despite including in-depth discussion of the supposed ethnic motivations of many ethnically Jewish Marxist thinkers in his chapter on ‘Jews and the Left’, Macdonald passes over Marx himself in less than a page at the very beginning of this chapter, where he concedes: 

“Marxism, at least as envisaged by Marx himself, is the very antithesis of Judaism… [and] Marx himself, though born of two ethnically Jewish parents, has been viewed by many as an anti-Semite” (p50).

While also conceding that “Marx viewed Judaism as an abstract principle of human greed that would end in the communist society of the future”, he also claims, citing a secondary source, that:

“He envisaged that Judaism, freed from the principle of greed, would continue to exist in the transformed society of the future (Katz 1986, 113)” (p50).

On his Occidental Observer website, Macdonald has also published a piece by the surely pseudonymous ‘Ferdinand Bardamu’ arguing that, despite appearances to the contrary, Marx was indeed pursuing a ‘Jewish group evolutionary strategy’ in his political activism (Bardamu 2020). The attempt is, in my view, singularly unpersuasive.
Interestingly, if Marx was, despite his Jewish background, something of an anti-Semite, the same might also be true of the figure who represents for many anti-Semites, perhaps even more than Marx himself, the quintessential Jewish leftist, namely Leon Trotsky (né Lev Davidovich Bronstein). Thus, according to historian Albert Lindemann, in his somewhat revisionist Esau’s Tears: Modern Anti-Semitism and the Rise of the Jews:

“Trotsky observed that Jews as a whole were not worth much to the cause of revolution, for they tenaciously resisted proletarianization. Even when pushed into desperate poverty, Jews stubbornly retained a ‘petty-bourgeois consciousness,’ which for Trotsky was the most contemptible of all forms of consciousness” (Esau’s Tears: p426-7).

[22] Marx was also highly racist by modern standards. Indeed, Marx even delightfully combined his racism with anti-Semitism in a letter to his patron and collaborator Friedrich Engels, where he describes fellow Jewish socialist (and friend) Ferdinand Lassalle as “the Jewish nigger” and theorizes:

“It is now quite plain to me—as the shape of his head and the way his hair grows also testify—that he is descended from the negroes who accompanied Moses’ flight from Egypt (unless his mother or paternal grandmother interbred with a nigger)… The fellow’s importunity is also niggerlike.”

[23] A complete list of prominent Jews who have iconoclastically challenged cherished and venerated Jewish institutions, beliefs and traditions is beyond the scope of this review. However, such a list would surely include, among others, such figures as Gilad Atzmon, Shlomo Sand and Otto Weininger. Israel Shahak is another Jewish intellectual frequently accused by his detractors of anti-Semitism, and certainly his book Jewish History, Jewish Religion is critical of aspects of Judaism and Talmudic teachings. Likewise, in Israel, the so-called New Historians, themselves overwhelmingly Jewish in ethnicity, were responsible for challenging many of the founding myths of Israel. Also perhaps meriting honourable (or, for some, dishonourable) mention in this context are Murray Rothbard, also Jewish, who extolled the work of Harry Elmer Barnes, himself widely considered an anti-Semite and early pioneer of ‘holocaust denial’; and Paul Gottfried, the paleoconservative Jewish intellectual credited with coining the term ‘alt right’.

[24] In fact, even many Marranos seem to have ultimately lost their Jewish identity, especially those who migrated to the New World, who retained, at most, faint remnants of their former faith in certain cultural traditions the significance of which was gradually lost even to themselves. 

[25] Thus, Macdonald writes:

“Given the very large differences between Jews and gentiles in intelligence and tendencies toward high-investment parenting… Jews suffer to a lesser extent than gentiles from the erosion of cultural supports for high-investment parenting. Given that differences between Jews and gentiles are genetically mediated, Jews would not be as dependent on the preservation of cultural supports for high-investment parenting as would be the case among gentiles… Facilitation of the pursuit of sexual gratification, low investment parenting, and elimination of social controls on sexual behavior may therefore be expected to affect Jews and gentiles differently with the result that the competitive difference between Jews and gentiles… would be exacerbated” (p148-9).

[26] Whereas the preceding chapters focussed on intellectual movements, which, though they almost invariably had a large political dimension, were nevertheless at least one remove away from the determination of actual government policy, this chapter focuses on political activism directly concerned with reforming government policy.

[27] Macdonald also charges Jewish activists with hypocrisy for opposing ethnically-based restrictions on immigration to the USA, while also supporting the overtly racialist immigration policy of Israel, which provides a so-called right of return for ethnic Jews who have never previously set foot in Israel, while denying a literal right of return to Palestinian refugees driven from their homeland in the mid-twentieth century.
In response, Cofnas (2018) notes that Macdonald has not cited any Jews who actually take both these positions. He has only shown that American Jews favour mass non-white immigration to America, whereas Israeli Jews, a separate population, are opposed to non-Jewish immigration to Israel.
However, this only raises the question as to why it is that those Jews resident in America support mass immigration, whereas those resident in Israel support border control and maintaining a Jewish majority. Self-selection may explain part of the difference, as more ethnocentric Jews may prefer to be resident in Israel. However, given the scale of the disparity, and the extent of intermigration and even dual citizenship, it is highly doubtful that this can explain all of it.
As an example, Cofnas (2018) argues that American liberals such as Alan Dershowitz actually support the campaign to admit the (non-white) Beta Israel of Ethiopia into Israel.
However, the Beta Israel number only around 150,000 in total. Therefore, even if all were permitted to emigrate to Israel (which has yet to occur), they would represent less than 2% of Israel’s total population. Clearly, allowing a relatively small number of token ‘black Jews’ to immigrate to Israel is hardly comparable to advocating that people of all ethnicities (and all religions) be permitted to immigrate to Western jurisdictions.
Moreover, the Beta Israel, and even the Falash Mura, are still Jewish in a religious, if not a racial, sense. Yet attempts by white western countries other than Israel to restrict immigration on either racial or religious lines are universally condemned, including by Dershowitz, who condemned Trump’s call for a moratorium on Muslim immigration as incompatible with “the best values of what America should be like”. Dershowitz is therefore indeed guilty of hypocrisy and double standards when it comes to the immigration issue.
Similarly, American TV presenter and political commentator Tucker Carlson recently revealed the hypocrisy of perhaps the most powerful Jewish advocacy group in the USA, the ADL, who had condemned Carlson for crimes against political correctness for opposing replacement-level immigration in the USA, while at the same time, and on the same website, themselves arguing, in a post since blocked from public access, that:

“It is unrealistic and unacceptable to expect the State of Israel to voluntarily subvert its own sovereign existence and nationalist identity and become a vulnerable minority within what was once its own territory.”

Yet this is precisely what the ADL demands of white Americans when it insists that any opposition to replacement-level immigration to America is evidence of ‘white supremacism’.
Macdonald may then, as Cofnas complains, not have actually named any Jewish individuals who are hypocritical with respect to immigration policy in America and Israel; however, Carlson has identified a major Jewish organization that is indeed hypocritical with respect to this issue.
I might add here that, unlike Macdonald, I do not think this type of hypocrisy is either unique to, or indeed especially prevalent or magnified among, Jewish people. On the contrary, hypocrisy is, I suspect, like ethnocentrism, a universal human phenomenon.
In short, people are much better at being tolerant, moderate and conciliatory in respect of what they perceive as other people’s quarrels. Yet, when they perceive themselves, or their people, as having a direct ethnic or genetic stake in an issue at hand, they tend to be altogether less tolerant and conciliatory.

[28] Macdonald himself puts it this way: 

“Ethnic and religious pluralism also serves external Jewish interests because Jews become just one of many ethnic groups. This results in the diffusion of political and cultural influence among the various ethnic and religious groups, and it becomes difficult or impossible to develop unified, cohesive groups of gentiles united in their opposition to Judaism. Historically, major anti-Semitic movements have tended to erupt in societies that have been, apart from the Jews, religiously or ethnically homogeneous (see SAID). Conversely, one reason for the relative lack of anti-Semitism in the United States compared to Europe was that ‘Jews did not stand out as a solitary group of [religious] non-conformists’” (p242).

In addition, Macdonald contends that a further advantage of increased levels of ethnic diversity within the host society is that: 

“Pluralism serves both internal (within-group) and external (between-group) Jewish interests. Pluralism serves internal Jewish interests because it legitimates the internal Jewish interest in rationalizing and openly advocating an interest in overt rather than semi-cryptic Jewish group commitment and nonassimilation” (p241).

In other words, multi-culturalism allows Jews to both abandon the (supposed) pretence of assimilation and overtly advocate for their own ethnic interests, because, in a multi-ethnic society, other groups will inevitably be doing likewise.
However, Jews may also have had other reasons for supporting open borders. After all, Jews are a sojourning diaspora people, who have often migrated from one host society to another, not least to escape periodic pogroms and persecutions. Thus, they had an obvious motive for supporting open borders, namely so that their own coreligionists would be able to migrate to America should the need arise.
One might also argue that, as a people who often had to migrate to escape persecution, they were naturally sympathetic to refugees of other ethnicities, or indeed to other immigrants travelling to new pastures in search of a better life, as their own ancestors had so often done in the past, though Macdonald would no doubt dismiss this interpretation as naïve.

[29] In my view, a better explanation for why so many western countries have opened up their borders to replacement levels of racially, culturally and religiously alien and unassimilable minorities is the economic one. Indeed, here, a Marxist perspective may be of value, since the economically-dominant capitalist class benefits from the cheap labour that Third World migrants provide, as do wealthy consumers who can afford to purchase a disproportionate share of the cheap products and services that such labour provides and produces. In contrast, it is the indigenous poor and working-class, of all ethnicities, who bear a disproportionate share of the costs associated with such migration, including both depressed wages and ethnically-divided, crime-ridden and distrustful communities (see Liddle 2006).

[30] Ironically then, given the substantial numbers of Arab Muslims resident in France, for example, many of the people responsible for so-called ‘anti-Semitic hate crimes’ are themselves ‘Semitic’, and indeed have a rather stronger case for being ‘Semitic’ in a racial sense than do most of their Jewish victims. 

References 

Bardamu (2020) Karl Marx: Founding Father of the Jewish Left? Occidental Quarterly, 4 January.
Cofnas (2018) Judaism as a Group Evolutionary Strategy: A Critical Analysis of Kevin MacDonald’s Theory. Human Nature, 29: 134-156.
Frantzman (2017) Was the Russian Revolution Jewish? Jerusalem Post, November 15. 
Greenwald & Schuh (1994) An Ethnic Bias in Scientific Citations. European Journal of Social Psychology, 24(6), 623–639.
Liddle (2005) Why Labour does not need the Jews. Spectator, 19 February.
Liddle (2006) The Politics of Pleasantville. Spectator, 21 January.
Liddle (2008) Stand by for a year of nostalgia for 1968. Spectator, 5 January.
Macdonald (1986) Civilization and Its Discontents Revisited: Freud as an Evolutionary Biologist. Journal of Social and Biological Structures, 9, 213-220. 
Macdonald (1996) Freud’s Follies: Psychoanalysis as religion, cult, and political movement. Skeptic, 4(3), 94-99.
Macdonald (2005) Stalin’s Willing Executioners? Jews as a Hostile Elite in the USSR. Occidental Quarterly, 5(3): 65-100.
Marciano (1981) Families and Cults. Marriage and Family Review, 4(3-4): 101-117.
Pinker (2006) Groups and Genes. New Republic, 26 June. 
Plocker (2006) Stalin’s Jews. Yedioth Ahronoth (ynetnews.com), 21 December.
Schwartz (1978) Cults and the Vulnerability of Jewish Youth. Jewish Education, 46(2): 23-42.
Veblen (1919) The Intellectual Pre-Eminence of Jews in Modern Europe. Political Science Quarterly, 34(1).
Whisker (1984) Karl Marx: Anti-Semite. Journal of Historical Review, 5(1): 69-76.

Mussolini and the Meaning of Fascism 

Nicholas Farrell, Mussolini: A New Life (London: Phoenix, 2003) 

Nicholas Farrell, author of ‘Mussolini: A New Life’, his controversial revisionist biography of Il Duce, is a journalist, born in England but now resident in Italy. 

Indeed, at the time he wrote this biography, he was living in Predappio, Mussolini’s birthplace and a mecca for neo-fascists, which, though until quite recently a communist stronghold, had at that time (the authorities have since clamped down) a booming cottage industry selling what can only be described as ‘Mussolini memorabilia’ to visiting tourists, fascist pilgrims and the merely curious.

‘Mussolini: A New Life’ is not the definitive Mussolini biography. Indeed, it does not purport to be. Instead, in Farrell’s own view, this honour goes to Italian historian Renzo De Felice’s four-volume magnum opus.

Unfortunately, however, De Felice’s biography stretches to around 6,000 pages, spread over four volumes and published as eight separate books, has never been translated into English, and remained unfinished at the time of the author’s death in 1996. This makes it a heavy read even for someone fluent in Italian, a daunting work to translate, and one likely to be read in full only by professional historians. 

Farrell seems to view his own biography as primarily an abridgement, translation and popularization of De Felice’s work, written in order to bring De Felice’s new revelations, and new perspective, to a wider English-speaking audience. 

In contrast to De Felice’s work, Farrell’s biography is highly readable, and indeed written in a strangely colloquial, conversational style. 

Revisionist 

Yet, be forewarned: Farrell’s biography of Mussolini is not only highly readable, it is also highly revisionist, and attracted no little controversy and criticism when first published in 2003, being variously dismissed as everything from fascist apologetics and whitewash to a hagiographic paean to Il Duce.

Why then the controversy? In what ways was Farrell’s work revisionist?

There seem to be two main respects in which Farrell departs from the mainstream historical narrative regarding fascism in Italy.

First, Farrell argues that Mussolini was not so bad, and was even a relatively successful Italian ruler compared to those who came both before and after him, his posthumous reputation being damaged primarily by his association with Hitler and National Socialism.

Second, Farrell claims that Mussolini, far from being ‘right-wing’, remained, until his dying day, very much a socialist.

Given that Farrell is himself far from socialist, these claims come close to being contradictory. After all, if Mussolini was a leftist, then what is a conservative like Farrell doing defending him? If he was a socialist, then surely he was indeed bad, at least from the perspective of a conservative like Farrell.

Of course, it is possible for conservatives to admire some leftists. (An old aphorism, often attributed to Leo Rosten, has it that conservatives only admire radicals several centuries after the latter are dead.)

However, Farrell perhaps lays himself open to the charge of wanting to both have his cake and eat it too. 

A cynic might interpret his thesis thus: Mussolini was not so bad, and, even if he was, he was a socialist anyway so he’s not our problem. 

Rehabilitation 

Is Farrell, then, successful in rehabilitating Il Duce?

Well, yes, up to a point – the point in question being the latter’s disastrous decision to align with Germany in the run up to World War Two. 

Up until that point, Mussolini had been, at least by twentieth century Italian standards, a relatively successful ruler and, by contemporary international standards, a not especially repressive one. 

Of course, he had, with the aid of his infamous Blackshirt militia, more or less bullied his way into power. Indeed, according to Robert Paxton, Mussolini’s rise to power was actually rather more violent than was Hitler’s, though the violence was on all sides, not just on the part of Mussolini’s fascisti (The Anatomy of Fascism: p49).

Yet, after he had attained power, Mussolini was not especially repressive or draconian.

Prior to the outbreak of World War II, there were no Gulags or concentration camps in Italy, no Night of the Long Knives or Stalinist purges and, until just before the war, no persecution of, or discrimination against, Italy’s Jewish community.

Admittedly, Mussolini’s conquest of Ethiopia was indeed brutal. Thus, the Italians did indeed employ concentration camps in both East and North Africa, among many other brutal and draconian measures.

However, Italian rule in Ethiopia was surely no worse than what preceded it, namely the rule of Emperor Haile Selassie, under whom slavery was still both lawful and widely practiced, despite repeated promises by successive Ethiopian rulers to prohibit and eradicate the practice.[1]

Moreover, Mussolini had a point when he charged Britain and France with hypocrisy for opposing Italian expansion in Africa despite their own vastly greater African colonial possessions, acquired only a few years earlier, sometimes with comparable brutality. 

For example, the Boer War of 1899 to 1902, fought by the British for transparently self-interested economic reasons (namely to gain control over the Boer Republics’ lucrative and newly-discovered gold and diamond reserves), was similarly brutal in nature. Here, the British themselves employed concentration camps, and are indeed sometimes even credited with the dubious distinction of having invented the concept.

Suppressing the Mafia

Today, there is a tendency to deny that the fascist regime had any positive impact on Italy, an implausible conclusion given both the popularity and endurance of the regime in Italy. 

Take, for example, Mussolini’s suppression of the Mafia in Sicily, an achievement to which Farrell himself devotes only a few paragraphs (p182-3). 

In most recent histories of the Sicilian Mafia, Mussolini and his regime are denied any credit whatsoever for this achievement. 

For example, historian John Dickie, in his books Blood Brotherhoods and Cosa Nostra, takes great pains to emphasize that, under Mussolini, the Mafia was not, in fact, finally defeated, but merely went underground and became inactive. Moreover, he insists, most of those mafiosi who were arrested and imprisoned or sent into internal exile during Cesare Mori’s clampdown on the Mafia were not Mafia bosses, but rather, at best, low-level soldiers and underlings. 

It is, of course, true that, under Mussolini, the Mafia was not finally defeated. Indeed, this was amply proven by the resurgence of the Mafia during the post-War period under the Allied occupation and thereafter. 

Yet this view fails to acknowledge that merely forcing the Mafia to go underground and become inactive was an achievement in and of itself, one that seemingly resulted in a massive decrease in serious violent crime in the Mafia’s traditional heartland of Palermo.

For example, another historian of the Sicilian Mafia, perhaps more honest (but certainly no more sympathetic to Fascism), reports that, in the traditional Mafia stronghold of Palermo:

“Between 1924 and 1928 murders… dropped from 278 per year to 25, which, by any standard of crime prevention is impressive” (Mafia: Inside the Dark Heart: p92).

Moreover, while leaving (some of) the Mafia bosses untouched and focusing law enforcement attention on low-level soldiers may seem both unfair and inefficient, actually arresting and taking out of circulation a sufficiently large number of low-level soldiers and associates is likely a highly effective method of suppressing a group such as the Mafia, since it is low-level soldiers and associates who, whether or not on orders from above, are responsible for most of the day-to-day operation, crimes and violence of the group.[2]

Indeed, if the Mafia had indeed been made inactive in this way on a long-term, indefinite basis, then ultimately it would surely have died away and ceased to exist as a criminal network. 

Thus, it was only the overthrow of the Fascist regime and the Allied occupation that permitted the resurgence of the Mafia in the post-War period, not least because imprisoned and exiled Mafiosi are said to have used the very fact of their imprisonment or exile under the Fascist regime as proof of their supposed anti-fascist credentials, and thereby secured appointment to high office under the Allied occupation.[3]

The Fascist campaign against the Mafia seems then, on balance, to have been quite successful.

Of course, the methods employed by Mori and the Fascists to achieve this result were not always in accord with contemporary western notions of due process. On the contrary, they were often quite brutal, and the Fascists have stood accused of ironically employing Mafia-style intimidation against the Mafia – of seeking to ‘out-mafia’ the Mafia, as it were.

One may therefore justifiably question whether the ends justified the means.

Indeed, on one view, Mussolini himself was a gangster whose thuggish Blackshirts essentially used Mafia-style violence and intimidation to bully their way into power. On this view, the cure was rather worse than the disease and, while the Sicilian Mafia was in abeyance, a rather worse Mafia was now in power in Rome itself.

However, Mussolini’s, and Mori’s, achievement in, at least temporarily, defeating the scourge of the Mafia in Sicily and Southern Italy, howsoever achieved, surely cannot be denied.

A Benevolent Dictator? 

The very endurance of the Fascist regime is, in one sense, a measure of its success. By this pragmatic standard, a politician or party is to be regarded as ‘successful’ if they gain power, and thenceforth successfully hold onto it.

Yet the endurance of Mussolini’s regime, together with its relatively sparing resort to repression, is also indirect evidence that, in his policies and governance of the state, he was satisfying the demands of the Italian public – in short, that he was clearly doing something right.

Moreover, Il Duce was not only popular at home, he was also widely respected abroad, and indeed counted among his fawning admirers such politically diverse figures as Winston Churchill, George Bernard Shaw and, of course, Hitler.

Mussolini is famously credited with making the trains run on time, a popular perception that surely had at least some basis in reality.[4]

Certainly, the period of his rule up until the beginning of World War II constituted the most stable period of governance in Italy’s turbulent twentieth-century history, arguably right up to the present day. Whereas the average post-war Italian government has remained in office all of five minutes, and governmental rule in the early twentieth century, before Il Duce came to power, was, if anything, even more unstable, Mussolini himself remained in power for fully two decades.

Moreover, in agreeing the Lateran Accords, and thereby resolving the Roman Question that had dogged the Italian state since the time of Garibaldi, Mussolini produced a legacy that outlived both Fascism and Mussolini himself, since this agreement continues to govern the relationship between Church and State in Italy to this day.[5]

Thus, just as Hitler, with his annexation of Austria (and perhaps of the Sudetenland, not to mention the Polish Corridor, Alsace-Lorraine, Danzig and other parts of German territory surrendered under the Versailles Treaty), could justifiably claim to have completed the unification of Germany that had begun under Bismarck, so Farrell asserts:

“Garibaldi had begun the process of the creation of Italy. Mussolini would complete it” (p199).

Mussolini and Hitler: A Match Made in Hell?

Mussolini’s undoing ultimately came with the rise of the National Socialist regime in Germany, the coming of the Second World War and his disastrous decision to ally his regime with Hitler’s, thereby tying its fate, and his own, to that of Hitler and the Nazis.

While today we might think of Hitler and Mussolini as natural allies, the alliance between Germany and Italy was actually far from a foregone conclusion. 

Indeed, to his credit, Mussolini was initially wary of German National Socialism and indeed of Hitler himself, despite the latter’s professed admiration for, and ardent courtship of, the Italian dictator upon whom he had (partly) modelled himself. 

“Fascism,” Mussolini famously declared, “is not for export” (p240).

“I should be pleased, I suppose, that Hitler has carried out a revolution on our lines. But they are Germans. So they will end by ruining our idea.”

This notion, namely that Germans, by virtue of being German, would inevitably ruin the idea of fascism, even if it ultimately proved prophetic, is obviously crudely jingoistic. Yet such jingoism was entirely consistent with fascist ideology. 

After all, fascism was a nationalist ideology, and nationalist ideologies are intrinsically jingoistic.

Nationalist movements are also, by their very nature, necessarily limited in their appeal to members of a single nation or ethnicity.

A nationalist of one nation is no necessary or natural ally for the nationalist of another, especially if the nations in question share a border. On the contrary, nationalists of neighbouring nations are natural enemies.[6]

Moreover, the fact that Italy was the chief ally and protector of the Federal State of Austria, whose annexation was a major priority of Hitler’s foreign policy, and had herself annexed German-speaking South Tyrol at the end of World War I, certainly did not help matters.[7]

Hitler, however, was to prove an ardent suitor. 

Mussolini would have preferred, Farrell reports, an understanding with the British. (So incidentally would Hitler himself.)

Moreover, initially the British political establishment was surprisingly favourably disposed.

Indeed, Mussolini even counted among his most ardent British admirers one Winston Churchill, who, though then out of office and adrift in what he himself was later to term his ‘wilderness years’, had in 1933 extolled Italian Fascism as a bulwark against Bolshevism, and Il Duce himself as “the Roman genius” and “greatest law-giver among living men” (p225).

Indeed, Farrell reveals that, given his own staunch anti-communist credentials, oratorical ability and personal charisma, Churchill was even touted by some contemporaries as a potential fascist dictator in his own right, his cousin, the communist sympathizer, journalist and suspected Soviet spy, Clare Sheridan, writing in one contemporary piece that:

“Churchill… is talked of as the likely leader of a fascisti party in England” (quoted: p130).

Yet three factors, Farrell reports, ultimately led to Mussolini’s estrangement from Britain. These were: 

  1. Italy’s conquest of Ethiopia;
  2. The implacable hostility of the British Foreign Secretary, Anthony Eden; and
  3. Both Mussolini’s and Hitler’s support for, and assistance to, the Nationalist side during the Spanish Civil War.

Each of these factors, together with Britain’s staunch support for the League of Nations, which had imposed sanctions against Italy over her invasion of Ethiopia, strained Mussolini’s relationship with Britain, and precluded any possibility of an alliance, or even an understanding, between the two powers.

In addition, Mussolini, observing at a distance German diplomatic successes (e.g. the remilitarization of the Rhineland, the occupation of the Sudetenland) and observing up close the Anschluss with Austria, in the face of perceived western appeasement, came, not unreasonably, to view the western democracies of Britain and France as weak and decadent in the face of renewed German militarism, and to see Germany herself as the dynamic rising continental power.

Ultimately, this led Mussolini, reluctantly at first, into the German Führer’s fatal embrace. 

Anti-Semitism 

Hitler is also likely to blame for Italy’s anti-Semitic race laws, introduced in 1938. 

True, Hitler, it seems, exerted no direct pressure on Mussolini with regard to this issue. However, given that Mussolini had been in power a decade and a half without feeling any need to enact such laws on his own initiative, and evidently changed his mind only after he had begun to align with Hitler’s newly-established National Socialist regime in Germany, it seems likely that this alignment was the decisive factor.

However, Farrell claims that the rapprochement with Germany was “not the reason”, only “the catalyst” for this decision (p304). 

The real reason, he claims, was that: 

“Jews had come to epitomise Mussolini’s three enemies: Communism, the bourgeoisie and anti-fascism [since] Jews were prominent in all three” (p304).

This may be true. However, Jews, it should be noted, were also prominent among Fascists themselves. Indeed, Farrell himself reports: 

“More than 10,000 Jews, about one-third of adult Italian Jews, were members of the PNF in 1938” (p303).

Thus, relative to overall population size, Jews were in fact overrepresented among members of the PNF by a factor of three (Italy’s Jews: From Emancipation to Fascism: p44).[8]
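Since neither Farrell nor his cited source spells out the arithmetic behind this factor-of-three figure, here is a minimal back-of-the-envelope sketch of it, using only the numbers quoted above together with the estimate, given in note 8 below, that Jews made up roughly 0.1% of the Italian population. The implied total party membership, N_PNF, is my own back-calculation, not a figure given in either source:

\[
3 \;=\; \frac{\text{Jewish share of PNF members}}{\text{Jewish share of population}} \;=\; \frac{10{,}000 \,/\, N_{\mathrm{PNF}}}{0.001}
\quad\Longrightarrow\quad
N_{\mathrm{PNF}} \;\approx\; \frac{10{,}000}{0.003} \;\approx\; 3.3 \text{ million}
\]

An implied PNF membership in the low millions is at least of the right order of magnitude for the party’s mass membership in the late 1930s, so the quoted figures appear mutually consistent.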

Perhaps most prominent and influential among Jewish Italian Fascists was Mussolini’s long-term mistress, Margherita Sarfatti, a leading Italian intellectual in her own right, who had followed, or perhaps even led, Mussolini from socialism to Fascism, and who plays a prominent role in the first half of Farrell’s biography.

In addition to being Mussolini’s mistress (or rather one of his many mistresses) and a confidante of Il Duce for almost thirty years, she is thought to have been a key and influential figure in the Fascist regime, helping shape policy and decision-making from behind the scenes. 

She was also, Farrell surmises, the only one of Mussolini’s many mistresses whom his semi-literate peasant wife (who was also, Farrell infers from, among other things, her eye colour, the known relationship between her mother and Mussolini’s father, and both sets of parents’ implacable opposition to the union, possibly his illegitimate half-sister: p40) truly “hated” and regarded as a serious threat to her marriage (p73-4).

However, as Sarfatti aged, Mussolini’s ardour faded in parallel with her looks, suggesting that her hold over him had always been primarily sexual rather than intellectual. The breakdown of this relationship was likely a key factor in paving the way for both the pact with Germany and Italy’s race laws.

Mussolini also, Farrell reports, saw the Jews as harbouring “secret loyalties that conflicted with Fascism”, much like the Freemasons, themselves less fashionable victims of persecution under both German National Socialism and Italian Fascism (p304). 

Farrell attempts to downplay the extent of persecution to which Jews were subject in Fascist Italy and absolve Mussolini of any culpability in the Holocaust. 

Thus, he insists, Italy’s anti-Semitic laws “did not involve violence at all” (p310), and he concludes: 

“Although not anti-Semitic, Mussolini became increasingly anti-Jewish” (p304).

However, Farrell never really explains what exactly the difference is between these two surely synonymous terms.

Farrell also emphasizes that Mussolini’s racism was not biological but “spiritual” in nature (p305). In other words, it was not Hitlerian, but rather Spenglerian or perhaps even Evolian.

If this is intended as a defence of Mussolini, then it rings decidedly hollow.

That the Italian dictator’s dislike of them reflected not biological but purely cultural factors was presumably scant consolation to those Jews expelled from their jobs on account of their Jewishness, even if the criteria for qualifying as a Jew were less inclusive, and more open to exemptions and corrupt interpretation, than in Germany.

Indeed, personally, as long-term readers of this blog, or of my Amazon and Goodreads book reviews (assuming any such people actually exist), may be aware, I am actually not, in principle, entirely unsympathetic to biological theories of race and of race differences.

Of course, National Socialist racial theories were indeed nonsense. However, in purporting to be biological, and hence scientific (even if this claim was disingenuous), they at least had one benefit over so-called ‘spiritual’ theories of race, namely that they could, at least in principle, be the subject of testing and hence falsification.

Indeed, to the extent the Nazis viewed Jews as inferior, then their theories were not merely in principle falsifiable, but have indeed been falsified, at least with respect to intelligence differences.[9]

In contrast, the so-called ‘spiritual racism’ of Spengler, Evola and, it seems, Mussolini, which admits exceptions whereby an ethnic Jew can be ‘spiritually’ Aryan, and vice versa, seems to me to be wholly unfalsifiable mysticism.

In conclusion, Farrell quotes the historian Renzo De Felice, himself, incidentally, of Jewish ancestry, as observing:

“Mussolini’s campaign against the Jews ‘was more against the Italians than against the Jews’” (p304).

This may be true. However, I doubt either Farrell or De Felice would deny that it was the latter who ultimately ended up paying the greater price.  

The Holocaust 

On the other hand, Farrell does a good job of absolving Italians as a whole from any culpability in the Holocaust.

There appears to have been little popular anti-Semitic sentiment in Italy at this time, at least as compared with other European countries in the same period, probably because Italy’s Jewish community was comparatively small in number, long-established, and well integrated.

As a consequence, Mussolini’s race laws, introduced in 1938 in apparent imitation of Germany, had proven unpopular among the public, and even among many leading Fascists. Indeed, they are even said, along with the ongoing war, to have contributed to a decline in popular support for the regime in the years following their enactment.

Thus, unsurprisingly, when Nazi Germany occupied Northern Italy, established a puppet government nominally under leadership of the deposed Duce, and quickly began rounding up Jews for deportation and ultimate massacre, this too was unpopular.

As a consequence, many Italians passively or actively resisted the ongoing genocide. Italian government officials, ordered to round up Jews for deportation, often refused to comply or were deliberately obstructive. Many Italians, and indeed the Vatican itself, hid and protected Jews.

Mussolini himself, however, emerges rather less unscathed. 

On the one hand, Mussolini did indeed order the rounding up and deportation of Jews in accordance with German orders in the last years of the war.

However, by this stage, he was little more than a nominal puppet leader, with little power to act independently of, let alone in defiance of, his German backers. He therefore had little say in the matter.

On the other hand, Mussolini made little effort to ensure these orders were actually complied with and enforced, and tolerated or overlooked the refusal of many officials to comply with these orders and indeed the efforts of others to deliberately defy them. 

Thus, reading between the lines, Mussolini seems to have been largely indifferent to the fate of the Jews.

Certainly, even on the evidence presented by Farrell himself, his own claim that “Mussolini did much to save Jews from Hitler” seems wholly unwarranted (p363). 

The most Farrell manages to prove is that Mussolini was far less anti-Semitic than was Hitler himself, hardly a great achievement or grounds for praise. 

World War II 

It is perhaps from World War II that the popular image of Mussolini as an inept and buffoonish figure emerged. Partly, this reflected Allied propaganda. However, despite Farrell’s attempted rehabilitation of Il Duce, Mussolini’s conduct of the war does indeed seem inept from the start. 

Thus, before the War began, Mussolini made, arguably, his first mistake, agreeing to the Pact of Steel with National Socialist Germany, which obliged him to come to the latter’s aid even in the event of an aggressive war initiated by Germany herself (p317).[10]

Then, after the War had indeed begun in just this way, Mussolini conspicuously failed to come to Germany’s aid, in direct contravention of her newly acquired treaty obligations. 

Mussolini justified this decision on the grounds that Italy was not yet ready for war. In this assessment, he was surely right, as was proven tragically true when Italy did enter the war, with disastrous consequences, both for Mussolini’s own Fascist regime, and, arguably, for National Socialist Germany as well. 

To his credit, then, Mussolini had not, it seems, made the classic error of ‘falling for his own publicity’. He knew that his own militaristic braggadocio and podium strutting were mere empty bluff, and that war with Britain and France was the last thing that the Italian armed forces, or the Italian state, needed at this time.[11]

However, on witnessing Germany’s dramatic defeat of France, Mussolini suddenly decided he wanted to get in on the action – or rather in on the spoils.

Greedily and rather transparently anticipating a share of the territory of the conquered French, he suddenly and belatedly signed up for the war, albeit at right about the same time that Hitler had (seemingly) already won it and hence had no further need of him.

As a result, he got none of the territorial gains he so eagerly anticipated, the relevant parts of French territory having already been promised to the new Vichy regime as part of the Franco-German armistice of 1940 which brought an end to the fighting.

Now, however, for better or worse, Mussolini had thrown in his lot with the German Führer. Italy was now in for the long haul, and Mussolini’s own fate directly tied to that of the German war machine. Henceforth, Mussolini’s Italy would find itself relegated to the role of junior partner to the German behemoth, over time increasingly surrendering any capacity for independent decision-making.

Mussolini did, however, make one last attempt to assert independence from the German war machine. Chagrined that Hitler kept invading foreign powers without consulting his ostensible ally, Mussolini decided to do the same himself, invading Greece in emulation of his ally, and thereby seeking to shift the focus of the war towards the Mediterranean, where his own territorial ambitions were naturally, and quite sensibly, focused.

However, the attempt to assert independence backfired disastrously. His invasion easily rebuffed by the Greeks, Mussolini was forced to call for help from the very Germans whose military successes he had so envied and sought to emulate.

Moreover, the delay to the proposed invasion of the USSR that Germany’s intervention on Italy’s behalf in Greece necessitated has been implicated as a key factor that ultimately doomed Operation Barbarossa, and hence led, ultimately, to the fall of both dictators.

Farrell does convincingly establish that, in his disagreements with Hitler regarding the conduct, strategy and overall direction of the war, Mussolini was, perhaps surprisingly, often more strategically astute than the more militarily-minded Führer, who, despite his remarkable early military successes (or indeed perhaps because of them), had become increasingly detached from reality, inflexible in his strategic thinking and unwilling to listen to criticism.

Thus, most military historians would agree that shifting the focus of the war effort towards the Mediterranean, as Mussolini advocated, was a sound strategic policy, not only in Italy’s own strategic interests, but also that of Germany and the Axis powers as a whole.

This would have allowed the Axis powers to secure their vulnerable southern flank, which Churchill would later aptly identify as Europe’s ‘soft underbelly’, and establish complete control over the Mediterranean, with all the military and economic benefits this would confer.

Certainly, it made much more sense than the disastrous decision to open up an entirely new front, and provoke a formidable new enemy, by invading the mighty Soviet Union.

But, alas, it was to no avail. Hitler was no more willing to listen to the wise counsel of his Italian counterpart than he was to listen to that of his own senior generals and commanders.

Instead, Hitler had his sights firmly fixed on the invasion and conquest of the detested Judeo-Bolshevik Soviet regime in Russia, and on the perceived German geopolitical imperative of living space in the East, and would brook no delay or postponement, let alone cancellation, of these plans in order to secure his vulnerable southern flank.

Ultimately, Farrell is successful in explaining why Mussolini did what he did in World War II given the limited information available to him at the time and the difficult predicament in which he increasingly found himself. 

However, he fails to overturn the established view that these decisions were, in the long term, disastrous miscalculations.

Ciano – Diarist and Dilettante

Not only was Mussolini often more strategically astute than the Führer, he was also, Farrell shows, far more strategically adept than his foreign minister and son-in-law, Galeazzo Ciano.

The latter plays a prominent role in the second half of Farrell’s biography, probably due to the value of his famous diaries as an historical source regarding Mussolini’s thinking, and that of his inner circle, during this critical period.

From initially hero-worshipping his famous father-in-law, Ciano gradually became a firm critic of Mussolini, repeatedly criticizing the latter’s decision-making in his diaries and ultimately betraying him.

Yet, in Farrell’s account, Ciano emerges as a political dilettante, a playboy, and a hypocrite – “the spoilt child of the regime” – who was always unpopular with the public (p322).

Thus, while, in his diaries, Ciano criticizes Mussolini’s decision to ally with Germany, and while, in the post-war period, according to Farrell, “a whole industry sprouted up on the basis of his famous diaries which would have us believe… that Ciano tried to stop the Pact of Steel”, the truth was that Ciano was no more than “the Duce’s yes man, however much whinging he did in private” (p316-7).

Moreover, though he was indeed often critical of the alliance with Germany, his views seemingly changed by the day. Thus, Farrell reports, despite his earlier criticism of the Pact of Steel, “as soon as Germany started winning easily in the west in the spring of 1940 he was all in favour of Germany again” (p322). He was also, Farrell reports, a chief champion and proponent of Italy’s disastrous invasion of Greece (p340).

Indeed, Farrell does a far better job of showing that Ciano was even more incompetent and inconsistent in his strategic pronouncements than Mussolini than he does of showing that Mussolini was himself in any way competent.

History is written, it seems, not so much by the victors, or, at any rate, not only by the victors, but also by those with sufficient time on their hands, and sufficient inclination, to put across their own side of things in diaries or other writings that ultimately outlive them. As Churchill was later to put it:

“History will be kind to me for I intend to write it.”

Was Mussolini a Socialist? 

What then of Farrell’s second revisionist claim: Did Mussolini really always remain a man of the Left until his dying day?

Certainly, both Fascism and Mussolini seem to have begun on the Left.

Mussolini’s own journey from the Left began when he advocated Italian involvement in the First World War, contrary to the doctrine of the Second International.

Yet, in this, Mussolini was merely following in the path trodden by socialists across Europe, who, caught up in the prevailing mood of nationalism and war-fever, abandoned the internationalism and pan-proletarian solidarity of the Second International en masse, to come out in support of, and march to their deaths in the service of, their respective nation’s war-efforts.[12]

Thus, as had occurred so often before, and would occur so many more times in the future, idealism and internationalism came crashing down in the face of nationalism, ethnocentrism and war fever. 

Mussolini himself thus came to believe in the power of nationalism to move men’s souls in a way that appeals to mere economic class interests never could. He came to believe that:

“Nation had a stronger grip on men than class” (p61).

As sociologist-turned-sociobiologist Pierre van den Berghe was later to put it in his excellent The Ethnic Phenomenon (which I have reviewed here):

“Blood runs thicker than money” (The Ethnic Phenomenon: p243).

Thus, Mussolini and the early Fascists, like the early, pre-Hitler German Workers’ Party that was later to become the NSDAP, sought to combine socialism with nationalism.

In addition, Mussolini also came to believe that, just as the Bolshevik revolution in Russia would never have been brought about without Lenin, so socialist revolution in Italy would require an elite revolutionary vanguard.

Yet this was contrary to orthodox Marxist doctrine which insisted that the coming revolution would be brought about by the proletariat as a whole, and was, at any rate, historically inevitable, such that no elite revolutionary vanguard would be necessary. 

In this assessment, Mussolini was surely right. The Bolshevik revolution would indeed surely never have occurred without Lenin as its catalyst and driving force.

Thus, when, in 1917, Lenin arrived by train in Petrograd, courtesy of the German government, even the vast majority of his fellow Bolsheviks were resigned to a policy of support for the newly-established provisional government, as were the Mensheviks, who, despite their name, probably outnumbered the Bolsheviks, not to mention the Socialist Revolutionaries, who outnumbered both. Lenin was, at first, almost alone in advocating armed revolution. Yet this policy was ultimately to prove successful.

Ironically, then, as Isaiah Berlin is said to have observed, the much-maligned ‘Great Man Theory of History’, as famously espoused by Thomas Carlyle, became perennially unfashionable among historians at almost precisely the moment that, in the persons of first Lenin and later Hitler, it was proven so terribly and tragically true.[13]

However, recognizing the need for an elite revolutionary vanguard also led Mussolini to question another key tenet of Leftism, namely belief in the equality of man.

In other words, if an elite revolutionary vanguard was indeed necessary to bring about socialism, then this suggested that this elite vanguard represented a superior caste of men. This, ironically, undermined the entire basis for socialism, which presupposed human equality.

This led Mussolini to Nietzsche and ultimately to Fascism, Mussolini himself being quoted by Farrell as explaining to a visiting American journalist during the 1920s that: 

“Nietzsche had ‘cured me of my socialism’” (p30).

Yet Farrell insists that Mussolini nevertheless remained, in some sense, a socialist even thereafter, and indeed throughout his political career. Thus, he writes:

“Mussolini was never a democrat. But much of him was and remained a Socialist” (p39).

However, in making this claim, Farrell is not entirely consistent. Thus, explaining the adoption of the black Arditi flag by the fascist faithful, he explains:

“Red was the colour of the enemy – Socialism” (p80).

However, on the very next page he claims:

“Fascism was anything but a right-wing movement. The first Fascist programme… reflected the preponderance of the futurists and was very left-wing” (p81).

These different claims, only a page apart, are difficult to reconcile with one another.

Perhaps, in referring to socialism as “the enemy”, Farrell has in mind ‘Socialism’ with a capital ‘S’ – i.e. the programme of the Italian Socialist party. On this view, the Socialists might be the enemy of Fascism precisely because both movements were left-wing and hence competed in the same political space for the same constituency of support.[14]

However, Farrell does not employ capitalization in any such consistent manner and also capitalizes ‘Socialism’ when referring to Mussolini’s own beliefs (e.g. p39: quoted above).

Mussolini’s eventual return to his leftist roots, Farrell reports, comes only much later, after his overthrow and dramatic rescue by the Germans, with the establishment of the short-lived Italian Social Republic in Northern Italy under German patronage.

By then, however, Mussolini was a mere German puppet, and any socialist pretensions, or indeed pretensions to any sort of action independent of, let alone in defiance of, his German National Socialist patrons, were wholly ineffectual.

Defining Fascism

To decide whether Fascism was a left-wing movement, we must first define what we mean by ‘fascism’. Unfortunately, however, the meaning of the word ‘fascism’ changed a great deal over time.

The word ‘fascism’ derives from the Italian word ‘fascio’, meaning ‘a bundle of sticks’, in particular the fasces, a symbol of power and authority in ancient Rome.

Amusingly, it seems to be cognate with the word ‘faggot’, now chiefly employed as a highly offensive pejorative Americanism for a homosexual male, but which originally meant a bundle of sticks.

The political usage seems to derive from the notion that several sticks bound together are stronger than one stick alone, hence emphasizing the importance of national solidarity and collectivism.

Collectivism, in particular the subordination of the individual to the state and nation, was, of course, a key tenet of fascist ideology. However, collectivism is also a key element of virtually all forms of left-wing socialist ideology (except perhaps certain forms of anarchism), where it is class interests, and the interests of communist society as a whole, that take precedence.

With regard to situating fascism on the left-right political spectrum, it is certainly the case that, like Mussolini himself, Fascism began on the left.

Indeed, among the first political groups to style themselves ‘fascist’ were the peasant Fasci Siciliani, who unsuccessfully fought for peasant land rights in Sicily in the late nineteenth century.

Indeed, even the first incarnation of Mussolini’s own brand of fascism, namely the Fasci of Revolutionary Action, founded by Mussolini in 1914, was very much left-wing and revolutionary in orientation, being composed, in large part, of syndicalists and other disgruntled leftists estranged from the mainstream Italian left (i.e. from the Italian Socialist Party).

Left-wing political parties generally prove to be less radical on assuming power than they formerly promised to be while in opposition. However, Mussolini’s (and Fascism’s) own move from the left began long before they ever even came within distant sight of power.

Thus, even as early as 1920, after humiliation at the polls during national elections the previous year, Farrell himself acknowledges:

“Most of the Fascists of the first hour – especially those of left-wing origin – had gone… [and] fascism… moved right” (p95).

Thus, while fascism was initially anti-clericalist and associated with revolutionary Syndicalism and the Futurist movement, it ultimately came to be associated with Catholicism and traditionalism.

Thus, the meaning of the word ‘fascism’ evolved and changed along with the policies of the regime itself.

‘Fascism’ came to mean whatever the regime stood for at any particular point in time, something that both changed over time and reflected less a coherent, unchanging ideology than it did the shifting demands of pragmatic realpolitik.

Defining the Left

To determine whether fascism was truly leftist, we must define, not only what we mean by ‘fascism’, but also what we mean by ‘leftist’. This is only marginally less problematic than defining ‘fascism’.

Hayek, in his celebrated The Road to Serfdom, equates the Left with big government and a centrally planned economy. On this basis, he therefore classes both German National Socialism and Italian Fascism as leftist.

Thus, American political scientist Anthony James Gregor, a leading researcher on the nature of fascism, and of Italian Fascism in particular, reports in his book, The Search for Neofascism: The Use and Abuse of Social Science:

“After the termination of the Second World War, Italian economists affirmed that ‘after 1936 the Fascist government controlled proportionately a larger share of Italy’s industrial base than any other nation in Europe other than the Soviet Union’” (The Search for Neofascism: p6).

Similarly, Patricia Knight, in Mussolini and Fascism: Questions and Analysis in History, affirms:

“By 1939 the Italian state controlled four-fifths of shipping and shipbuilding, three-quarters of iron and half of steel, while as a result of the 1936 Banking Reform Act, the Bank of Italy and most other large banks became public institutions. By 1939 Italy had the highest percentage of state-owned enterprises outside the Soviet Union” (Mussolini and Fascism: p65).

However, leftism is usually associated, not only with big government, a large public sector and a planned economy, but also with redistribution and egalitarianism. In this latter sense, Italian Fascism was not especially leftist.

On the other hand, anti-Semitism has always seemed to me fundamentally leftist.

Thus, Marxists believe that society is controlled by wealthy capitalists who control the mass media and oppress and exploit everyone else. Anti-Semites, on the other hand, believe society is controlled by wealthy Jewish capitalists who control the mass media and oppress and exploit everyone else.

The distinction between Marxism and anti-Semitism is, then, a surprisingly narrow one. Anti-Semites insist that our capitalist oppressors are largely or, in some especially deranged versions, wholly Jewish in ethnicity.

Orthodox Marxists, on the other hand, take no stance on this matter either way and, frankly, prefer not to talk about the matter.

Thus, a nineteenth-century German socialist slogan famously proclaimed:

“Antisemitism is the socialism of fools.”[15]

Or, turning this reasoning on its head, columnist Rod Liddle amusingly asserts:

“Many psychoanalysts believe that the Left’s aversion to capitalism is simply a displaced loathing of Jews” (Liddle 2005).

On this basis, one might indeed argue that national socialism is a form of socialism.

However, as we have seen, anti-Semitism was, at least prior to Italy’s ill-fated alliance with Germany and the passing of Italy’s race laws, never an integral part of Italian Fascism.

Defining the Right

If Fascism cannot then unproblematically be described as a phenomenon of the left, can we then instead characterize it as a phenomenon of the right?

This, of course, requires a definition of ‘the right’. Unfortunately, however, defining what we mean by ‘the right’ is even more difficult than defining the left. 

For example, a Christian fundamentalist who wants to ban pornography and abortion has little in common with, on the one hand, a libertarian who wants to decriminalise prostitution and child pornography, or, on the other, a eugenicist who wants to make abortion, for certain classes of person, compulsory. Yet all three are classified together as ‘right-wing’, even though they have no more in common with one another than any one of them does with a raving, unreconstructed Marxist.

The right, then, is defined as, in effect, anything that is not the Left.

As Steven Pinker puts it, the left is like the South Pole. Just as, at the South Pole, all directions lead north, so, at the Left Pole, all directions lead right.

Therefore, ‘right-wing’ is itself a left-wing term – because it defines all political positions by reference to the extent to which they diverge from a perceived leftist ideal.

Thus, debating whether Fascism was really an ideology of left or right simply exposes the inadequacy of this one-dimensional conception of politics, whereby all political positions are situated on a single left-right axis.

A Third Way?

Rather than self-identifying as of ‘the Right’, Fascists themselves often affect to reject any simplistic situating of their views as either of the left or of the right. Instead, they insist that they have moved beyond left and right, transcended the left-right political divide, and represent instead a Third Position or Third Way.

This leads Farrell to propose an especially provocative analogy in his Preface, where he writes:

“Whereas communist ideas appear terminally ill, the Fascist idea of the Third Way lives on and is championed by the standard bearers of the modern Left such as New Labour in Britain” (pxviii).

Unfortunately, however, Farrell never really gets around to expanding on this single throwaway sentence in his Preface.

On its face, it appears to rest on little more than a curious convergence of slogans – namely, that both Fascism and New Labour claimed to represent a Third Way.

However, each meant something quite different by this term.

Thus, for Mussolini the Third Way (or ‘terza via’), namely Fascism itself, entailed nationalism, abrogation of individual rights to the needs of the nation, and totalitarian dictatorship.

In contrast, much though the notion of totalitarian dictatorship might have appealed to Tony Blair, the objectives of New Labour were altogether more modest in scale.

Indeed, the two regimes differed not only in what their respective ‘Third Ways’ were to involve, but also in their conception of the ‘First’ and ‘Second Ways’ to which they represented themselves as an alternative.

Thus, for Mussolini, the ‘Third Way’ represented an alternative to, on the one hand, Soviet-style communism, and, on the other, western liberal democracy.

For Blair, on the other hand, western liberal democracy was never really in question, and outright communism never really on the table either. Instead, the ‘Third Way’ was envisaged as an alternative to, on the one hand, Thatcherite neo-liberalism and, on the other, the sort of unreconstructed socialism that the Blairites dismissed as Old Labour.

Defining what Blairism or New Labour itself actually entailed is, however, more difficult still, even more difficult, perhaps, than defining ‘fascism’.

This, then, perhaps points to a deeper affinity between the two movements. Both were not so much coherent ideologies as glorified marketing campaigns – triumphs of spin over substance.

Defining what either actually stood for, as opposed to merely against, is almost impossible.

‘Fascism’ and ‘New Labour’ represented, then, little more than catchy political slogans that tapped into the zeitgeist of their respective ages, new words for not especially new ideas.

Indeed, Mussolini, himself a former journalist (and a very successful one at that), can perhaps lay claim to being the first politician to successfully manipulate modern media to manage his own public image – the first truly modern politician.

As for Farrell’s comparison between Fascism and New Labour, this, one suspects, reflected little more than a marketing campaign of Farrell’s own.

Farrell, himself also a journalist, was using a provocative quote to attract media attention, publicity, controversy and hence, so he hoped, sales for his new book in Blair-era Britain.

Today, less than twenty years later, it already seems strangely anachronistic, as New Labour has itself gone the way of fascism, into the dustbin of history (at least for now), to be replaced, in the Labour Party at least, with a return to unreconstructed ‘Old Labour’ socialism, albeit now buttressed with a new, even more moronic, cultural Marxist ‘wokeism’ and deranged feminism.

Indeed, on the evidence of some recent Labour Party leaders, even “communist ideas” may no longer be as “terminally ill” as Farrell once so confidently predicted.

This, however, merely reinforces my suspicion that any attempt to draw analogies between fascism and contemporary political movements or regimes is ultimately unhelpful and reflects little more than a version of guilt-by-association or what Leo Strauss aptly termed the reductio ad Hitlerum.

Fascism certainly has little in common with the contemporary Left, despite the efforts of some conservatives to prove the contrary. However, as a nationalist and fundamentally anti-individualist ideology, it arguably has even less in common with the individualist and globalist ethos of contemporary neoliberalism and neoconservatism, let alone libertarianism.

As George Orwell wrote only a year or so after the defeat of both National Socialist Germany and Fascist Italy:

“The word fascism has now no meaning except in so far as it signifies ‘something not desirable’.”[16]

So let’s stop using the word ‘fascist’ as a slur against our political opponents and restrict its use to an historical context.[17]

___________________

Endnotes

[1] The continued practice of slavery in Ethiopia was indeed among the pretexts employed by the Italians to justify their invasion and conquest. (The British had justified their own earlier conquests in Africa under the same pretext.) Moreover, the Italians did indeed pass the first laws formally abolishing the practice of slavery in Ethiopia, though the extent to which these laws were enforced, or represented a mere propaganda exercise, seems to be in some dispute.

[2] Imprisoning or exiling large numbers of low-level mafia soldiers and associates will not only have taken those individuals themselves out of operation but also likely have deterred others from taking their places. In contrast, making only a few high-profile arrests of prominent bosses, while it may attract media attention, is likely only to result in other formerly lower-level mafiosi eagerly lining up to fill the vacancy.

[3] Other, more genuine, Italian anti-fascists, who had indeed fought against the fascist regime, tended to be communists, whom the American (and British) occupying forces were hence loath to promote to high office. In addition, whereas the stronghold of the Mafia has always been Sicily, and other powerful Italian criminal syndicates (e.g. the ’Ndrangheta and Camorra) are likewise each based in regions of the Southern Italian Mezzogiorno, the Italian communists were strongest in the relatively more industrialized regions of Northern Italy. This ‘unholy alliance’ between the Americans, the Mafia, and, later, the Catholic Church and conservative Christian Democratic Party soon came to be almost institutionalized in post-war Italian politics, as, during the Cold War, the American government, together with Italian conservatives, opted to ally with the Mafia as the ‘lesser of two evils’ against Italy’s powerful Communist Party, which, in post-war Italian politics, often seemed on the verge of winning power at the national level.

[4] Obviously, in a literal sense, Mussolini did not make the trains run on time, at least not always. Indeed, no regime, howsoever efficient its transport system, has ever successfully ensured that all its trains always run on time, an obviously utopian aspiration.
Rather, the claim seems to be intended metaphorically, and to apply not just to the rail system, but rather to a perception of general improved efficiency in government and society at large under the Fascist regime, at least as compared to what was usual in Italy both before the Fascists had come to power and after they had been removed.
Thus, the general impression one gets from Farrell, who seems by no means blind to the faults of his adopted homeland and its people, is, not so much that Mussolini’s regime was highly efficient, either by European or international standards, but rather only that it was marginally less inefficient than most prior or subsequent Italian governments have been.
Interestingly, I recall reading, but have been unable to source, the suggestion that this oft-repeated adage about ‘the trains running on time’ under Mussolini in fact originated as a reference to the fact that Mussolini himself did not directly participate in the celebrated ‘March on Rome’ by which he and the PNF took power, but rather pragmatically opted to remain in Milan near the Swiss border, allegedly so as to escape and seek sanctuary abroad should this bold power grab fail and the authorities order his arrest, instead arriving in Rome only later, at the invitation of the then-King – and by train.
On this view, this famous reference to ‘the trains running on time’ might almost qualify as something of a backhanded compliment, alluding as it does to Mussolini’s perceived cowardice in not participating in the March on Rome himself. Indeed, most participants in the so-called ‘March on Rome’ had in fact arrived by rail, Farrell reporting:

“Most of the Fascists who marched on Rome had not marched at all but arrived at the assembly points by train like football supporters for an away game” (p118).

[5] Interestingly, Hitler’s Nazi regime too signed a concordat with the Catholic Church, which, like the Lateran Treaty in Italy, continues to govern relations between the Catholic Church and the state in Germany to this day, with German bishops taking an oath of loyalty to the German state on assuming office and agreeing to forgo participation in party politics.

[6] Thus, for example, Irish nationalists and British nationalists are natural enemies, as are Pakistani and Indian nationalists, and Turkish and Greek nationalists. Indeed, as far back as the third century BCE, the Arthashastra, the ancient Indian treatise on statecraft, observed that next-door neighbours, by virtue of sharing a border, are natural enemies, whereas a state’s next-door neighbours but one, by virtue of sharing a border with one’s immediate neighbours, and hence one’s enemies, but not with oneself, are natural allies. Thus, France and Scotland combined against their common neighbour England in the Auld Alliance, which lasted two and a half centuries, while, during the First World War, Russia and France allied against their common neighbour Germany. The Arthashastra’s observation is sometimes cited as the origin of the famous aphorism, ‘the enemy of my enemy is my friend’.

[7] It is interesting to note that, even when Mussolini did belatedly embrace the idea of a ‘fascist international’, he initially excluded National Socialist Germany from this alliance. Thus, at the 1934 Montreux Fascist International Congress, representatives of the German National Socialist government were conspicuous by their absence. Yet, in contrast, representatives of what was then Hitler’s principal enemy, the Federal State of Austria, then governed by the so-called ‘clerical fascist’ or ‘AustrofascistFatherland Front, were invited and did indeed attend.

[8] This statistic is, I suspect, potentially misleading and probably reflects, at least partly, the higher levels of political engagement of Jews as compared to non-Jewish Italians, rather than any especial affinity towards Fascism. Jews may therefore have been overrepresented among communists and other opponents of the Fascist regime to an even greater degree than they were overrepresented among Fascists themselves.
Moreover, since Jews represented only about 0.1% of the total Italian population at the time, they can hardly be viewed as a key component of the fascist coalition of support, let alone as to any degree responsible for the rise of Fascism.
However, this statistic does at least show both that the Fascist regime was not regarded as at all anti-Semitic during this period, and moreover that Italian Jews were, by no means, universally anti-fascist in their sympathies.

[9] For my own thoughts on more realistic biological theories of race, see here, here and here.

[10] I recall reading somewhere the fanciful suggestion that Mussolini only agreed to this onerous, or at least unusual, condition because, as a former language teacher who regarded himself as sufficiently proficient in German to forgo the need for a translator, or an Italian translation of the treaty’s provisions, he arrogantly refused any such assistance, and hence simply failed to understand exactly what it was he was agreeing to. However, I have been unable to source this claim, which, though it makes for an amusing story, is highly doubtful, and, if I have not myself entirely imagined it, surely apocryphal, since it is hardly likely that any competent statesman would ever sign so important a treaty without having his advisors, in addition to himself, meticulously review its contents.

[11] Although remembered as a disciple of his compatriot Niccolò Machiavelli, Mussolini, with his militaristic braggadocio and strutting, had perhaps here imbibed, or, more likely, independently hit upon, the teaching of that other great guru of military strategy and statecraft, Sun Tzu, who famously advised military leaders:

“The most powerful tool of a leader is deception. Appear weak when you are strong, and strong when you are weak.”

Thus, just as a powerful commander should fake weakness in order to lull his enemies into a false sense of security before attacking them, or even thereby provoking them to attack first, so a militarily-weak power like Mussolini’s Italy is advised to feign military strength and power in order to deter potential enemies from attacking.
However, it is likely that Mussolini’s own militaristic braggadocio and strutting was intended at least as much for internal consumption within Italy as on the international stage. Certainly, few foreign leaders seem to have been taken in, except perhaps Hitler, who indeed sought out an alliance with Fascist Italy despite her military weakness.

[12] In this respect, Italy was, Mussolini and the nascent Fascist movement excepted, something of an outlier, since here the leading socialist party, the Partito Socialista Italiano, did indeed stand true to the ideals of the Second International by opposing Italy’s entry into the War, even though there was, by this time, to all intents and purposes, no Second International left to which to remain true.

[13] To be clear, I do not here endorse the strong version of great man theory, as supposedly advocated by Carlyle, whereby historical research is to focus on so-called ‘great men’ to the exclusion of all other factors. On the contrary, the impact of ‘great men’ is, I believe, much less important than that of social, economic, ecological, environmental and biological factors.
The overemphasis on the impact of ‘great men’ in some popular histories has, I suspect, more to do with literary conventions, which require narratives to focus on the adventures and travails of heroes and villains and other human interest factors, in order to attract an audience, than with an objective appraisal of history. Such a focus is indeed, in my view, quite unscientific.
However, as the undoubted impact of such figures as Lenin and Hitler, and many others (including Mussolini himself), on history amply demonstrates, ‘great men’ do indeed, at least sometimes, have a major effect on human history, and such factors cannot be entirely ignored or ruled out by the serious historian. To rule out a priori the possibility of individual personality having a major impact on historical events strikes me as dogmatic and almost as unscientific as focussing exclusively on such factors.
Of course, in referring to both Lenin and Hitler as ‘great men’ I am not using the word ‘great’ in a moral, acclamatory or approving sense, but rather in the older meaning of the word, referring to the ‘great’ (i.e. massive) impact that each had upon history. This exculpatory clarification we might helpfully term the Farrakhan defence.

[14] Inevitably, it is parties of similar ideological persuasion that are most in competition with one another for support, since both will be attempting to attract the same core constituency of supporters. Relatedly, I am here reminded of a quotation attributed (possibly apocryphally) to Winston Churchill who, when a newly elected MP, surveying for the first time the benches opposite, remarked, ‘So, that’s the enemy’, only to be told by an older colleague, ‘No, that’s the opposition. The enemy sits behind you’.

[15] Actually, as an avowed opponent of socialism and Marxism (albeit one who recognizes a certain usefulness and truth in the Marxist analysis of capitalist society), I would think it would be more accurate to state:

“Socialism is the socialism of fools. Anti-Semitism is the socialism of other fools.”

[16] Of course, if we are being pedantic, Orwell was obviously exaggerating. Not all things that can be described as ‘not desirable’ can also be described as ‘fascist’. For example, one might well consider an unhealthy and not very tasty meal to be undesirable, but not even the most deranged leftist or unreconstructed Marxist would be likely to describe food as ‘fascist’, though this would admittedly make for a good parody of some of the worst excesses of leftist and Marxist rhetoric.
Employment of the word ‘fascist’ is thus generally restricted to the political sphere, though the charge may also be levelled at anyone perceived as exercising some degree of authority in a given situation, especially if they are perceived as having too much authority in this matter or as exercising this authority in a manner that displeases the person employing the term.
More specifically, ‘fascist’ is generally employed only in respect of political positions, groups or regimes that are perceived as excessively authoritarian, restrictive of individual liberties, nationalist or right-wing.
However, these terms are themselves often imprecisely, and very expansively, defined and understood. Indeed, as I have discussed above, the term ‘right-wing’ is especially broad and imprecise in meaning, in ordinary usage conflating many divergent and conflicting political positions, and is hence almost as problematic as the word ‘fascist’ itself.
Interestingly, Orwell’s observation is echoed by at least one leading post-war theorist of fascism (and alleged fascist sympathizer), namely Anthony James Gregor, who believes that the word, when properly applied (i.e. as he applies it), does in fact have a precise meaning, and indeed even an internal philosophical coherence, but nevertheless acknowledges that, in ordinary colloquial usage, it is hopelessly ill-defined. Thus, Gregor writes:

“Like some other terms in contemporary political use, the term ‘fascist’, as used in ordinary speech, is almost entirely without substantive meaning or specific reference” (The Search for Neofascism: p12).

[17] I am here advocating that the word ‘fascism’ be confined in usage to the early- to mid-twentieth-century Italian political movement and ruling regime, and perhaps a few contemporaneous copycat movements that explicitly described themselves as ‘fascist’ (e.g. the BUF in the UK). Even describing the National Socialist movement and regime of Germany in the mid-twentieth century as ‘fascist’ seems to me unhelpful and potentially misleading, since, despite some commonalities, German National Socialism was, in many respects, a quite different and distinctively German phenomenon, and German National Socialist leaders such as Hitler, much though he may have admired and even partially modelled himself on Mussolini, did not, to my knowledge, ever self-identify as ‘fascists’. Instead, the employment of the term ‘fascist’ to describe Nazi Germany seems to have begun among opponents and critics of the regime, in particular among Marxists and in the Soviet Union.

Anthropology Meets True Crime: ‘Pimp Philosophy’ and a World Where Men are Truly Dominant – Or Are They? 

Black Players: The Secret World Of Black Pimps by Richard Milner and Christina Milner (New York: Bantam Books, 1972)

To validate flawed sociological dogmas such as cultural determinism and feminism, generations of American anthropologists have bravely ventured into remote deserts, jungles and other dangerous, primitive and inhospitable corners of the globe in an effort to discover (or, if necessary, to fabricate) the existence of a society in which traditional western sex-roles are reversed.

The enterprise has, I think it is fair to say, proven singularly unsuccessful.[1]

However, way back in the early-1970s, Milner and Milner, two American anthropologists, discovered precisely what their colleagues have been searching for in vain, namely a culture in which sex roles are reversed, right in America’s own backyard – or rather in America’s own backstreets.

This was the underground subculture of pimps and ‘hos’. Here, in stark contrast to the traditional sexual division of labour in western (and indeed many non-western) societies: 

“Women are the economic providers… [whereas] a man may spend hours a day on his hair, clothes and toilette while his women are out working to support the household” (p5).

Another feature of the pimp lifestyle at odds with mainstream American culture is the prevalence of polygyny. Thus, the Milners report that many pimp-ho households are polygynous, being composed of a single pimp and several prostitutes, and that polygyny is regarded as the ideal (p5).

Interestingly, this family structure and pattern of economic activity in many respects parallels that still prevailing in much of sub-Saharan Africa. Thus, in Africa, polygyny is ubiquitous and women perform most agricultural labour (Draper 1989)[2].

One controversial interpretation, then, is that people of black African descent are genetically predisposed to such a mating system since it was adaptive in much of sub-Saharan African, and that African-Americans are simply recreating in America an approximation of the mating system, and economic system, of their African forebears. 

Of course, since pimp culture has now been popularized by generations of ‘gangsta rappers’, the “secret world” promised by the authors in their subtitle may be more familiar to modern readers than it was on the book’s first publication in 1972 (though, even then, blaxploitation films had already introduced the black pimp archetype to the wider public). However, the picture created in rap lyrics is so comically caricatured out of all recognition that the Milners’ exploration of the reality behind the absurd caricature remains as revelatory as ever.[3]

Male Dominance and Pimp Philosophy 

Of course, although women are the economic providers and pimps are concerned with their clothes and appearance, in one crucial respect, conventional sex roles appear to be, not reversed, but rather accentuated in American pimp culture.

Thus, in American pimp culture, male dominance was, the Milners emphasize, absolute and categorical.

However, what the Milners refer to as ‘pimp philosophy’, namely the worldview and philosophy passed down among pimps from mentor to student and described by the Milners in detail, raises serious questions about whether this too, in some respects, represents a reversal of the sex roles apparent in mainstream society, and whether, in ‘square’ society, it is indeed men who are really dominant (see also The Myth of Male Power, which I have reviewed here and here).

Thus, according to the ‘pimp philosophy’: 

“White men (and square blacks) are thought to be ‘pussy-whipped’ by their wives after having been brainwashed by their mothers to accept female dominance as the natural order of things. Most families today are controlled by women, who direct the goals and manage the money… by withholding sexual favours” (p161).

It is indeed the case that, while men work longer hours and earn more money than women, women are known to control the vast majority of spending decisions.

Thus, Martha Barletta reports that women are responsible for about 80% of household spending in modern America (Marketing to Women: p4), while another marketing researcher, Bernice Kanner, reports that women make approximately 88% of retail purchases in the US (see Pocketbook Power: p5).

Thus, according to ‘pimp philosophy’, square husbands are ‘pimped’ by their wives every bit as ruthlessly as street-prostitutes, by being obliged to earn money and financially support their wives in return for sexual favours.

Thus, according to ‘pimp philosophy’, the Milners report: “The highest level of prostitution is—the wife!” (p221). 

“Whether the men want to admit it or not, every woman is a ho regardless of what the status is—housewife, nun, prostitutes, whatever you want to say. The Housewife gets longevity, you know. She gets the vacation every year, she gets the security with the fella on the twenty-five-dollar-a-year job. Vacation every summer, the golf club, country club” (p227).

Interestingly, this view of male-female relations directly converges with that of anti-feminists such as Esther Vilar, who expressed similar ideas in her 1971 classic, The Manipulated Man, which I have reviewed here.[4]

For example, one pimp describes how wives supposedly bear children only, or at least primarily, because: 

“She knows once she has one or two babies she’s gonna have him locked down tight and even if he leaves she can still get four or five hundred dollars a month [in maintenance payments] if he’s making any kind of money” (p227).

This parallels Vilar’s description in The Manipulated Man of offspring as “hostages” (the title of one of her chapters), since they are used, like hostages kidnapped in order to make a ransom demand, to extract additional monies from the unfortunate father. Thus, the pimp quoted by the Milners explains:

“His wife is pimping him, see? She gets him to get up every morning, cooks his breakfast to make sure he’s good and strong, gives him his vitamin pills and everything, hands him his little briefcase, you know, so he can get out there and get the buck so she can go play bridge, go get her hair done, understand?” (p229).

The pimp-ho relationship is then directly analogous to the relationship between husband and wife, only with the gender roles reversed. Thus, in the endnote to chapter one, the Milners approvingly quote sociologist Travis Hirschi as observing:

“The similarity of the pimp-prostitute relationship to the husband-wife relationship, with the economic roles reversed, is too obvious to overlook” (p285; Hirschi 1962).

According to the pimps interviewed by the Milners during their research, the process of socializing and indoctrinating males to willingly accept their assigned role as beta providers begins in childhood. Thus, the Milners report:

“Several pimps asserted that pimping comes from Black men being supported by their mothers as kids [in single-parent households] and deciding to continue the arrangement… Most pimps, however, believe that they were raised by their mothers not to be pimps, but to be tricks. ‘Trick marriage’ is seen by the pimps as a man’s servitude to women in exchange for ‘her pussy’” (p174-5).

Thus, since it is mothers who are responsible for most childcare, they indoctrinate their sons from infancy to accept ‘trick marriage’ and female dominance as the natural, normal and healthy state of affairs. Thus, one pimp observes:

“She is, from the time you are a kid, understand, giving you a certain set of values which in reality is a woman’s set of values. She is brainwashing you to the extent of how to treat a woman” (p176).

As a result of this indoctrination: 

“If you are a boy, say twelve years old, and you see Mom and Dad fighting you naturally come to the defense of Mom… [because] from the time you were young, she’s the one who changed your diapers, bathed you, made sure that you were clothed and shoed and everything else, so you naturally come to the defense of Mom. And you forget entirely the fact that it was Dad was the one who made the money that put her in the position to do all these things in the first place. So when you become a man and encounter a woman you automatically accept the values which were taught to you there” (p177).

This again parallels Esther Vilar’s contention that: 

“Men have been trained and conditioned by women, not unlike Pavlov conditioned his dogs, into becoming their slaves.”

Thus, Vilar observes:

“The advice a mother gives to her teenage son going out on his first date is a good example of woman’s audacity: Pay the taxi; get out first; open the door on the girl’s side and help her out. Offer her your arm going up the steps or, if they are crowded, walk behind her in case she stumbles so that you can catch her. Open the door into the foyer for her; help her out of her coat; take the coat to the cloakroom attendant; get her a program. Go in front of her when you are taking your seats and clear the way. Offer her refreshments during the intermissions – and so on” (The Manipulated Man: p40-41).

As a consequence of such early indoctrination, even one otherwise resolutely ‘red-pilled’ player acknowledged:

There are things in me right now that I can’t help that have been conditioned over a period of time. I do things automatically, you know. I open doors for old ladies, and if I go through a doorway, I hesitate and let the woman go first” (p177).

Thus, whereas the family structure of the ghetto has, on account of the prevalence of female-headed households and absent fathers, been characterized by sociologists as matriarchal, black players suggest a more nuanced interpretation:

Although the ghetto leans towards matriarchy, players admit, it isn’t as all-pervasive or as smoothly functioning as the White matriarchy of the majority. For the White man is not even aware that he lives in a matriarchy, while Black men are becoming more sensitive to being pimped by both White society and their own Black women… White men, like Samson, are still sound asleep and unaware that Delilah has cut their hair” (p171).

Indeed, the analogy with ‘red pill philosophy’ and the so-called men’s rights movement is made all but explicit by the Milners when they write: 

Woman’s liberation movement is not revolutionary, say the players. What would be truly revolutionary would be the liberation of men” (p227). 

However, the black players are capitalists at heart and hence reject all political liberation movements, including, not only women’s liberation, but also black liberation: 

In this… the pimp expresses a common ghetto sentiment: ‘Fuck Black power and White power; I believe in green power’” (p223). 

Thus, the Milners recount one anecdote of how:

“[When] a militant black man in the bar loudly proclaimed ‘I’m gonna get my piece and shoot all the whiteys’… another player replied, ‘Don’t do that, brother. Shit, you gonna take all my business away’” (p237). 

The same would apply to the liberation of men. After all, according to pimp philosophy, it is only because:

So-called normal and moral marriage is aberrant… [that] many husbands… pay hos for sex they cannot get at home, which [pimps] point to as the final degradation of the American male under the heel of the almighty bitchy American wife. She not only doesn’t give him what he is paying for, but forces him to go out and also pay some other woman if he wants sex. Often he pays another woman only to have a shoulder to cry on, because the wife loses respect for a man she can dominate and is unhappy in her unnatural unwomanly role as boss” (p175). 

Thus, the Milners envisage one pimp commiserating with the hapless henpecked husband, but then rationalizing: 

But, of course… I wouldn’t have it any other way, trick. Because, without you and your fucked-up illusions, without your fucked-up sex life—I’d be out of business tomorrow” (p251). 

Pimp Philosophy Evaluated 

Pimp philosophy is certainly illuminating and thought-provoking. 

It is moreover undoubtedly more insightful than feminist theory, which represents the dominant paradigm for understanding the relations between the sexes among social scientists, journalists in the mainstream media, the academic establishment, politicians, women’s rights activists and other such professional damned fools. 

Indeed, although they never quite go so far as to endorse it, the Milners themselves are nevertheless clearly taken by what they call ‘pimp philosophy’, and even acknowledge:

Once the world, and particularly the relations between the sexes, is viewed from a black player’s vantage point, things never again seem quite the same” (p243). 

Indeed, according to the Milners, this is hardly surprising. 

Like the sociologist and anthropologist, pimps and hustlers depend for their livelihood on an awareness of social forces and the human psyche… [but whereas] the social scientist rarely applies his knowledge directly, and so has far more leeway than the hustler or the pimp in being wrong before he is out of a job” (p242). 

In other words, unlike feminist sociologists and women’s studies professors (and indeed anthropologists like themselves), who are insulated within the ivory towers of universities at the taxpayer’s expense and can therefore hold fast to their flawed ideological dogmas with blind faith notwithstanding all evidence to the contrary, the pimp’s psychological and sociological analysis is subject to ruthless falsification at the hands of the market forces beloved of neoliberal economists.

However, in claiming that male dominance is the natural state of humanity, pimp philosophy seems, to me, to have taken something of a wrong turn. 

Thus, according to the pimps, male dominance is the natural and harmonious order of mankind, an order disrupted only when, according to ‘pimp mythology’ (an ingenious reinterpretation of biblical mythology), Adam gave in to sexual temptation, allowing Eve to tempt him into biting into the forbidden fruit (i.e. paying for sexual favours), thereby becoming, not the first man, but rather the first trick (p168-70; p259-60).

As a result of this decision to bite into the forbidden fruit, the pimps contend, most men are no longer ‘real men’ but rather mere ‘tricks’. Pimps themselves therefore represent, in the players’ own estimation:

The only real men [left] in America today” (p162). 

Yet viewing male dominance as the natural and harmonious order of mankind necessarily raises the question: If, as pimps contend, male dominance is so natural and harmonious, why is it found, at least in the West today, only among a small and exclusive subculture of pimps? What is more, why, even among pimps, is it maintained only by levels of violence and of self-control far greater than those typically apparent in conventional, so-called ‘square’ relationships?

However, the real flaw in the pimp perception of male dominance as the natural and harmonious state of nature lies in the nature of the pimps’ own dominance over their prostitutes and the lifestyle and occupation of the prostitutes themselves. 

Thus, as the Milners themselves observe: 

“[Although] the Book [i.e. the unwritten code of how to pimp passed from mentor to student] provides a blueprint for a male-dominated society and a rationale for wrestling all control over men from women… ironically, this condition is achieved by making women’s full-time occupation the control of men who are outside the subculture” (p48). 

In other words, the pimp’s exploitation of his women necessarily depends on those women’s own exploitation of other men:

A ho… is both ‘pimping’ off her customers and is being a trick [i.e. being pimped] by her man” (p213). 

The ‘Book’ provides, then, not a blueprint for male domination throughout society, but rather a blueprint for domination by a necessarily small subset of men – an exploitation both of women (i.e. the prostitutes whom the pimp controls) and, indirectly, of other men (i.e. the clients of these prostitutes).

The pimp survives, then, not only through the exploitation of women, but also, more fundamentally, by the vicarious exploitation of other men (namely the prostitutes’ clients, or, aptly named, ‘tricks’). 

Sweet Jones, a character from Iceberg Slim’s famous novel, Pimp: The Story of My Life, succinctly and eloquently summarized the same point: 

A pimp is really a whore who has reversed the game on whores. So Slim, be sweet as the scratch, no sweeter, and always stick a whore for a bundle before you sex her. A whore ain’t nothing but a trick to a pimp. Don’t let ’em georgia you. Always get your money in front just like a whore.” (Pimp: The Story of My Life: xxi).[5]

On this view, with their characteristically feminine concern for clothing, fashion, hair and hygiene and their ability, like housewives, to leech off the income of their sexual partners, pimps represent, not so much, as they themselves contend, “the only real men in America today” (p162), but rather second-rate female-impersonators. 

Endnotes

[1] Indeed, many aspects of sex roles (e.g. sex differences in intraspecific aggression, and in levels of parental care) appear to be, not only cross-culturally universal, but also universal throughout the mammalian order, and indeed widespread among animals in general. This, of course, reflects the fact that they are not only innate, but moreover the product of analogous selection pressures operating among many different species (see Bateman 1948; Trivers 1972). Thus, for example, in all human societies for which data is available, men are responsible for an overwhelming majority of homicides, and also represent the majority of homicide victims. Similarly, in all documented cultures, mothers rather than fathers provide the vast majority of direct care for infants and babies.

[2] This pattern appears to be longstanding and hence deeply ingrained, lending credence to the suggestion that it may reflect an innate racial difference in sexuality and mating systems. Indeed, even among surviving African hunter-gatherer groups, it is female gatherers, not male hunters, who provide most of the caloric requirements of the group, in stark contrast to the situation among arctic hunter-gatherers like the Inuit (Ember 1978).

[3] To illustrate just how comically caricatured public perceptions of the pimp lifestyle have become, it is worth pointing out that, in response to the use of the term in many rap songs, many people seem to believe that a ‘pimp stick’ is, to quote one definition, ‘an ornate or gaudy cane, as might be used by a stereotypical pimp’. In fact, however, pimps traditionally carried no such stick. Instead, the phrase ‘pimp stick’ originally referred, and among pimps presumably still refers, to a weapon composed of “two wire coat hangers twisted together” which is used by pimps as a whip with which to discipline disobedient whores (Whoreson: p212).

[4] In addition to Esther Vilar’s The Manipulated Man and my review of this work, see also Matthew Fitzgerald’s purported update of Vilar’s work, his delightfully subtitled Sex-Ploytation: How Women Use Their Bodies to Extort Money from Men.

[5] Curiously, the Milners claim to have interviewed Iceberg Slim (alias Robert Beck, né Robert Lee Maupin) and refer to this supposed interview at various points in their book. However, Beck himself, without mentioning them by name, denies this in The Naked Soul of Iceberg Slim (p200), where he accuses the Milners of stealing black culture, i.e. what would today be called cultural appropriation. The mysterious interview is supposedly contained in the recently published collection, Iceberg Slim: The Lost Interviews With The Pimp.

References 

Bateman AJ (1948) Intra-sexual selection in Drosophila. Heredity 2(3): 349–368.
Draper P (1989) African marriage systems: Perspectives from evolutionary ecology. Ethology and Sociobiology 10(1–3): 145–169.
Ember CR (1978) Myths about hunter-gatherers. Ethnology 17(4): 439–448.
Hirschi T (1962) The professional prostitute. Berkeley Journal of Sociology 7(1): 33–49.
Trivers R (1972) Parental investment and sexual selection. In: Campbell B (ed) Sexual Selection and the Descent of Man. Chicago: Aldine: 136–179.

Meyer and the Myth of the American Mafia – Cutting Lansky Down to Size 

Robert Lacey, Little Man: Meyer Lansky and the Gangster Life (Boston: Little Brown & Co, 1991).

Robert Lacey’s biography of the infamous Jewish-American organized crime figure, Meyer Lansky, was originally published in 1991 under the title Little Man: Meyer Lansky and the Gangster Life, only to be reissued in 2016 with a new title, Meyer Lansky: The Thinking Man’s Gangster.

This latter subtitle, ‘The Thinking Man’s Gangster’, perhaps accords more with the popular image of Lansky as a kind of nefarious criminal mastermind, and may therefore have helped boost the book’s sales. However, given that Lacey’s biography is actually concerned, to a large extent, with debunking that very image of Lansky, and indeed much of the popular mythology surrounding him, it is the earlier title, ‘Little Man’, that better reflects the book’s actual content.  

It is true that Lansky, despite his diminutive stature, was never, to my knowledge, known by the sobriquet, ‘Little Man’.[1]

However, in a metaphoric sense, Robert Lacey’s biography is indeed very much concerned with ‘cutting Lansky down to size’. 

Debunking Sensationalist Claims 

The history of organized crime in America is a subject that has rarely attracted the attention of serious historians or first-rate researchers. What literature does exist on the subject is largely to be found, not in the ‘history’ section, but rather in the much-maligned, and often justly maligned, true crime section of the library or bookshop, and is typically sensationalist in tone and often historically inaccurate. 

Indeed, Lacey coins a new and apt term for this literary subgenre – “Pulp Nonfiction” (p314).[2]

Thus, inevitably, much of Lacey’s text is concerned with debunking the many myths perpetuated in earlier Mafia histories.

One famous example is the so-called ‘Night of the Sicilian Vespers’, when, according to mafia folklore, and countless previously published mafia histories, a whole succession of Mafia bosses across America were assassinated in a single night in the aftermath of the assassination of Salvatore Maranzano.  

Actually, however, this nationally-synchronized bloodbath never seems to have occurred. Thus, aside from the killing of Maranzano himself, Lacey reports:

systematic study of newspapers in eight major cities in the two weeks before and the two weeks following September 10, 1931, the date of Maranzano’s killing… [revealed] only three reports of similar gang- or racketeer-linked killings – two in Newark and one in Pittsburgh” (p65).

Lucky Luciano and World War II

Another source of much “legend and exaggeration”, Lacey reports, has been the supposed role of then-imprisoned crime boss Charles ‘Lucky’ Luciano in the Allied invasion of Sicily. Thus, Lacey recounts how, in some of the more outlandish accounts:

Lucky Luciano has been pictured hitting the beaches in person, waving triumphantly from atop a tank, and there have been dark tales of planes dropping flags and handkerchiefs bearing the letter L behind enemy lines – signals supposedly from Luciano to local Mafia chiefdoms” (p125).[3]

These claims are obvious make-believe. However, the real story of the cooperation between organized crime and the American government to forestall infiltration and sabotage on the New York docks – cooperation which likely gave rise to the more sensationalist rumours referred to above – is arguably almost as remarkable in its own right.

The impetus was a fire onboard the SS Normandie, a French liner that had been commandeered for military use and was being converted into a troop ship in New York harbour.

With the benefit of hindsight, it is today all but certain that the fire was simply an accident. However, authorities at the time, wary of the threat of infiltration, suspected enemy sabotage, and hence moves were made to establish contact with the underworld figures who were known to control the New York docks in order to forestall any possible recurrence.[4]

This search for underworld contacts on the docks led the authorities ultimately to Luciano, then serving a sentence for prostitution offences in a New York prison. Lansky’s own role in this process was to act as an intermediary, having been recommended for this role by Luciano’s Jewish lawyer Moses Polakoff. 

The result was a remarkable meeting between Luciano, Lansky and representatives of the US Navy in an interrogation room in Great Meadow Correctional Facility, at which Luciano somewhat reluctantly agreed to cooperate. 

Perhaps surprisingly, genuine patriotism seems to have been at least part of the reason both Luciano and Lansky agreed to help out.

Lansky, being Jewish, was obviously no friend to the Nazis; Luciano, meanwhile, may have been unsympathetic to Mussolini’s Fascist regime in his native Italy due to its crackdown on the Sicilian Mafia under Cesare Mori – both, however, also claimed to see themselves as patriotic Americans (though Luciano was soon to be deported).

Whether there was also some implicit quid pro quo agreement whereby Luciano would receive early release after the war in return for his cooperation is not clear. However, Luciano did attach at least one condition to his cooperation – namely, that it remain strictly a secret, lest he be subject to retribution after his envisaged deportation back to Sicily after the War (p119). 

Unfortunately for Luciano, however, after the war he was to discover that he was not the only one who wanted his secret agreement with US naval intelligence to remain very much a secret. On the contrary, with Mussolini’s regime now in tatters and the War very much won, it was now naval intelligence themselves who had every incentive to keep their disreputable secret dealings with organized crime elements very much out of newspaper headlines and the public domain, ultimately to the chagrin of Luciano himself. 

Thus, when Luciano’s attorney, the same Moses Polakoff who had played such an instrumental role in arranging the meeting between Luciano and US naval intelligence representatives, applied for a grant of clemency and commutation of sentence as recompense for Luciano’s wartime cooperation with the authorities, Naval Intelligence, contacted by the parole board for corroboration of Polakoff’s claims but loath to admit their dealings with such a notorious figure, denied ever having been in contact with Luciano (p125-6).

Polakoff did ultimately obtain evidence of his client’s wartime cooperation with the authorities, and, as a result, Luciano was indeed ultimately granted parole, albeit on condition that he not contest his immediate deportation to Italy, from where he was subsequently alleged to have orchestrated the international trade in heroin. 

However, the wartime cooperation between government and organized crime remained a tightly-guarded secret, and it was probably this secrecy, combined with the inevitable leaking of “hints and half revelations” regarding what had occurred, that gave rise to some of the more outlandish claims of Mafia involvement in the invasion of Sicily (p119).

History vs. True Crime 

Yet, if most true crime authors are indeed rightly to be criticized for the quality of their research, then the fault does not lie entirely with them. It also lies, according to Lacey, with the serious historians and researchers who have neglected this area of American history as somehow beneath them. 

Yet the history of organized crime is by no means a matter of peripheral importance in the history of twentieth century America. On the contrary, organized crime in America has had a substantial impact on America’s social, economic, legal, cultural and even its political history.  

Thus, Lacey rebukes his fellow-historians, declaring: 

There is a dire need for objectively analysed data on organized crime, an area which academics have too readily surrendered to the custody of popular entertainment” (p445). 

Gangster or Businessman? 

Unfortunately, however, in exhorting serious historians to research the history of organized crime in America, Lacey could almost be accused of failing to take his own advice, since the subject of his own biography, Meyer Lansky, was, at least in Lacey’s own telling, only really on the fringes of organized crime for most of his adult life.[5]

Indeed, perhaps the most remarkable revelation of Lacey’s biography of the most notorious Jewish gangster of the twentieth century, or perhaps of all time, is that, for most of his adult life, Lansky apparently genuinely regarded himself as no such thing.

Rather, after youthful dalliances as, first, a shtark or “strong-arm man” and perhaps as a pimp, and then in his early adulthood as a prohibition-era bootlegger, Lansky thenceforth cultivated a respectable, or at least semi-respectable, image.  

In his own self-image, Lansky saw himself, not as a gangster, but rather as a businessman – albeit a businessman whose chosen line of business, namely casino gambling, happened to be unlawful. 

This makes large sections of Lacey’s biography rather less exciting in content than one might expect for a book ostensibly in the sensationalist ‘true crime’ genre of literature. Certainly, any reader who goes in expecting dramatic accounts of gunfights, gang wars and the like is liable to be disappointed.

Gang wars and assassinations occur only in the background, and Lacey discounts any notion that Lansky had a role in ordering such assassinations as those of Albert Anastasia or Bugsy Siegel, with which he has sometimes been linked.

Yet, unlike so many other prohibition-era bootleggers who took advantage of the repeal of prohibition to move into the lawful production and distribution of alcoholic beverages or other lawful business ventures, some of whom would ultimately establish themselves as respectable, and sometimes highly successful businessmen, Lansky never did quite ‘go straight’ (p80). 

Neither did he grasp the other main opportunity to ‘go legit’ that presented itself to him over the course of his career, namely in Las Vegas, Nevada, where casino gambling had been legalized in 1931 (p152). 

Instead, as an organizer of gambling activities, Lansky operated in an illegal and illicit industry. As such he could not turn to the police for protection, and had instead to rely on the muscle provided by organized crime. 

However, Lansky appears to have sought to distance himself from this side of the business, which he kept at arm’s length and contracted out, mostly to Italian-American criminals. Nevertheless, in Lacey’s eyes, Lansky still remained a gangster: 

Ethically and practically, the perceived threat of muscle is the same as muscle itself, and all Meyer’s businesses rested ultimately on that threat” (p170). 

Gambling and Other Victimless Crimes 

In keeping with his respectable self-image, Lansky’s casinos were very much respectable institutions – or at least as close to respectable as casinos could be in a jurisdiction where casino gambling was illegal. 

One lesson he had learned in the crap games of the Lower East Side was that the principal ingredient for long-term gaming success is not flashiness but probity. It is easy to fix a roulette wheel or to rig a game of craps… But such tricks can only yield temporary dividends. The moment that serious players sniff the slightest suspicion that the games are rigged against them, they will go elsewhere, and word spreads very quickly. A crap game or casino can be dead in a matter of hours, and once dead, it stays dead. So, as with his bootlegging, Meyer Lansky found himself in an illegal enterprise where enduring success depended on being honest” (p186).

In short, in Lansky’s casinos, just like in the crooked ones, the high rollers would ultimately lose their money. But, unlike in the crooked casinos, they would be fleeced fair and square – and hence keep coming back eagerly for more. 

Lansky took pride in running a clean operation. For all their illegality, and the sinfulness of gambling, his carpet joints were essentially bourgeois establishments” (p143). 

Indeed, it was Lansky’s reputation for probity that led General Batista, the then-dictator of pre-revolutionary Cuba, to invite Lansky to take control of Cuba’s lucrative casino gambling operations, so as to ensure that the games were fair and hence counter the negative publicity in America that had resulted from the fleecing of American tourists.

In accordance with his carefully-cultivated semi-respectable image, Lansky naturally sought to distance himself from other, less reputable, criminal activities besides his chosen vocation of gambling. 

Interestingly, this included not only victimful and violent crimes such as robbery and murder, but also other so-called victimless crimes besides gambling itself, such as prostitution and narcotics. Thus, Lacey reports: 

Throughout his adult career, Meyer Lansky was careful to distance himself from the ‘dirty’ crimes⁠—drugs, prostitution” (p159) 

I haven’t ever dealt in narcotics,” Lacey quotes Lansky as telling a journalist “with a mixture of pride and distaste” (p90).

 As for prostitution, not only did Lansky himself not profit from or involve himself in the trade, but he also strictly forbade prostitutes from frequenting and soliciting within his respectable ‘carpet joint’ casinos.[6]

Yet, ironically, Lacey adduces evidence to suggest that Lansky may have begun his criminal career as a pimp.

The evidence is tentative but tantalizing – each of Lansky’s first appearances before the courts related to violent assaults on women, who themselves, Lacey infers from their addresses, likely worked as prostitutes (p42-3). 

In other words, it appears that Lansky’s pimp hand was strong. 

Frank Costello and ‘Street Activities’ 

However, Lacey’s claim that Lansky was not involved in any other illegal activities besides casino gambling is perhaps brought into doubt by the fact that Lacey also makes a similar claim regarding Lansky’s friend and sometime business partner Frank Costello, claiming that, at least by the 1950s: 

There is no evidence that Frank Costello was involved in street activities like loan-sharking, drug-dealing, or pimping” (p189). 

This, however, in my view, puts the whole matter in some doubt: while this claim may indeed be true of Lansky, it cannot be true of Costello, since the latter was, at this time, at least according to the orthodox mafia chronology, the ‘boss’ (or, in some accounts, merely the ‘acting boss’) of what is today known as the Genovese crime family.

As boss, Costello probably had no need to directly participate in such activities, and almost certainly didn’t, having no wish to ‘dirty his hands’ or risk implicating himself in such a way. 

However, as boss of an American Mafia family, Costello would automatically be entitled to a cut of profits earned by members or associates of his crime family who did engage in such activities. 

Given that many Genovese family members no doubt did engage in such “street activities” as loan-sharking, and very possibly prostitution and drug-dealing as well, this would mean that Costello did indeed profit from, and hence involve himself, albeit indirectly and at arm’s length, in these activities.

Lacey’s claim that Costello was wholly uninvolved in such activities is therefore doubtful. 

The Myth of the American Mafia

This leads to another topic on which Lacey has an interesting take – namely, the existence and nature of what we today habitually refer to as ‘the American Mafia’. 

A recurrent theme of recent histories of the Mafia, in both its Sicilian incarnation and its American offshoot, is that the Mafia was indeed a real criminal organization, and that those who denied the existence of the Mafia were, at best, naïve and misinformed, but, at worst, corrupt collaborators with, lackeys of and apologists for the Mafia itself. 

This, for example, is a recurrent theme in John Dickie’s history of the Sicilian mafia, Cosa Nostra, as well as its sequels, Blood Brotherhoods and Mafia Republic, where those who claimed that the Mafia was not a criminal organization, but rather, in some versions, a mere ‘state of mind’, or ‘attitude of exaggerated individualism and defiance of authority’, come in for repeated condemnation as disingenuous mafia apologists.

Similarly, in recent histories of the American Mafia, FBI boss J Edgar Hoover invariably comes in for criticism for having long denied the existence of the Mafia during the first half of the twentieth century, before being belatedly forced to change his tune after the much-publicized police raid on the Apalachin meeting of mafia bosses in 1957.[7]

For example, in Selwyn Raab’s long and ponderous history of the New York Mafia, Five Families, Harry Anslinger, the then-head of the Federal Bureau of Narcotics, who is today mostly remembered for his hysterically exaggerated claims regarding the malign effects of cannabis, emerges as an unlikely hero, for recognizing the reality of the American Mafia while Hoover himself was still in denial, and not just about his sexuality.

Lacey’s position regarding the existence, or non-existence, of the American Mafia is, however, more nuanced than that of most other mafia historians.

Certainly, Lacey does not deny the existence of the American Mafia. On the contrary, he readily acknowledges:

In the course of the last forty years countless law enforcement agencies, including the FBI, have shown that America is riddled with local associations of Italian malefactors. Mafia is as good a name for them as any” (p203). 

However, Lacey does question what exactly we mean by the term Mafia.  

Thus, he argues that, contrary to popular perception, the American Mafia is not a nationally-organized criminal conspiracy, or, as it was popularly termed, a ‘national syndicate’, but rather a combination of many different local criminal conspiracies and syndicates.  

These disparate local criminal structures and networks may share a common culture, a common vocabulary and even a similar structure. For example, they may use similar terms to refer to one another (‘made guy’, ‘associate’, ‘boss’) and have similar or identical initiation rituals to induct new members.[8]

However, the American mafia is not and never was a single organization with a single nationwide hierarchal structure, as it was sometimes imagined as being. 

Thus, Lacey concludes:

Hoover’s personal position, that the Mafia did not exist, has proved as erroneous as the Kefauver Committee’s belief in a national conspiracy” (p203). 

Defining ‘The Mafia’ 

Ultimately, then, whether the Mafia exists depends on what we mean by the term ‘the Mafia’. 

Indeed, FBI supremo J Edgar Hoover, long infamous for denying the existence of the American Mafia, took advantage of this semantic pedantry to assert that he had been right all along – ‘the Mafia’ did not exist but La Cosa Nostra very much did and indeed suddenly represented a serious nationwide threat (p293). 

This might sound like mere semantics, but it actually had an element of truth. Thus, as early Mafia turncoat Joseph Valachi explained to a disbelieving Senate subcommittee: 

No one who was involved in what outsiders called the Mafia ever actually used the word” (p292).  

Instead, Mafia insiders in America referred, not to ‘the Mafia’, but to ‘Cosa Nostra’, which has been variously translated into English as either ‘Our Thing’ or ‘This Thing of Ours’.

However, to compound confusion, the FBI then decided to invent a new term of its own coinage – namely, not ‘Cosa Nostra’, but La Cosa Nostra, henceforth abbreviated to LCN in FBI documents (p293). 

Unfortunately, however, this was not only a term never actually used by mafia insiders (nor indeed, as far as I am aware, by anyone else prior to its adoption, or perhaps invention, by the FBI), but also made no grammatical sense whatsoever in the original Italian, translating to roughly ‘The Our Thing’ (Five Families: p136). 

The ironic result, Lacey observes, is that: 

After all the arguments, the FBI dedicated itself to the pursuit of an entity which literally did not exist” (p293). 

The Kefauver Committee and the Myth of the American Mafia 

Why then, in Lacey’s view, have American perceptions of Italian-American organized crime been so skewed and mistaken? 

Lacey places the ultimate blame primarily with the Kefauver committee, a Senate Committee set up to investigate organized crime in America in the 1950s, which, he argues, was responsible for several “fundamental and enduring misconceptions” about American organized crime, in particular the notion that American organized crime was a nationally organized criminal conspiracy (p203).  

Why then did the Kefauver Committee come to reach this strange conclusion, so contrary, at least in Lacey’s telling, to the evidence presented at its own hearings? 

Lacey proposes that the committee was itself institutionally predisposed to such a conclusion: 

As a national, federally constituted body… the committee was predisposed to a singular nationwide explanation” (p203). 

Indeed, not only was the Committee institutionally predisposed to just such a conclusion, it also, Lacey suggests, had a vested interest in depicting American organized crime in this manner. 

The Kefauver committee had no choice but to reach such a conclusion, for if organized crime was not fundamentally a matter of interstate commerce, then what business did an arm of the Senate have lavishing so much time and attention on the subject?” (p203). 

Thus, the Committee’s full title was The United States Senate Special Committee to Investigate Crime in Interstate Commerce, and, if interstate commerce were not involved, then organized crime would properly be the province, not of the Senate and Federal government, but rather of individual state governments and legislatures. 

In other words, if the committee had not decided as it did, it would have undermined the very constitutionality of its own remit.

The Commission: Intergovernmental or Federal?

Yet, as we have seen, even Hoover was belatedly forced to change his tune with regard to the existence of the Mafia (or, at least, of ‘La Cosa Nostra’) after the 1957 Apalachin meeting of gang bosses from across America. Did not this meeting, and other similar nationwide meetings between organized crime bosses from different parts of the country, prove that the Mafia did indeed exist as a nationwide criminal organization?

Lacey thinks not. He acknowledges the abundant evidence that, at meetings such as that at Apalachin:

Gang leaders [from different parts of the country] might meet from time to time for sit-downs at which they would sort out disputes over territory and common threats” (p66). 

However, Lacey is adamant in maintaining: 

While local groupings of mafiosi can generate quite active links between each other, they do not constitute, and have never constituted, a centrally, almost corporately structured organization such as the one the Kefauver Committee led America to believe existed” (p204). 

Thus, to draw an analogy with international relations, the Mafia’s so-called National Commission, though it certainly existed, seems to have been more intergovernmental than federal, let alone unitary or centralized, in its powers and structure.[9]

In other words, it was more analogous to the United Nations or the League of Nations than to, say, the US federal government or even the European Union.

Certainly, it had prestige and, in a world of illegitimate activities, even, within criminal circles, a certain perverse perceived ‘legitimacy’.[10]

However, as Stalin is said to have contemptuously remarked of the Pope, it commanded no divisions (nor any ‘crews’, capos or soldiers) of its own.[11]

‘Boss of Bosses’?

What, then, of the claims that a single figure was ‘boss of bosses’ throughout America?

Various figures, at various times throughout the twentieth century, are said to have attained this position, including, in chronological order, Giuseppe Morello, Joe Masseria, Salvatore Maranzano and Charles ‘Lucky’ Luciano. Yet, if there was no nationwide organization, then how could there ever be a single nationwide boss of bosses?

Thus, some mafia authors have claimed that the very term capo di tutti i capi is a media invention, one that has never actually been used by mafiosi themselves, let alone actually existed as a position in America or Sicily.

Actually, however, the title capo di tutti i capi does not appear to have been entirely a myth. It does indeed appear to have been used by mafia insiders of certain influential figures during the history of twentieth-century organized crime in America.

For example, in his remarkable study of the early history of Italian-American organized crime in America, The First Family, historian Mike Dash reports that the title predates both Masseria and Maranzano and was first bestowed upon Giuseppe Morello at the dawn of the twentieth century. 

However, the meaning accorded to this title may have been rather different from that presumed by many popular historians and true crime writers.

Thus, while the title capo di tutti i capi may indeed have been periodically claimed by, or bestowed upon, certain especially powerful and influential bosses, such a figure was, at best, first among equals vis-à-vis the bosses of other families.

In this light, Lacey describes the differing approaches of, on the one hand, Salvatore Maranzano, and, on the other, Charles Lucky Luciano, when each was said, successively, to have assumed this position.  

Maranzano, Lacey reports, seems to have wanted to “extend his authority beyond the confines of New York City” and become, if not the nationwide head of the American Mafia, then at least “some sort of northeastern ‘boss of bosses’” (p66). 

Thus, like his rival, Joe ‘The Boss’ Masseria before him, Maranzano stood accused of attempting to demand a cut from the profits made by other bosses and crime families operating within New York City in return for his protection

However, Luciano’s intentions were, it seems, more modest. Thus, Lacey quotes Bonanno family boss Joe Bonanno as observing in his self-serving autobiography:  

Luciano… mainly wanted to be left alone to run his enterprises… He was not trying to impose himself on us as had Masseria. Lucky demanded nothing from us” (p66). 

Thus, Lacey concludes: 

The fundamental rule was live and let live – laissez-faire, the unstructured free market principle upon which the country’s legitimate business had long been founded” (p66). 

Luciano was, then, a true American laissez-faire capitalist. 

Mafia Ranks and Hierarchy?

Indeed, according to Lacey, ‘boss of bosses’ is not the only mafia title that has been misinterpreted by authors, senate committees and law enforcement. On the contrary, Lacey argues that the entire hierarchal structure of Cosa Nostra is in fact something of a myth. 

Thus, Lacey argues, just as the Kefauver committee, as a national legislative body, was predisposed to see a nationwide criminal conspiracy, so law enforcement was predisposed to seeing a hierarchical, bureaucratic and semi-military structure analogous to their own.

Thus, Lacey suggests that the hierarchal charts that famously adorn law enforcement walls in movies, television and real-life, and which attribute to Mafiosi such supposed Mafia ranks as ‘soldier’, capoconsigliere and underboss, reflected less real mafia ranks and relationships than they did: 

The bureaucratic and semimilitary cast of thought prevailing in the average police office. Everybody had a rank, and they did little justice to the confused, fluid, and essentially entrepreneurial character of most criminal activity” (p293). 

Thus, describing the criminal organization of Lansky’s ostensible model, and, according to Lacey, “the archetype of what would become known in America as organized crime” (p48), namely Arnold Rothstein, the man who is famed for supposedly fixing the 1919 World Series (even though, according to Lacey, he was not directly involved: p48; p460 n14), Lacey writes: 

The essence of organized crime as perfected by Arnold Rothstein was not structural organization as the conventional world knew it. It was, rather, the absence of structure… This was not the integrated empire of a czar or a JP Morgan. Such comparisons fail to grasp the secrecy and nimbleness necessary to success in organized crime… Each of Rothstein’s deals was separate, flexible, detached. His protegés and partners might operate individually or together. It was a question of what worked” (p50). 

In short, Lacey concludes: 

The secret of his organization was the lack of it” (p50). 

Like his early mentor, Lansky was to operate the same way: 

True to the example of his [mentor] Arnold Rothstein, [Lansky’s] organization lay in the absence of structure… He kept the paperwork in his own head” (p54-5).

Thus, in Lacey’s telling, what the authorities invariably failed to grasp about the nature of organized crime relationships was that they were based ultimately, not in hierarchy, but in partnership.[12]

As a consequence of this misunderstanding, Lacey notes the difficulty that early Mafia turncoat Joe Valachi had in explaining to senators that ‘soldiers’ received no salary from their boss or family, but rather, on the contrary, were expected to pay their boss a cut of what they themselves made (p293).
 
Lacey also notes the difficulty of fitting the non-Italian Lansky into this hierarchical scheme (p292). 

Ostensibly, Lansky, as a non-Italian and hence ineligible for membership, was a mere ‘associate’. However, even Lacey, who argues that Lansky’s power and importance in the American Mafia has been much exaggerated, admits that to describe Lansky as a mere ‘associate’ is not to do him justice.[13]

The Kosher Nostra?

Another mafia myth Lacey purports to debunk is the notion: 

The early thirties saw America’s gangsters became overwhelmingly Italian” (p65). 

In response, Lacey points out: 

This makes no allowance for the flourishing in New York City, throughout this period and beyond, of Dutch Schultz, Lepke Buchalter, Jake ‘Gurrah’ Shapiro, and Benny Siegel… who were responsible for more deaths between them than Lucky Luciano and all the Padrones in the Castellammarese Wars” (p65).

It is certainly true that Italian-American organized crime has been much mythologized in the popular media, especially in the latter part of the twentieth century, to the almost complete exclusion of organized crime involving criminals of other ethnicities. 

Jewish American organized crime, in particular, seems to have been largely ignored in Hollywood films, Sergio Leone’s characteristically masterful Once Upon a Time in America representing a notable exception. This is perhaps a reflection of the fact that so many Hollywood movie moguls were Jewish, and hence had little desire to feed into familiar anti-Semitic stereotypes of Jews as dishonest or as criminals.[14]

Yet, before the 1930s, organized crime in early twentieth century New York seems to have been, if anything, more Jewish-dominated than Italian-dominated, with figures such as Arnold Rothstein and Dutch Schultz representing perhaps the predominant prohibition-era bootleggers in New York City.

Actually, however, I suspect that the popular perception which Lacey purports to debunk – namely that “the early thirties saw America’s gangsters became overwhelmingly Italian” – is not so much false, as about five or ten years premature. 

Thus, of the examples of Jewish criminals cited by Lacey in this passage, Schultz was assassinated in 1935, apparently on the orders of Italian-American organized crime figures who increasingly viewed him as a liability, and Murder Inc, the predominantly Jewish hitmen supposedly responsible for his assassination (and many others), not only took their orders from the Italian Albert Anastasia and the rest of the five families, but were, at any rate, themselves broken up by law enforcement in the early 1940s. 

Siegel, meanwhile, was assassinated in 1947, and, unlike the Italian-American mafia families that survived and flourished over several generations of leadership changes over the course of the twentieth century, the criminal organizations of Schultz and Siegel did not outlive their leaders.[15]

Meanwhile, looking outside of New York, the predominantly Jewish Purple Gang in Detroit imploded through internal warfare in the 1930s. 

Thenceforth, Jews like Lansky operated largely as adjuncts to Italian-American crime syndicates, not independent powers in their own right as Schultz and Rothstein had been.  

Only in Los Angeles, with a relatively small Italian-American population, and where the much-maligned Mickey Mouse Mafia was long perceived as weak, were Jewish racketeers like, first, Bugsy Siegel, and, later, Mickey Cohen, able to give the Italians a run for their money into the mid-twentieth century.

This reflects a process known to sociologists and criminologists as ‘ethnic succession’, whereby, over the course of the twentieth century, successive waves of new immigrants replaced previous waves, not only in the urban ghettos where they resided, but also in the organized crime rackets that they successively inherited and came to control.[16]

Thus, in New York, organized crime was first dominated by the Irish in the nineteenth century; then, around the turn of the century, Jews started to attain dominance. Jews were then displaced by the Italians, who are now themselves largely giving way to blacks and Latinos.

This chronology, of course, represents a gross over-simplification.  

For one thing, the most recent incumbents in this chain of inheritance, namely American blacks, have been resident in America rather longer than many of the Anglos, let alone most of the Italians, Irish Catholics and Jews. At most, they were internal migrants, having arrived in northern cities fleeing the Jim Crow South in a series of so-called Great Migrations over the course of the twentieth century.

Yet, even when Francis Ianni published Black Mafia: Ethnic Succession in Organized Crime in 1974, he was widely ridiculed for his claim that blacks, together with Latinos, were the rising force in organized crime in America.

In short, African-American dominance in organized crime has been long in gestation.[17]

For another thing, there have been people of many other ethnicities, besides Irish, Jews, Italians, blacks and Latinos, who have also been involved in organized crime over the course of the twentieth century.[18]

Finally, there has been considerable overlap in the periods of dominance of the different groups, and, of course, substantial geographic variation too, depending on the ethnic groups present in large numbers in any given area.

For example, the Irish-American Westies remained the dominant organized crime faction in the Hell’s Kitchen neighbourhood of New York until at least the 1980s, and, in other parts of America (e.g. Boston), Irish-American dominance may have lasted even longer.

As for Jews such as Lansky, their own period of dominance seems to have been especially short-lived if only on account of their exceptional levels of upward social mobility.

Thus, by the mid-twentieth century, Jews were already, one suspects, as likely to be lawyers, doctors and legitimate businessmen as organized crime figures. 

By the late-twentieth century, meanwhile, Jewish organized crime was all but extinct, only to belatedly re-emerge in the 1990s with a new wave of Russian Jewish immigrants to the Brighton Beach area.

Las Vegas: A Gambling Oasis in the Desert 

Yet another Mafia myth debunked by Lacey is the notion of Bugsy Siegel as the lone visionary single-handedly responsible for constructing the modern Las Vegas amid the Nevada desert. Actually, according to Lacey, Siegel was almost a latecomer:

In reality, Bugsy followed a trail pioneered by quite a few others. When he arrived in Las Vegas in 1941, there was already one luxurious hotel-casino in the desert… and in December 1942 [it] was joined by an even larger and more luxurious development” (p150). 

Indeed, the Las Vegas Review-Journal reported as early as 1946: 

“‘I’m going to build a hotel’ was the stock comment of wealthy visitors to Las Vegas in the early months of peace” (p151). 

Instead, Siegel’s role was altogether more modest:

Siegel did not invent the luxury resort hotel casino. He did not found the Las Vegas Strip. He did not [even] buy the land or first conceive the project that became the Flamingo. But by his death he made them all famous” (p158).

The conclusion is clear. Although Mafia figures certainly later bought, maneuvered and muscled their way in, the Las Vegas we know today, whether we love or hate it, would have come into being even without the involvement of the American Mafia, though its history may have been less colourful and bloody in the process. 

As for Lansky, Siegel’s friend and sometime partner, his own involvement in Las Vegas was, according to Lacey, even more modest. Thus, after the end of prohibition: 

Las Vegas offered the second great chance in [Lansky’s] life to go legit, but he made no special effort to take it” (p152). 

Thus, Lansky’s own investment in Vegas casinos was minimal, and he remained largely a silent partner, allowing Siegel and others to take the leading role.

Cuban Casinos and the Coming of Castro and the Communists 

Instead of investing heavily in Vegas hotel-casinos, Lansky chose to back a different horse – Cuba, constructing the massive, luxurious Havana Riviera hotel-casino in Havana, apparently in imitation of similar resorts in Vegas. 

At the time, and without the benefit of hindsight, Lansky’s investment actually made a great deal of sense. 

The then President-turned-military-dictator of Cuba, Fulgencio Batista, was indeed a visionary leader, being among the earliest Third World leaders to recognise the wealth and inward investment that a growing international tourist industry could bring to a country like Cuba, known for the beauty of its beaches, its women and its climate.

Unfortunately, however, Cuba’s visionary leader was overthrown by puritanical communists opposed to gambling, as well as to prostitution, sex tourism and other such fun and healthy recreational activities.

Castro and communist Cuba were, of course, to become long-running headaches for the American government. Yet what is often forgotten is that the Cuban revolution was initially favourably received among most Americans.

Thus, on a visit to America soon after coming to power: 

In New York… Fidel Castro arrived in April 1959 to a hero’s welcome… The young guerrilla leader, charismatic in his beard and fatigues, was hailed as a liberator in the finest Latin American tradition – another Bolívar” (p253).  

Such naivety about incoming totalitarian despots is a recurrent feature of American politics. Previous generations of American journalists, such as John Reed and Walter Duranty, had hailed the Bolshevik revolution as a positive development, and Lenin and Stalin as benign and progressive statesmen. Later generations of American journalists were to fall into the same trap again, when they hailed Mugabe as a progressive and democratic liberator and freedom-fighter, and the so-called Arab Spring as motivated by support for western-style liberal democracy rather than for theocratic Islamic fundamentalism.

To misquote a famous (mis-)quotation from the philosopher Georg Hegel, we might observe:

‘The one thing we learn from history is that American left-liberals learn nothing from history.’

In respect of Cuba, among the first to see the writing on the wall was Lansky himself, perhaps because, unlike most Americans, he was present on the ground in Cuba attempting in vain to protect his investment and business interests and hence could hardly afford to be as naïve and deluded as his fellow countrymen regarding the true nature of the new regime. 

Thus, it was that Lansky reluctantly took it upon himself to explain the truth about the new Cuban regime to the US government. The representatives of the US government to whom Lansky delivered his carefully prepared presentation were two FBI agents whom his lawyer had arranged for him to meet in the latter’s office. 

The agents were impressed with Lansky’s presentation. However, predictably, the government took no notice. Although the copious notes made by the FBI agents present were added to Lansky’s FBI file, there is, Lacey reports, no evidence they were ever passed to the State Department or indeed anyone involved in formulating US foreign policy. 

Lansky’s nefarious reputation simply overshadowed the substantive content of his presentation, “such that anyone who accepted what he said at face value risked being labelled tainted or naive” (p256) – and it was one thing to be naïve about Castro and the Cuban communists, quite another to be naïve regarding the infamous Jewish-American crime figure Meyer Lansky.

Lansky was, however, ultimately proven right: 

Subsequent events in Cuba suggested that the FBI might have paid more attention to what Meyer Lansky said… The records of the FBI’s meeting… show, with rare clarity, that Meyer Lansky predicted almost exactly what was going to happen in Cuba the best part of a year before it did” (p256).

Of course, the main victims of Castro and Cuban communism were the Cubans themselves, condemned to a half-century or more of poverty and repression by the misguided communism of the ruling regime (and, of course, by US sanctions, though these were themselves a consequence of Cuba’s communism and alignment with the Soviet bloc). Another, lesser victim was, however, Lansky himself.

Thus, Lansky, the consummate gambler, had, in the greatest investment of his life, backed a losing horse. 

Meyer Lansky had staked his personal bankroll solidly on the success of the Riviera – to the exclusion of almost everything else. His spectacular casino-hotel was to be the culmination – and ultimate vindication – of his career… Meyer Lansky had invested much more than his money in the Havana Riviera. He invested himself. He gambled everything – and, as he later put it, ‘I crapped out’” (p257-8). 

Financial Genius? 

Lansky has sometimes been described as ‘the accountant for the Mob’. In reality, ‘The Mob’, as a whole, not being a single homogenous entity, had no single accountant, and, if it did, they would probably have picked someone who was, well… an accountant.

Thus, Lacey observes: 

The fantasies that depicted Meyer Lansky as the ‘Accountant of the Mob’ misrepresented organized crime as a corporate entity, and they also failed to take note of how much money the accountant in any deal tends to finish up with in real life… The owner or chief executive of a corporation may become a millionaire. The chief financial officer remains on a salary” (p405).

Another familiar claim is that Lansky was the financial genius behind the Mafia.

However, while it is sometimes claimed that Lansky himself was responsible for inventing the process that became known as money laundering, Lacey shows that there is no support for this claim (p304-5).

On the contrary, in laundering his own money, Lansky had his own financial “guru”, one Paul Pullman, who advised him on his financial affairs and investments (p306).

The latter, Lacey reports, then fatefully introduced Lansky to his boss, Tibor Rosenbaum, who, investing in a property development in Italy but bribing the wrong set of corrupt politicians (who promptly lost office), managed to lose the entirety of Lansky’s investment. Lacey concludes:

This episode scarcely suggested that Meyer Lansky could be considered an infallible guide when it came to the dangers and complexities of international high finance” (p309). 

Making money in illegitimate ventures is always easier than making money legitimately, if only because the risk of arrest deters much of the competition, and the threat of violence deters most of the remainder. 

Lacey is therefore skeptical of the oft-repeated claim that, in the words of an unnamed FBI agent quoted in Lansky’s New York Times obituary:

He [Lansky] would have been chairman of the board of General Motors if he’d gone into legitimate business” (p423). 

Indeed, according to Lacey, Lansky himself “ruefully remarked… more than once” that he had an “unerring ability… to lose money whenever he went legit” (p296). 

In his better moments Meyer managed to laugh at his atrocious sense of timing as a businessman… the millions lost in Cuba, his inability to take legal advantage of Las Vegas, the Bahamas, Atlantic City, or anywhere else that his own game of casino gambling became legal in his later years” (p430). 

Therefore, reviewing the failure of Lansky and his partners’ attempt to make money from a legitimate TV rental business, Lacey concludes: 

The television adventures of Meyer Lansky and his fellow czars of the underworld showed what sort of businessmen they were when the playing field is level” (p172).

This is perhaps unfair. Even the careers of many successful entrepreneurs involve as many failures as successes, especially when they stray outside their main area of business. Successful entrepreneurs tend to be risk-takers, and risks, by their very nature, only sometimes pay off. Their success often seems as much a consequence of perseverance in the face of failure (and of luck) as of pure business acumen.

Thus, Lansky does seem to have been successful in Cuba, and, to a lesser extent, in Vegas, where casino gambling had been legalized.

However, in Vegas, Meyer had Mafia might behind him, and, in Cuba, Batista’s regime may have provided the muscle necessary to secure Lansky’s monopoly even more effectively than did the Mafia.

Family 

Some reviewers of Lacey’s book on Amazon and Goodreads have accused Lacey of producing a whitewash: a biography absolving Lansky of almost all the nefarious, criminal activities of which he has been accused, and altogether too favourable to its subject. 

In fact, however, this is only half the story. Although Lacey does indeed suggest that Lansky was not nearly as dangerous, powerful and malign as he has been made out to be in other popular accounts, he also reduces Lansky to a rather marginal, insignificant figure in the history of American organized crime. 

If, in Lacey’s account, Lansky loses much of his power, glamour and mystique, he acquires in its place perhaps a certain sympathy. 

If Lansky’s business and criminal career seem to have been hardly the unmitigated success story made out by the popular press and true crime authors, his family life, in comparison, seems to have been virtually an unmitigated disaster. 

There were, Lacey reports, no grand romantic affairs. Any extra-marital affairs were conducted by Lansky with the same secrecy and discretion with which he shrouded his business dealings (p129). 

Lansky’s first wife succumbed to mental illness. Lacey, perhaps unfairly, blames this on Lansky himself, arguing that it was Lansky’s obsessive secrecy regarding his business affairs (necessitated, no doubt, by their criminal nature) that led to his wife’s breakdown. 

Lansky’s first son, Buddy, who seems to have been a primary source for Lacey’s biography, was born with a crippling physical disability and, as a result, never managed to live independently, being supported by his father throughout the latter’s life, and later by charity and the state, before dying in poverty. 

Given his disability, Buddy’s inability to live an independent life was perhaps excusable. However, no such excuse was available to Lansky’s daughter who, after a short, unsuccessful marriage to a closeted homosexual, and after bearing an illegitimate child of unknown paternity who was so handicapped that he ultimately had to be institutionalized, became something of a socialite, again on her father’s dime (p268). 

However, showing little gratitude to the father who funded her extravagant lifestyle, she also became an FBI informant against him, albeit providing little of real evidential value if only because of the secrecy with which Lansky hid his business affairs from his family (p269).[19]

Her ultimate betrayal, however, came only after her father’s death when, after her father’s underworld associates had got together to provide her and her disabled brother with a lump sum of $300,000 as a legacy to help them get by, she promptly embezzled the share of her by-now severely debilitated brother. 

Only Lansky’s second son, Paul, was something of a success and source of pride to his father, graduating from West Point and having a successful career in the military and then in civilian life. 

He disdained the lifestyle of both his father and his brother, being law-abiding and proudly independent, and refusing any gifts from his father, yet defiantly insisted on naming his own son Meyer Lansky II. 

He seems, therefore, to have inherited something of his father’s obsessive scrupulousness. 

Unfortunately, this personality trait may also have been implicated in his marital breakdown, when it was discovered that, at the same time as the FBI was spying on and recording the phone calls of Lansky, and possibly of Paul himself, Paul was surreptitiously using expensive surveillance equipment to spy on and record his own family (p353-4). 

When the Honeymoon Was Over 

Lansky’s second marriage seems to have been more successful. However, his children from his first marriage naturally resented their father’s new wife, regarding her as “crude… loud and flashy”, but also stingy and cheap – in short, though Lacey never actually says this, stereotypically Jewish (p277). 

Her adult son, now Lansky’s stepson, caused both parents no little headache. Relishing and trading on his new status as the ‘son’ of Meyer Lansky, he was ultimately murdered, in apparent retaliation for himself killing the (actual, biological) son of a local Miami underworld figure in a barroom brawl (p394-5).

Meyer’s second marriage also led to what was, at least in Lacey’s telling, perhaps the greatest mistake of Lansky’s long criminal career. This was his decision to take his new bride on an ostentatious honeymoon, which was reported on by a reporter from the New York Sun.

As with the fictionalized Frank Lucas in the movie American Gangster, who, at least in the film version, attracted law enforcement attention by attending a Muhammad Ali fight dressed in an expensive fur coat and occupying front-row seats, Lansky had made the fatal mistake of engaging in conspicuous consumption to impress his new wife. 

For whatever reason… Meyer had broken the cardinal rule that he had laid down to Vinnie Mercurio: ‘You must not advertise your wealth’” (p176). 

Previously, Lansky had had little problem complying with this advice. After all, Lacey reports: 

Meyer had genuinely sober tastes… [and] indulged none of the extravagance which characterized many… ‘hoodlums’” (p285-7). 

Unfortunately, however, this one extravagance was the beginning of the end for Lansky’s anonymity. Until that honeymoon, Lacey reports: 

Lansky’s name had only been mentioned, almost in passing, in occasional articles listing New York racketeers and gangsters… usually as an associate, and, by implication, something of a sidekick to underworld stars like Luciano and Bugsy Siegel. But with his appearance on the front page of the New York Sun and his first ever newspaper photograph, Lansky was starting on the path to becoming an underworld star in his own right” (p176). 

The price of fame, however, was a heavy one. While Lacey also suggests that Lansky sometimes rather relished his media infamy and reputation as a major mafia mogul, the negative consequences of his reputation surely, in the long term, far outweighed any superficial boost to his ego. 

The result was endless years of law enforcement harassment and failed prosecutions, even as Lansky entered his dotage, culminating in even the Israeli government, despite its infamously broad, and overtly racially (and religiously) discriminatory, Law of Return, rejecting his application for citizenship. 

Mafia Millions? 

Hank Messick, who launched a literary career out of mythologizing Lansky, described the diminutive Lansky as: 

Boss of the Eastern Syndicate and probably the biggest man in organized crime today” (quoted: p311).

In the course of the same article, he claimed: 

Lansky’s wealth is reliably estimated at $300 million” (quoted: p311). 
 

However, after the millions lost in Cuba, Lacey himself estimates Lansky’s wealth rather more modestly: 

Meyer would have had a hard job listing realizable assets and cash resources that stretched as far as $3 million” (p312). 

Indeed, even Messick himself later backed away from the figure he had earlier cited, insisting, in an interview conducted with Lacey: 

It was not my figure. It came from an expert who was supposed to know what he was talking about” (p311).[20]

For his part, Lansky himself affected to envy Messick the money the latter made out of writing about him, on the one occasion they actually met remarking, “You ought to pay me half the money that you’ve made writing about me” (p315).

As for the claim, “We’re bigger than US Steel” – a quotation so famous that it made it into the script of The Godfather Part II – Lacey traces the origin of this quote to an FBI bug.

Lansky was, it seems, watching “a documentary… on organized crime, followed by a discussion among a studio panel of experts” (p284). 

Meyer sat in silence… until one of the panellists ‘referred to organized crime as being second in size only to the government itself’. Lansky remarked to his wife that organized crime was bigger than US Steel” (p284). 

The transcript was all that remained, the tapes having been recorded over, and this transcript “shows that the agent chose to paraphrase” (p284).

Yet the context of the remark seems to suggest it was made sarcastically and in disbelief, and certainly concerned organized crime as a whole rather than Lansky’s, or even the Mafia’s, own operations alone. However, Lacey reports: 

By the time that Lansky’s comment was made public five years later… it had also been subtly altered: ‘We’re bigger than US Steel’” (p294). 

Antisemitism? 

For his part, Lansky himself tended to blame the law enforcement harassment and media attention that he received on anti-Semitism.

Indeed, anti-Semitism seems to have been something to which Lansky was hypersensitive, perhaps even paranoid, having something of a persecution complex.[21]

Thus, Lacey even interprets Lansky as viewing the Israeli Supreme Court’s refusal to allow his appeal against the decision not to grant him citizenship as itself evidence of anti-Semitism, Lansky being quoted in a newspaper in Israel as lamenting ruefully after his courtroom defeat that “a Jew has a slim chance in the world” (p351). 

In this court case, even Lansky’s relative lack of serious criminal convictions was perversely turned against him, being cited as evidence of his power and hence untouchability, the state attorney arguing that: 

The slight and comparatively trivial nature of Meyer Lansky’s criminal record… was no indication of his innocence, argued the state attorney. On the contrary, it confirmed his guilt, since it was in the nature of US organized crime that those who were masterminds of criminal activity should insulate themselves from its practical execution. This meant that they were seldom caught, and, when brought to justice, tended to escape conviction. It followed, therefore, that those who were the most culpable usually had the fewest convictions – so the very lack of solid evidence against Meyer Lansky must, in fact, be considered the strongest possible evidence against him” (p343-4).[22]

Certainly, the popular image of Lansky, as a shadowy and sinister criminal mastermind, dominating organized crime from behind the scenes, is indeed disturbingly redolent of familiar antisemitic canards: 

Often hinted at, if seldom explicitly stated, Meyer Lansky’s Jewishness was an important part of his mystique” (p313). 

Interestingly, Lacey even posits Lansky as the ultimate prototype for the archetypical Bond villain.

Unprepossessing little men, for the most part, they terrorized with the power of their minds… and, to judge from their names, could never be mistaken for WASPs – Blofeld, Stromberg, Dr Julius No, Drax” (p313).[23]

Criminal Mastermind? 

If he was not then as rich and powerful as the popular imagination suggested, was Lansky indeed then the evil genius and criminal mastermind that he was so often credited as being? 

Certainly, Lansky seems to have been good with figures. His cellmate during his only substantial spell of incarceration recalled how, presumably to relieve the boredom, Lansky would demonstrate his remarkable speed and accuracy at arithmetic (p209). 

Lansky also, Lacey reports, had a remarkable memory, which enabled him to avoid writing anything down where it could later be used as evidence against him. 

In a world without filing cabinets, Meyer Lansky’s genius [was] the ability to act as a human cash register and ledger book in the succession of shifting partnerships and deals” (p53). 

However, having a good head for figures and a good memory is hardly evidence of genius. Many people employed as bookmakers, for example, develop quick computation skills, and likewise rote memory is not an especially g-loaded cognitive ability. 

Certainly, his criminal associates tended to be overawed by Lansky’s alleged intellect. 

However, someone else who got to know Lansky well concluded that, though “reasonably sharp and quick-witted” (p327), Lansky “was not intelligent” (p339).

This was the opinion, perhaps tellingly, not of a criminal, but of a lawyer.

Perhaps then Lansky was regarded as an intellectual heavyweight only by dint of comparison with the company he kept. 

Thus, Lacey, a Cambridge-educated historian, notes with subtle but unmistakable intellectual snobbery the amazement of Lansky’s fellow criminals, Bugsy Siegel and Joe Adonis: 

Can you believe it? He’s even a member of the Book-of-the-Month Club” (p4). 

Daniel Seligman in his popular science book, A Question of Intelligence, notes that John Gotti, later to become a particularly infamous boss of the powerful New York-based Gambino crime family, when given an IQ test while still at school, had “tested at 110”. 

Since IQs are normed by reference to an average score of 100, with a standard deviation of about 15 points, a score of 110 is above average, but well within the normal range. Therefore, Seligman concludes: 

“[Since] criminals tend to have IQs clustered around 90, in a sense, then, you can think of Gotti’s rise to mob stardom as basically concordant with the general rule that smart people get to the top” (A Question of Intelligence: p35).[24]
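
To make these figures concrete, here is a quick back-of-the-envelope conversion to percentiles, assuming the standard normal model of IQ (mean 100, standard deviation 15); the percentile arithmetic is mine, not Seligman’s:

```latex
% Converting IQ scores to approximate population percentiles,
% assuming IQ ~ Normal(100, 15):
\[
  z_{\text{Gotti}} = \frac{110 - 100}{15} \approx 0.67,
  \qquad \Phi(0.67) \approx 0.75 \quad \text{(roughly the 75th percentile)}
\]
\[
  z_{\text{typical offender}} = \frac{90 - 100}{15} \approx -0.67,
  \qquad \Phi(-0.67) \approx 0.25 \quad \text{(roughly the 25th percentile)}
\]
```

In other words, Gotti would have been unremarkable in the general population, but stood roughly 1.3 standard deviations above the criminal average of about 90 that Seligman cites.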

In general, criminals tend to have low IQs because ultimately crime, even serious organized crime, is not an especially smart career choice in the long-term, especially for someone with sufficient smarts to be successful in an alternative career where s/he does not run the risk of imprisonment.[25]

Thus, ultimately, Lansky’s refusal to ‘go straight’, either at the end of prohibition or in Vegas with its legalized gambling, turned out to be his greatest mistake – since it was, ironically, those organized crime figures who did ‘go legit’ who ultimately amassed the sort of wealth and power which Lansky himself possessed only in the imaginings of those ‘pulp nonfiction’ writers whom Lacey so disparages. Thus, Lacey reports the irony whereby:

In reality, Dutch Schultz, Benny Siegel, Joe Adonis, Frank Costello, and Lucky Luciano all died without much money to their names. The millionaires of their generation were Moe Dalitz, Morris Kleinman, and the other moguls of Las Vegas – the truly clever ones who went straight” (p405).  

Endnotes

[1] Carlos Marcello, the boss of the New Orleans crime family, most famous for his supposed role in the assassination of John F Kennedy, was, to my knowledge, the only other major American organized crime figure widely known by the sobriquet ‘The Little Man’.

[2] Indeed, such is the quality and accuracy of some research and writing in this genre that, as with the word ‘true’ in the genre ‘true crime’, one might argue that the phrase ‘pulp nonfiction’ is inaccurate in so far as it implies that the content of such work is indeed anything other than fictional. Books outside the ‘true crime’ genre that might also qualify as ‘pulp nonfiction’ include conspiracy theory books and many celebrity biographies.

[3] While the role of Luciano and the American Mafia in the invasion of Sicily may be a myth, it does seem to be true that many figures associated with the Sicilian Mafia took advantage of the American invasion by offering themselves up as translators and aides to the American invaders. Also, some Sicilian Mafiosi, imprisoned for their Mafia associations and activities during the fascist regime’s campaign to crush the Mafia, seemingly succeeded in passing themselves off as anti-fascists, imprisoned instead for anti-fascist activities. In so doing, they managed to secure influential appointments as mayors in some Sicilian towns and villages. This alliance between Mafia and America was later institutionalized when the Southern Italian mafias found themselves elevated in American eyes to the lesser of two evils in an unholy alliance against the perceived communist threat in Italy during the Cold War. Here, again, however, exaggerated conspiracy theories abound, especially regarding Operation Gladio and the supposed culpability of the CIA for terrorist attacks during Italy’s Years of Lead.

[4] Some true crime writers have even proposed that the Mafia themselves deliberately set the vessel afire in order to panic the authorities into approaching the imprisoned Luciano, thereby enabling him to negotiate his release in return for his cooperation. This, for example, is the view espoused in The Mafia Encyclopedia (Third Edition) by author Carl Sifakis (The Mafia Encyclopedia: p333-5). However, the idea that Mafia figures would have anticipated that a fire onboard the vessel would somehow lead to the government making contact and conducting negotiations with the organized crime figures who controlled the docks, let alone with Luciano himself – something that would have appeared beforehand to be a very improbable scenario – is obviously preposterous. As with the more outlandish tales of Luciano’s involvement in the US invasion of Sicily, this idea is therefore best dismissed as a conspiracy theory.

[5] In Lacey’s defence, it seems that he only became aware of just how marginal Lansky was to organized crime in America for most of the twentieth century as he researched his biography. Before undertaking his research, he had apparently believed the hype.

[6] The term ‘carpet joint’ refers to a relatively upmarket casino, though one without the glamour of the later Vegas casinos, or of Lansky’s own Cuban establishments, and is used to distinguish such an establishment from a less pretentious, downmarket ‘sawdust joint’.

[7] Various conspiracy theories have been formulated to explain Hoover’s refusal to investigate the Mafia, usually involving either gambling debts owed to Mafia bookies or the Mafia supposedly having dirt regarding Hoover’s alleged homosexuality (see Potter 2006 Queer Hoover: Sex, Lies, and Political History Journal of the History of Sexuality 15(3): 355-381). Indeed, some versions even have Lansky himself possessing incriminating photographs of Hoover engaged in homosexual activities, Lansky being quoted as bragging, ‘I fixed that sonofabitch’.
More prosaically, and perhaps more realistically, it is suggested that Hoover simply did not want his FBI agents to become corrupted by mafia bribes, as he rightly feared would result from their investigating non-political profit-oriented organized crime. Thus, Selwyn Raab writes: 

Hoover’s reluctance to seriously challenge the Mafia stemmed from three main factors, according to former FBI agents and criminal-justice researchers. First was his distaste for long, frustrating investigations that more often than not would end with limited success. Second was his concern that mobsters had the money to corrupt agents and undermine the bureau’s impeccable reputation. And third, Hoover was aware that the Mob’s growing financial and political strength could buy off susceptible congressmen and senators who might trim his budget” (Five Families: p89).  

At any rate, since, at least according to Lacey, the American Mafia, far from being a nationwide conspiracy, is predominantly organized at the local level, one might question whether organized crime is indeed within the remit of a federal law enforcement authority, though no doubt some Mafia crimes did indeed cross state boundaries. 

[8] However, even these similarities may be exaggerated. For example, bemoaning law enforcement’s overreliance on (and overgeneralization from) what they were told by a few relatively low-level informants like Joe Valachi, Lacey observes that:

Valachi was only a comparatively minor figure in one subgroup of New York Italian criminals. The strength of his testimony was that he had had firsthand experience of life in this group. His weakness was that he knew little, at first hand, about crime elsewhere – in Chicago, for example, Capone’s successors talked neither of ‘Cosa Nostra’ nor of ‘Mafia’, but of ‘The Outfit’” (p292). 

Indeed, the Chicago Outfit, founded by predominantly non-Sicilian Italian-Americans like Big Jim Colosimo, Johnny Torrio and Capone, was initially a very different beast to the five families of the New York metropolitan area. Initially, at least, it was said to have lacked the initiation rituals of the New York families altogether. Capone, in particular, was said to be mistrustful of his nominal allies, the Sicilian Genna brothers, who may indeed have practised such rituals and certainly represented the closest thing prohibition-era Chicago had to a New York-style Mafia family.

[9] Indeed, the so-called National Commission not only existed, but may have been rather older than it is usually credited as being. Its origins are usually traced, in most histories of the American Mafia, to the end of the Castellammarese War in 1931, under either Maranzano or Luciano, though the idea for a National Commission is sometimes attributed to Johnny Torrio or sometimes even Lansky himself. However, in his remarkable book The First Family: Terror, Extortion, Revenge, Murder, and the Birth of the American Mafia, historian Mike Dash adduces evidence that the Commission, under the name ‘the Council’, actually existed almost twenty years earlier, being established, he concludes, “some time before 1909”.

[10] The use of the term ‘legitimacy’ in this context may seem exaggerated or even absurd, but this is exactly how mafiosi themselves saw it. Thus, in FBI agent Joe Pistone’s account of his work as an undercover agent posing as an associate in the Bonanno crime family, Donnie Brasco: My Undercover Life in the Mafia, when fellow undercover agent Edgar Robb (alias Tony Rossi) enquires as to precisely what the advantage is of being ‘straightened out’ to become a ‘made guy’ or ‘wiseguy’ (i.e. a member of the mafia), their sponsor Benjamin ‘Lefty’ Ruggiero explodes:

Donnie, don’t you tell this guy nothing? Tony, as a wiseguy, you can lie, you can cheat, you can steal, you can kill people—legitimately. You can do any goddamn thing you want and nobody can say anything about it. Who wouldn’t want to be a wiseguy?” (Donnie Brasco: p360)

[11] Thus, whatever its pretensions and theoretical powers, the National Commission, rather like the United Nations or earlier League of Nations, possesses no monopoly on the use of force, quite the contrary, and is therefore obliged to rely for enforcement of its edicts on the cooperation of its constituent members (i.e. individual crime families).
Another way of putting this is to say that crime families, like nation states, exist, vis-à-vis one another, in a Hobbesian ‘state of nature’, without a central authority, sovereign or ‘leviathan’ exercising a monopoly on the use of force.
Of course, in respect of crime families, this analysis is complicated by the fact that the American government does exist and, at least in theory, does claim a monopoly on the use of force. However, it is precisely because this ostensible monopoly is, in practice, far from absolute that organized crime syndicates and other violent criminal enterprises are able to survive and flourish.

[12] Indeed, even protection rackets offer a form of partnership, the extortioner offering protection, not just from himself but also from other extortionists, in exchange for a fee or cut of the profits. However, perhaps the better analogy here would be with the concept of tribute and fealty which governed the relationships between different ranks of rulers under feudalism. On this view, the so-called mafia operates as a sort of ‘shadow government’, which provides services, especially the maintenance of order, in return for taxes (or protection money). 

[13] Of course, there is today ample evidence, in the form of both turncoat testimony and wiretap recordings, of mafiosi themselves using such terms as ‘soldier’, ‘capo’, ‘consigliere’ and ‘underboss’ to refer to one another. Perhaps, however, this was a later development, or even a case of life imitating art, as mafiosi, often themselves avid viewers of mafia films, adopted the terminology used first by the police and then later in movies. At any rate, it is clear that terms such as soldier and boss had very different meanings for mafiosi than for senators.

[14] Thus, even the earliest Hollywood gangster movies, the Warner Brothers gangster cycle of the 1930s, tended to omit Jews. The two biggest stars of these movies were perhaps Edward G Robinson and James Cagney. The former made a career for himself being typecast as an Italian gangster, while his rival James Cagney’s characters, although their ethnicity was less explicit, are usually interpreted as having been Irish-American. Neither was exactly what he pretended to be. Although Cagney was indeed of predominantly Irish ancestry, he was no archetypal tough guy, but rather a talented dancer and former female impersonator, and also spoke fluent Yiddish. Robinson, on the other hand, was actually of Jewish ancestry, the ‘G’ initial in his name supposedly standing for his original surname of ‘Goldenberg’. Robinson therefore disguised his Jewish origins by anglicizing his name, only to pursue an acting career in which he mostly portrayed Italians.

[15] Interestingly, even before his assassination, Schultz had taken the step of converting to Catholicism, something interpreted by many biographers and mafia historians as an attempt to ingratiate himself with Italian-American mafiosi, especially Luciano, who were already coming to dominate organized crime in the city. This decision may then have reflected a recognition on Schultz’s part of the changing demographics and ethnic power balance in New York organized crime. 

[16] Not all immigrant groups to the USA have been associated with organized crime. For example, there is, to my knowledge, little history of German-American involvement in organized crime in America.
Instead, the three dominant groups in organized crime in New York during the early twentieth century – Irish, Sicilians and Jews – all have a history of living under foreign rule, and hence a tradition of not trusting, or relying on, government or law enforcement to resolve their problems. The Irish, in particular, have their own history of secret societies (the Defenders, the Whiteboys, the Ribbonmen) to rival the Sicilian Mafia.
Jews, on the other hand, arguably even have their own version of the Sicilian code of omertà, namely mesirah, a Talmudic law whereby Jews were forbidden to inform against, or turn over, fellow Jews to Gentile authorities. Of course, American-Jewish organized crime figures were, as a rule, not religious. However, this rule against informing against fellow Jews nevertheless illustrates the general milieu in which Jewish crime figures were raised and grew up, with little trust in the government, law enforcement or the authorities, who were perceived as inherently anti-Semitic and corrupt, as indeed they often were.

[17] Others have argued that African-American organized crime has existed since the early-twentieth century, but, for whatever reason, has attracted less attention and publicity (e.g. Lombardo 2002 The Black Mafia: African-American organized crime in Chicago 1890–1960 Crime, Law and Social Change 38: 33–65; see also Gangsters of Harlem). This seems to be true. However, until the last few decades of the twentieth century, African-American organized crime groups and figures seem to have been, in general, relatively less powerful, wealthy and politically-connected than equivalents of other ethnicities. 

[18] For example, the so-called Dixie Mafia in the South was composed of white Southerners. Meanwhile, even in, for example, Chicago, where ethnic succession theory seems to be broadly applicable, Murray Humphreys of the Chicago Outfit was of Welsh descent, and George ‘Bugs’ Moran of the rival North Side Gang was apparently of French-Canadian ancestry.
It might be noted here that surnames are not an accurate indicator of the ethnicity of American crime figures, many organized crime figures not so much ‘anglicizing’ their names as, if you like, Irish-izing them. Thus, just as George ‘Bugs’ Moran had been born Adelard Leo Cunin but adopted the Irish-sounding surname Moran, so Paul Kelly, leader of the infamous early-twentieth-century Five Points Gang, though possessing the quintessentially Irish surname of Kelly, was actually born Paolo Antonio Vaccarelli; while Jack McGurn, the ostensible choreographer of the St Valentine’s Day Massacre, had been born Vincenzo Antonio Gibaldi – both Kelly and McGurn supposedly first adopting Irish names to further their boxing careers. Frank Costello, born Francesco Castiglia, confused this pattern somewhat by choosing an Irish surname that nevertheless actually sounds more Italian than Irish.

[19] According to her FBI handlers, she blamed her father for having her mother committed in order to marry his second wife, something that was untrue, but which, Lacey suggests, given her age at the time, she might have been forgiven for believing (p269-70).

[20] Lacey responds rather incredulously, “It is difficult to imagine who this expert could have been” (p311), and concludes “It is impossible to square the figure with anything that is known or can reasonably be imagined about the finances of Meyer Lansky” (p312). 

[21] For example, in a private conversation with the eponymous Estes Kefauver (of Kefauver Committee fame), Lansky challenged Kefauver as to why he and his committee were so concerned about the victimless crime of gambling, even though Kefauver himself was known to gamble. When Kefauver responded that he had no problem with gambling as such, but only with “you people” controlling it, Lansky chose to interpret the phrase “you people” as a racial remark, and retorted, “I will not allow you to persecute me because I am a Jew” – even though, at least according to Lacey, by “you people” Kefauver almost certainly meant, not Jews (nor Italians), but rather criminals. Of course, the reason criminals control so much of the gambling in the USA is precisely because so many forms of gambling are illegal in puritanical America.

[22] One Israeli law student who became a friend and champion of Lansky, Yuram Sheftel (who seems to be the same Yuram Sheftel who later represented both the convicted murderer of an Israeli Prime Minister and a Ukrainian-American falsely accused of being a succession of different Nazi war criminals), had advocated a different, more innovative legal argument on Lansky’s behalf. Sheftel, who organized a petition on Lansky’s behalf, readily conceded that Lansky might have been a powerful American gangster, but maintained: 

Jewish gangsters like Lansky, Bugsy Siegel, Waxey Gordon, Doc Stacher – even Lepke Buchalter, a convicted murderer – might have broken the law. But that law, in Sheftel’s eyes, was the law of ‘white Christian countries… based on Christianity, which is the most anti-Semitic phenomenon in history’. ‘I don’t see anything wrong,’ says Sheftel, ‘with a Jewish person breaking the law of countries which were persecuting, murdering, torturing, and discriminating against Jews for the past two thousand years” (p335). 

This argument is odd given both that Jews have thrived and prospered in the USA and other Christian countries and that America has, over the past half century, given more aid to Israel than to any other country in the world.

[23] Indeed, in the hands of professional anti-Semite David Duke, this unspoken anti-Semitic subtext becomes explicit, Duke claiming in his book My Awakening that, although Italian-American gangsters took most of the heat, it was Jews who were really to blame for the American Mafia, and Lansky himself who was the worst of the bunch: 

The top law enforcement sources and investigative reporters agreed that Lansky was the master gangster in America. He had been the most powerful person in the American crime syndicates for four decades, yet most Americans – who certainly know the names Al Capone and John Dillinger – have never heard of Meyer Lansky. The most notorious gangster was not Italian; he was in fact Jewish and an ardent supporter of Zionism” (My Awakening: p250). 

Actually, however, far from Jewish gangsters being the real powers behind the scenes, with Italian-American criminals merely representing the window dressing, the truth seems to have been, for a long time, almost the exact opposite. Thus, of the thinking at the mid-twentieth-century Central Intelligence Bureau, Selwyn Raab reports: 

The consensus among the department’s brass was that Jewish bookmakers were raking in the big bucks as organized crime’s most productive money makers. [NYPD detective Remo] Franceschini got nowhere trying to convince officials that major bookies were not independent and could only operate with the acquiescence of one of the five families” (Five Families: p158). 

[24] In fact, however, John Gotti, despite his notoriety (or indeed because of it), was a rather inept and unsuccessful crime boss. After all, genuinely smart criminals rarely court publicity as did the infamous ‘Dapper Don’. This only invites law enforcement attention, as John Gotti, like Capone before him, was subsequently to discover. Instead, smart criminals try to keep as low a profile as possible.
An interesting counterpoint to Gotti is his contemporary Vincent ‘The Chin’ Gigante, who reigned as boss of the Genovese family around the same time as Gotti was boss of the Gambinos. Yet, while Gotti invited media attention, Gigante shunned the spotlight, faking mental illness so successfully that he was, at first, genuinely believed by most law enforcement to be largely retired and inactive rather than boss of the most powerful crime family in America.
Interestingly, however, Gigante himself was no intellectual heavyweight. According to his biographer, Larry McShane, Gigante had a “recorded IQ just north of 100 – a slightly above average score” (Chin: The Life and Crimes of Mafia Boss Vincent Gigante: p6).
Here, it is worth noting that offenders, upon conviction and admission to prison – if not before then, so that a psychological report can be presented in court before sentencing or even before trial – are often given a battery of psychological tests, including tests of cognitive ability. IQs for convicted criminals are therefore often rather more credible than those cited in respect of celebrities or other public figures.
However, criminals may sometimes deliberately get questions wrong on an IQ test in order to qualify for mitigation of sentence, especially in order to evade the death penalty. Assuming Gigante’s IQ was tested in these circumstances, it is possible that he faked a low score in order to lend credence to his courtroom defence, whereby his lawyers insisted that he was suffering from dementia. However, if he did, he obviously did not fake it very well, given that his score was, according to McShane, slightly above average.

[25] The claim, made by Seligman among others, that criminals tend to have low levels of intelligence is based largely on the testing of convicted offenders in prisons. An obvious rejoinder is that it is disproportionately the dumber criminals who are successfully convicted. In contrast, one might argue, the smarter criminals tend to avoid being successfully prosecuted and are thus less likely ever to see the inside of a prison cell in the first place. However, it is generally agreed that there is nevertheless some correlation between criminal behaviour and IQs in the low normal range.

‘Chosen People’?: A Memetic Theory of Judaism

Kevin MacDonald, A People That Shall Dwell Alone: Judaism as a Group Evolutionary Strategy, With Diaspora Peoples. Writers Club Press 2002.

Every people claims to be unique and in some sense, of course, the claim is true. But some people are more unique than others.” 

Pierre van den Berghe, The Ethnic Phenomenon (reviewed here).

Ethnocentrism is an innate and pan-human facet of human nature. Every ethnic group therefore regards itself as special and unique (see The Ethnic Phenomenon: which I have reviewed here).  

Viewed in this light, the Jewish claim to be special and unique (i.e. to be God’s chosen people) is, of itself, not so special and unique.

However, of all the ethnic groups in the world that claim to be special, Jews perhaps have the best claim to actually being justified in their self-assessment. 

The impact of the Jewish people on world history is vastly disproportionate to their numbers. The two largest world religions, Christianity and Islam, both derive ultimately, in large part, from Judaism, and Jews are vastly overrepresented among public intellectuals, Nobel Prize-winning scientists, celebrities, and multibillionaires.

Yet the most remarkable achievement of Jews is arguably their very survival as a people, despite conquest, banishment, persecution, successive pogroms, the Holocaust and almost two thousand years of diaspora, not to mention the recent trend towards secularization.[1] 

Thus, professor of evolutionary psychology (and alleged anti-Semite) Kevin Macdonald, in his book ‘A People That Shall Dwell Alone’ (henceforth, ‘PTSDA’), argues: 

From an evolutionary perspective, the uniqueness of… Jews lies in their being the only people to successfully remain intact and resist normal assimilative processes after living for very long periods as a minority in other societies” (p86). 

He therefore concludes: 

They [Jews] are the only group that has successfully maintained genetic and cultural segregation while living in the midst of other peoples over an extremely long period of time… ‘the most tenacious people in history’” (p76). 

Off the top of my head, I can think of only two other groups who might plausibly assert a competing claim to this mantle: 

  1. Upper-caste Hindus, whose ancestors supposedly subjugated India several millennia ago, and who are supposed to have created the caste system precisely so as to preserve their racial and ethnic integrity; and 
  2. The Romani people (aka Gypsies or Roma), who have lived in Europe for at least several hundred years but have maintained their separate identity and way of life, resisting assimilation into the mainstream. 

Indeed, regarding the former, one might even argue that this complete genetic and cultural segregation applies, not only to upper-caste Hindus, but to all Indian castes, since each is, at least in theory, expected to marry endogamously

Moreover, this applies, not just to the four hierarchically-organized varna, plus the untouchable dalits (not to mention pseudo-castes such as Parsis, themselves often considered India’s own middleman minority, and hence the subcontinental equivalent of the Jews in Europe), but also, again at least in theory, to each of the literally thousands of separate Jāti within each varna scattered across the subcontinent.

As a consequence, castes remain genetically distinguishable even today, with upper-caste Indians having greater genetic affinities with European populations, presumably a reflection of the Iranian, Indo-European origins of the Aryan invaders who settled and subdued the subcontinent, and are thought to have established the caste system (Bamshad et al 2001).

Indeed, to some extent, different castes are even distinguishable phenotypically, with upper-caste Indians having relatively lighter complexions (Jazwal 1979; Mishra 2017). Thus, varna, the Hindi word for caste, originally derives from the Sanskrit word for ‘colour’, possibly a reference to the lighter complexions of the Aryan invaders.[2]

In this light, it is perhaps no surprise that the second group listed above, namely the Romani (or ‘Gypsies’), themselves also trace their ancestry ultimately to the Indian subcontinent. Therefore, the Romani insistence on maintaining strict separation from the disdained ‘Gadjo’ outgroup, an aspect of their concern for ritual purity and cleanliness, is itself likely an inheritance from the Indian caste system. 

However, curiously, Macdonald characterizes “the caste system of India” as:

An example of a fairly open group evolutionary strategy… In India wealthy powerful males were able to mate with many lower-status concubines” (p31).[3]

In contrast, Macdonald claims, for Jews, all sexual contact with Gentiles was proscribed (p54-62). 

However, other biblical passages seemingly envisage the forced concubinage of foreign women (e.g. Deuteronomy 20:14Numbers 31:18). 

Macdonald acknowledges this, but argues that “although captured women can become wives, they have fewer rights than other wives”, citing the ease with which the divorce of foreign women captured as spoil is permitted under Deuteronomy 21:14 (p57). 

Similarly, with regard to the admonition in Numbers 31:18 to “keep alive for yourselvesMidianite virgins, Macdonald concludes, given the prohibition on actually marrying Midianites which is contained in the very same biblical Book (Numbers 25:6), that the offspring of such sexual unions would be illegitimate: 

The captured women will be slaves and/or concubines for the Israelite males [and] their children would presumably have lower status than the offspring of regular marriages” (p57-8).[4]

However, much the same was true of lower-caste women used as concubines by upper-caste men under the Indian caste system

Thus, in India, only the legitimate issue of upper-caste men inherit the caste status of their father, not illegitimate offspring fathered outside of wedlock with concubines. The offspring of unmarried lower-caste concubines instead inherit the caste status of their mothers, irrespective of their paternal lineage.

Therefore, at least in theory, the practice of concubinage would have no impact on the genetic composition, and ‘racial purity’, of the highest caste-group, namely the Brahmins.

In short, the concubinage envisaged in the Bible seems directly analogous to that practiced by upper-caste Indians under the caste system

Cultural Group Selection 

In ‘A People That Shall Dwell Alone’ (PTSDA), Kevin Macdonald explains Jewish survival and success through a theory of cultural group selection, whereby he conceptualizes Judaism as a group evolutionary strategy that functions to promote the survival and prospering of Jews throughout the diaspora. 

Macdonald is not here referring to group selection in the strict biological sense. Instead, he seems to have in mind, not biological, but cultural evolution.  

Thus, although he never uses the term, we might characterise his theory of Judaism as a memetic theory, in accordance with Richard Dawkins’ concept of memes as units of cultural evolution (see The Selfish Gene: which I have reviewed here). Macdonald’s avoidance of the term is perhaps on account of an animosity towards Dawkins, its originator, whom he credits with indoctrinating evolutionists against the view that groups have any important role to play in evolution (pviii). 

PTSDA is, then, a work, not of evolutionary psychology or human sociobiology, but rather of memetics

Thus, Dawkins famously described religions as Viruses of the Mind that travel between and infect human hosts just like biological viruses (Dawkins 1993). 

On this view, the success of a religion in surviving and spreading depends partly on its ‘infectiousness’. This, in turn, depends on the behaviours (or ‘symptoms’) that the infection produces in those whom it afflicts. 

Thus, proponents of Darwinian medicine contend that pathogens (e.g. viruses) produce symptoms like coughing, sneezing and diarrhoea precisely because such symptoms enable the pathogen to infect new hosts via contact with the bodily fluids expelled, as part of the pathogen’s own evolutionary strategy to reproduce and spread. 

Indeed, some pathogens even affect the brains and behaviours of their host, in such a way as to facilitate their own spread at the expense of that of their hosts. For example, rabies causes dogs and other animals to become aggressive and bite, which, of course, helps the virus spread to a new host, namely the individual who has been bitten.[5]

Similarly, successful religions also promote behaviours that facilitate their spread. 

Thus, Christians are admonished by scripture to save souls and preach the gospel among heathens; while Muslims are, in addition to this, admonished to wage holy war against infidels.[6]

These behaviours promote the spread of Christianity and Islam just as surely as coughing, sneezing and diarrhoea facilitate the spread of flu or the common cold. 

In short, a religion that commands its adherents to be fruitful and multiply, indoctrinate infants in the faith from earliest infancy, persecute apostates and actively convert nonbelievers will likely enjoy greater longevity than would a religion that commanded its adherents to be celibate hermits and taught that proselytism and having children are both mortal sins.[7]
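
To make the logic explicit, here is a minimal toy simulation of ‘memetic’ growth. Nothing in it comes from Macdonald or Dawkins: the parameter values are invented purely for illustration, and the model simply assumes that, each generation, a religion gains adherents through births and conversions and loses them through death and apostasy.

```python
# A toy model of 'memetic' growth. All parameter values below are
# hypothetical, chosen only to illustrate the qualitative contrast
# between an expansive and a celibate, weakly proselytizing strategy.

def project(adherents: float, birth_rate: float, conversion_rate: float,
            attrition_rate: float, generations: int) -> float:
    """Project the number of adherents after a given number of generations."""
    for _ in range(generations):
        gained = adherents * (birth_rate + conversion_rate)  # births + converts
        lost = adherents * attrition_rate                    # deaths + apostates
        adherents = adherents + gained - lost
    return adherents

# A 'be fruitful and multiply, and proselytize' strategy:
expansive = project(1000, birth_rate=0.4, conversion_rate=0.1,
                    attrition_rate=0.2, generations=20)

# A celibate, weakly proselytizing (Shaker-like) strategy:
celibate = project(1000, birth_rate=0.0, conversion_rate=0.05,
                   attrition_rate=0.2, generations=20)

print(f"Expansive strategy after 20 generations: {expansive:,.0f}")
print(f"Celibate strategy after 20 generations:  {celibate:,.0f}")
```

On these made-up numbers, the expansive strategy multiplies by 1.3 per generation and grows into the hundreds of thousands, while the celibate strategy shrinks by 15 per cent per generation and dwindles toward extinction – which is precisely the qualitative point at issue.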

Christianity and Islam are examples of the former type of religion and, no doubt partly for this reason, have spread around the world from inauspicious beginnings to become the two largest world religions. 

In contrast, religions which forbid proselytism and reproduction are few and far between, probably precisely because, even when they are founded, they do not survive long, let alone spread far beyond their originators. 

Macdonald quotes biologist Richard Alexander as citing the Shakers, an eighteenth-century Christian sect that practised strict celibacy, as an example of this latter type of religion – i.e. a religion which, because of its tenets, has today largely died out (p8). 

In fact, however, a small rump group of Shakers, the Sabbathday Lake Shaker Village, does survive in North America to this day, perhaps because, although celibate, they did apparently proselytize.[8]

In contrast, any religion which renounced both reproduction and proselytism would surely never have spread beyond its original founder or founders, and hence never even come to the attention of historians, or of theorists of religion like Alexander and Macdonald, in the first place. 

Judaism: A ‘Closed Strategy’ 

Judaism has also survived – indeed rather longer than has either Christianity or Islam. However, its numbers have not grown to the same degree. 

This is perhaps because, unlike Christianity and Islam, it adopted what Macdonald calls a ‘closed strategy’. 

In other words, whereas the Shakers renounced reproduction but practised proselytism, Jews did the exact opposite. 

Thus, the Israelites are repeatedly admonished by scripture to be fruitful and multiply (p51-4), marry within the faith (p54-62) and indoctrinate their offspring as believers from earliest infancy (p326-335). 

However, Jews do not actively seek converts. Likewise, they were forbidden to intermarry with Gentiles (e.g. Deuteronomy 7:3), and punished for so doing (e.g. 1 Kings 11:1-13). 

It is sometimes claimed that Judaism was once a proselyting religion. However, Macdonald dismisses this as “apologetics”, designed to deflect the charge that, in contrast to the universalism of Hellenism (and later of Christianity), Judaism was a parochial, particularist or even a racist religion (p92). 

Indeed, Macdonald even hints that the decision to admit converts at all reflected a desire to forestall and counter precisely this charge. 

Macdonald therefore characterizes the Jewish strategy as: 

Allow converts and intermarriage at a formal theoretical level, but minimise them in practice” (p97). 

Thus, Rabbinic attitudes towards proselytes ranged, at least in Macdonald’s telling, from ambivalent to overtly hostile. Prospective converts to Judaism are traditionally turned away by a rabbi three times before being accepted, required to devote considerable effort to religious study, and, if male, to undergo the brutal and barbaric practice of circumcision. 

However, contradicting himself somewhat, Macdonald also claims that the Israelites did forcibly convert conquered groups, notably the Galileans and the Nethinim, the latter representing, Macdonald argues, the descendants of conquered non-Israelite peoples. 

However, both these groups were, Macdonald claims, relegated to low status within the Jewish community, and subject to discrimination (p11). 

Indeed, this was, according to Macdonald, true of converts in general, who, even when they were admitted, faced systematic discrimination (p91-113). 

In particular, they were genetically quarantined from the core Jewish population, through restrictive marriage prohibitions, designed to maintain the “racial purity” of the core Jewish population, especially the priestly ‘kohanim’ line descended from Aaron

These restrictions remained in force for many generations, until all evidence of their alien origins had disappeared – an especially long time given the Jewish practice of maintaining genealogies (p119-127). 

“Racial Purity” 

Macdonald repeatedly refers to Judaism as designed to conserve the “racial purity” of the group, this very phrase, or variants on it, being used by Macdonald on over twenty different pages.[9]

Thus, for example, it was, Macdonald claims, perceived racial impurity, rather than theological differences, that explained the rift with the Samaritans (p59).[10]

“Racial Purity” is, of course, a phrase today more often associated with Nazis than with Jews. However, this apparently paradoxical link between the Jews and their principal persecutors of the twentieth century is, according to Macdonald, no accident. 

Thus, a major theme of Macdonald’s follow-up book, Separation and Its Discontents, is that: 

Powerful group strategies tend to beget opposing group strategies that in many ways provide a mirror image of the group which they combat” (Separation and Its Discontents: pxxxvii). 

Thus, Macdonald claims: 

There is an eerie sense in which National Socialist ideology was a mirror-image of traditional Jewish ideology. As in the case of Judaism, there is a strong emphasis on racial purity and on the primacy of group ethnic interests rather than individual interests. Like the Jews, the National Socialists were greatly concerned with eugenics” (Separation and Its Discontents: p194). 

In other words, Macdonald seems to be arguing that Judaism provided, if not the conscious model for Nazism, then at least its ultimate catalyst. Nazism was, on this view, ultimately a defensive, or at least reactive, strategy.

Indeed, Macdonald goes further, arguing that the ultimate source of Nazi race theory was not Wagner, Chamberlain or Gobineau, let alone Eckart, Rosenberg or Hitler himself, but rather the ethnically Jewish British Prime Minister Benjamin Disraeli, who, despite being a Christian convert and having married a Gentile, nevertheless considered the Jews a superior race, something he apparently attributed to their supposed racial purity. Thus, Macdonald quotes historian L.J. Rather as claiming:

“Disraeli rather than Gobineau—still less Chamberlain—is entitled to be called the father of nineteenth-century racist ideology” (Reading Wagner: quoted in Separation and Its Discontents: p180).

Jewish Genetics 

So, if the Jewish group evolutionary strategy is indeed focussed on maintaining the ethnic integrity and “racial purity” of the Jewish people, how successful has it been in achieving this end? 

Recent population genetic studies provide a new way to answer this very question. 

As a diaspora community with ostensible origins in the Middle East, but having lived for many generations alongside host populations with whom they were, at least in theory, forbidden to intermarry, save under certain strict conditions, the study of the population genetics of the Jews is of obvious interest to both geneticists and historians, not to mention many laypeople, Jewish and Gentile alike.  

Add to this the fact that many leading geneticists are themselves of Jewish ancestry, and it is hardly a surprise that the study of the genetics of contemporary Jewish populations has become something of a cottage industry within population genetics in recent years.[11]

Unfortunately, however, Kevin Macdonald’s ‘A People That Shall Dwell Alone’ was first published in 1994, some years before any of this recent research had been published.[12]

Therefore, in attempting to assess the success of the Jewish population in reproductively isolating themselves from the host populations amongst whom they have lived, Macdonald is forced to rely on studies measuring, not genes themselves, but rather their indirect phenotypic expression, for example studies of blood-group distributions and fingerprint patterns (p34-40). 

Nevertheless, recent genetic studies broadly corroborate Macdonald’s conclusions regarding: 

  1. The genetic distinctness of Jews; 
  2. Their Middle Eastern origins; and 
  3. The genetic affinities among widely dispersed Jewish populations – including the Ashkenazi Jews, Sephardi Jews, Mizrahi Jews, and possibly even the Lemba of Southern Africa (but not the Beta Israel of Ethiopia).[13]

However, this is true only with one major proviso – namely, that the Ashkenazim, who today constitute the vast majority of world Jewry, trace a substantial part of their ancestry to Southern Europe (Atzmon et al 2010).[14]

Interestingly, comparison of the mitochondrial DNA and Y-chromosome ancestry of Ashkenazim, passed down the female and male lines respectively, suggests that most of this European ancestry ultimately derives from Jewish men marrying (or at least mating with) Gentile women, and their offspring being incorporated into the Jewish population (Costa et al 2013): the Y chromosomes of Ashkenazim remain largely Middle Eastern in origin, whereas much of their mitochondrial DNA appears to be European. 

This is perhaps ironic given that, according to traditional rabbinic law, Jewish identity is, at least in theory, traced down the female line

Economic Success 

Macdonald identifies various elements of the Jewish group evolutionary strategy that have enabled Jews to repeatedly economically outcompete Gentile host populations. These include: 

  1. High levels of collectivism and ethnocentrism
  2. Emphasis on education and high-investment parenting (e.g. the stereotypical Jewish mother); 
  3. High levels of intelligence. 

Collectivism

Macdonald characterizes Judaism as “hyper-collectivist”, in accordance with the distinction between collectivist and individualist cultures formulated by Harry Triandis in Individualism and Collectivism (p353). 

‘Collectivism’ refers to a tendency for a person to regard their group membership and ethnic identity as an important part of their sense of self, and to elevate the interests of the group above those of the individual, sometimes to the point of willing self-sacrifice. 

Macdonald regards this tendency towards collectivism and indeed to ethnocentrism as at least partly genetic in origin, although accentuated by rearing practices in which Jews are encouraged to identify with the in-group (p54-62). 

Partly, he claims, this genetic predisposition to collectivism is an inheritance from the Middle East, the region from which Jews trace (some of) their ancestry. In the Middle East, Macdonald claims, all groups are relatively collectivist and ethnocentric, at least compared to Europeans. 

This seems plausible given the tribal structure of, and the tribal and ethnic conflict seemingly endemic throughout, much of the region. 

Actually, it would be more accurate to say, not that Middle Eastern populations are especially collectivist or ethnocentric, but rather that Europeans are unusually individualist, since, viewed in global perspective, it is clearly we Europeans who are the WEIRD’ ones in this respect.[15]

One might imagine that, at least for the Ashkenazim (and perhaps Sephardi Jews too), both living among Europeans and, to some extent, acculturating to their norms, not to mention, as we have seen, incorporating a significant proportion of their genes through interbreeding with Europeans, might have attenuated, moderated or diluted these alleged ethnocentric and collectivist impulses, at least as compared to those populations who remained resident in the Middle East. 

However, Macdonald makes no such concession. On the contrary, he argues that, far from Jews being less collectivist and ethnocentric than other Middle Eastern populations, Jews actually remain especially collectivist, even when compared to other Middle Eastern groups. Moreover, he claims that this tendency long predates, though it has not been noticeably moderated since, the Exile.[16]

Thus, even in ancient times, Macdonald observes:

“Jews alone of all the subject peoples in the Roman Empire engaged in prolonged, even suicidal wars against the government in order to attain national sovereignty… [and] only… Jews, of all subject peoples were exempt from having to sacrifice to the Empire’s Gods, and… were… allowed its own courts and… ex officio government” (p356-8).[17]

This tendency towards ethnocentrism was augmented through strict prescriptive endogamy (i.e. marrying within the group), which increases the level of relatedness between group members, and hence facilitates cooperation and trust (p54-62).

In addition to endogamy, a further factor is a preference for consanguineous marriage (i.e. marriage between relatives), which again increases relatedness within the group, and hence further facilitates cooperation and trust – but also, over time, threatens to divide the group into separate, inbred, endogamous lineages, with loyalty only to themselves. 

This is, again, like endogamy, a common feature of marriage throughout the Middle East. However, whereas Muslims, Arabs and other Middle Eastern groups typically favour cross-cousin marriage, the Jews, Macdonald reports, extolled, in particular, uncle-niece marriage – a practice probably even more distasteful to contemporary western sensibilities, not so much because of the greater degree of relatedness as on account of the generational difference, and hence the likely age-disparity. Jews were therefore sometimes exempted from Christian laws prohibiting such unions (p118-9).[18]

As evidence of Jewish clannishness, Macdonald cites what he calls the ‘double-standards’ imposed by Judaic law. 

The most famous example relates to usury. Whereas Christians were forbidden outright to lend money at interest, Jews interpreted the same biblical passages as forbidding only the lending of money at interest to other Jews.[19]

Yet, ironically, this double-standard actually benefited its ostensible victims, since it gave Jews an incentive to lend money to Gentiles in the first place, and the resulting availability of capital for investment was probably a major factor in the economic growth of the West and its rise to world dominance.[20]

Other prohibitions, however, evinced greater economic understanding. Thus, Macdonald reports, Jews were not permitted to encroach upon the monopolies of other Jews, or to undercut other Jews, but only where the customer base was Gentile – if the customer base was Jewish, then competition was to be free, so as to drive down prices and thereby benefit consumers (p227-230).

However, although Macdonald cites such laws as evidence of the alleged clannishness and ethnocentrism of Jews, such racially or ethnically discriminatory legal provisions are hardly exclusive to Jewish law.

On the contrary, at least prior to modern times, such discriminatory laws may even have been the norm, at least where people of different religions or ethnicities lived alongside one another under the same set of laws and the same rulers.

Indeed, such laws are to be found, not only among the allegedly more collectivist cultures of the Middle East (e.g. the second-class Dhimmī status accorded Christians and Jews living in Muslim societies), but even among ostensibly more individualistic Northern Europeans (for example, the status of Catholics in Ireland under the Protestant Ascendancy, or indeed of Jews themselves under Medieval Christendom).

Thus, one well-known example comes from the famous legal code issued by Ine of Wessex, a late-seventh to early-eighth century King of Wessex, a leading Anglo-Saxon kingdom in the South of England. These laws prescribed that the compensation (‘weregild’) payable to relatives for causing the death of an indigenous Briton was to be less than half of that payable in relation to the death of an Anglo-Saxon.

Macdonald acknowledges that the more egregious examples of this ‘dual morality’ (e.g. “while the rape of an engaged Israelite virgin was punishable by death, there was no punishment at all for the rape of a non-Jewish woman”: p228) were tempered from the medieval period onward. 

However, this was done, he insists, only “to prevent ‘hillul hashem’ (disgracing the Jewish religion)” (p229). 

In other words, Macdonald seems to be saying that even the abolition of such practices was done in the interests of Jews themselves, in order to forestall, or avoid inciting, anti-Semitism, should such laws become widely known among Gentile audiences. 

This, though, means that his theory comes close to being unfalsifiable.

Thus, if an aspect of Judaism involves favouring Jews at the expense of non-Jews, then this, of course, supports Macdonald’s contention that Judaism is a group evolutionary strategy centred on maximizing the success and prospering of Jews and of Judaism. 

But if, on the other hand, an aspect of Jewish teaching actually involves tolerance for or even altruism towards Gentiles, then this also, according to Macdonald, supports his theory, because it is, in his view, a mere public relations exercise aimed at deceiving Gentile audiences into viewing Jews and Judaism in a benign, non-threatening light.  

On this interpretation, it is difficult to see just what kind of evidence would falsify or be incompatible with Macdonald’s theory.[21]

Thus, Macdonald’s theory comes close to being a conspiracy theory. 

Indeed, if one were to go through the whole of Macdonald’s so-called ‘Culture of Critique trilogy’ replacing the words “Jewish group evolutionary strategy” with the words “Jewish conspiracy”, it would read much like traditional anti-Semitic conspiracy literature. 

Collectivism and Capitalism 

Ironically, the Jewish tendency towards collectivism gave Jews a particular economic advantage in quintessentially individualist Western capitalist economies. 

Thus, in terms of game theory, a society otherwise composed entirely of atomized individualists, with no strong preference for one trading partner over another, is obviously vulnerable to invasion by a collectivist group with strong in-group bias, whose members, by preferentially favouring one another, would, all else being equal, outcompete the individualists and gradually come to dominate the economy. 

Thus, Macdonald writes: 

“Jewish economic activity has historically been characterized by high levels of within-group economic cooperation and patronage. Jewish elites overwhelmingly tended to employ other Jews in their enterprises” (p220). 

Indeed, even in pre-capitalist times, Macdonald notes: 

“The importance of highly placed courtiers in the general fortunes of the entire Jewish community” (p220). 

Moreover, kinship ties and a common language (Yiddish) meant that Jews had business links and lines of credit that crossed international boundaries, giving them an advantage in an already increasingly globalized economy. 
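
The game-theoretic intuition sketched above can be made concrete. Below is a minimal toy simulation – my own illustration, with arbitrary parameter values, not anything drawn from Macdonald – in which ‘collectivists’ preferentially find in-group trading partners and earn a bonus from in-group trades, while ‘individualists’ trade impartially:

```python
def simulate(rounds=200, x=0.05, base=1.0, bonus=0.5, assortment=0.8):
    """Toy replicator dynamics. `x` is the population share of collectivists,
    who find an in-group trading partner with probability `assortment`
    (otherwise they match at random); in-group trades pay `bonus` extra.
    Individualists trade impartially and always earn `base`."""
    history = [x]
    for _ in range(rounds):
        p_in = assortment + (1 - assortment) * x   # chance partner is in-group
        payoff_c = base + bonus * p_in             # expected collectivist payoff
        payoff_i = base                            # expected individualist payoff
        mean = x * payoff_c + (1 - x) * payoff_i
        x = x * payoff_c / mean                    # share grows with relative payoff
        history.append(x)
    return history

h = simulate()
print(f"collectivist share: {h[0]:.0%} -> {h[-1]:.0%}")  # small minority -> majority
```

Because the collectivists’ expected payoff strictly exceeds the individualists’ whenever the assortment and bonus are positive, their population share rises monotonically – which is all the ‘invasion’ argument above requires.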

Middleman Minorities? 

One concept central to understanding the economic, social and political position of Jews in their host societies is that of the middleman minority group.

Yet Jews are by no means the only ethnic group to have occupied this social and economic niche.  

Indeed, although Jews are often regarded as the quintessential exemplar of a middleman minority, this is arguably a western-centric perspective. Other ethnicities occupying an analogous economic niche in their host societies include the Lebanese in West Africa, South Asians in East Africa, and the overseas Chinese in much of Southeast Asia.

As Thomas Sowell, an economist, leading American conservative intellectual and long-term student of ethnic relations in comparative cross-cultural perspective, observes in his essay ‘Are Jews Generic?’:

“Although the overseas Chinese have long been known as ‘the Jews of Southeast Asia’, perhaps Jews might be more aptly called the overseas Chinese of Europe” (Black Rednecks and White Liberals: p84). 

Thus, the overseas Chinese dominate the economies of Southeast Asia to a far greater extent than Jews have ever dominated the economy of any western country, save in the imaginings of the most paranoid of anti-Semitic conspiracy theorists; and, again like the Jews in Europe, they have been the subject of ongoing resentment combined with periodic persecution (see Amy Chua’s World on Fire).[22]

Yet Jews acted, not only as economic middlemen (e.g. bankers, moneylenders, peddlers, wholesalers), but also as, if you like, ‘political middlemen’ – i.e. intermediaries between rulers and their subjects. 

Thus, for Macdonald, the quintessential Jewish role in host cultures was one that combined both of these roles, namely that of the tax farmer: 

“The prototypical Jewish role as an instrument of governmental oppression has been that of the tax farmer” (p175). 

Tax farmers were private agents responsible for collecting taxes on behalf of a ruler who, in return for this service, received a cut of the monies collected. The tax farmer therefore had a direct incentive to extract the maximum taxes possible so as to maximise his own profits. 

According to Macdonald, Jews’ status as strictly endogamous aliens perfectly preadapted them for this role: 

“Precisely because their interests, as a genetically segregated group, were maximally divergent from those of the exploited population… [they would have] no family or kinship ties (and thus no loyalty) to the people who were being ruled” (p172). 

They could therefore be entrusted to extract maximum revenue with all necessary ruthlessness. 

He even discovers a biblical precursor to this role, namely Joseph from the Book of Genesis, claiming: 

“The archetype of the well placed courtier who helps other Jews, while oppressing the local population, is Joseph in the biblical account of the sojourn in Egypt” (p175).  

Thus, in the famous biblical story, Joseph, by building up stockpiles of grain and selling it back to the Egyptians during famine, ultimately reduced the latter to servitude (p175; Genesis 47:13-21).[23]

Thus, while the masses usually resented Jews, ruling elites often acted as patrons and protectors. 

However, protection could only go so far, and Jews also served another vital function for elites, namely to act as a convenient scapegoat in times of revolt and rebellion. 

As Sowell puts it in the same essay:

“Because the middleman is essential to the overlords, these rulers may protect him when necessary from overt violence. On the other hand, during periods when resentments reach the point where the governing powers themselves are at some risk, nothing is easier than to throw the middleman minority to the wolves and not only withdraw protection but even incite the mobs in order to direct their anger away from the overlords” (Black Rednecks and White Liberals: p69).

Thus, Pierre van den Berghe observes that, since middleman minority groups “deal more directly and frequently with the masses than the upper class” and are ethnically alien, it is they, not the ruling elite itself, who “become primary targets of hostility by the native masses… and are blamed for the system of domination they did nothing to create” (The Ethnic Phenomenon, reviewed here: p145). 

Thus, Macdonald quotes Hubert Blalock, in Toward a Theory of Minority-Group Relations, as observing: 

“The price the [middleman] minority pays for protection in times of minimal stress is to be placed on the front lines of battle in any showdown between the elite and the peasant groups” (quoted: p173).

Jews’ IQs?

Another factor contributing to Jewish economic success is their high intelligence.  

I have discussed the topic of Jewish intelligence in a previous post.

The subject of Jewish IQs, unlike other postulated race differences in intelligence, recently became a semi-respectable, if politically incorrect, topic of polite, and not so polite, conversation with the publication of a paper, championed by Steven Pinker, proposing that Ashkenazi Jews in particular have evolved high intelligence, and that this intelligence is mediated in part through the same genetic mutations that result in higher rates of certain genetic diseases among Ashkenazim, such as Tay-Sachs, through a form of heterozygote advantage (Cochran et al 2005). 
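
The logic of heterozygote advantage is worth spelling out, since it shows how even a homozygote-lethal disease allele can persist indefinitely. The following is the textbook overdominance result, not anything specific to Cochran et al’s paper, with s and t denoting the fitness penalties of the two homozygotes relative to the heterozygote:

```latex
% Overdominance: relative genotype fitnesses
w_{AA} = 1 - s, \qquad w_{Aa} = 1, \qquad w_{aa} = 1 - t
% Stable equilibrium frequency of the a allele
\hat{q} = \frac{s}{s + t}
% Homozygote-lethal special case (t = 1)
\hat{q} = \frac{s}{s + 1} \approx s \quad \text{for small } s
```

Thus a heterozygote fitness edge of just a couple of percent suffices to maintain a recessive lethal at an allele frequency of roughly the same couple of percent – broadly the order of magnitude reported for the Tay-Sachs allele among Ashkenazim.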

Interestingly, Macdonald has a claim to having anticipated Cochran et al’s theory in PTSDA, where he writes: 

“Eldridge (1970; see also Eldridge & Koerber 1977) suggests that a gene causing primary torsion dystonia, which occurs at high levels among Ashkenazi Jews, may have a heterozygote advantage because of beneficial effects on intelligence. Further supporting the importance of selective processes, eight of the 11 genetic diseases found predominantly among Ashkenazi Jews involve the central nervous system, and three are closely related in their biochemical effects (see Goodman 1979, 463)” (p36).[24]

Despite his reputation as an anti-Semite, Macdonald’s estimate for the average IQ of Ashkenazi Jews is actually even higher than that of Cochran et al and indeed most other researchers on the topic.[25]

Thus, he estimates the average Ashkenazi IQ at a whole standard deviation above the white Gentile mean – i.e. 15 IQ points, or roughly the same as the IQ difference between black and white Americans.

However, despite the famous g factor (i.e. the correlation between scores for all different types of intelligence – verbal, spatial, mathematical etc.), Macdonald reports a massive difference between the verbal and spatio-visual IQs of Jews, with Ashkenazi Jews scoring only about the same as the white European average for spatio-visual ability, but almost two standard deviations higher in verbal intelligence (p290).[26]

This, then, may explain the relative paucity of famous Jewish engineers, or even architects, as compared to Jewish overrepresentation in other spheres of achievement. As MacDonald puts it:

“This, together with the fact that Jewish entrepreneurs and financiers sometimes lent their financial and business skills to promote, market and profit from the innovations of Gentile engineers, lent superficial credence to the anti-Semitic charge that ‘Jews were not innovators, but only appropriated the innovations of others’” (p291).[27]
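
It is worth spelling out what mean differences of this size imply at the tails of the distribution, from which eminent scientists, engineers and financiers are disproportionately drawn. The sketch below is purely illustrative: it assumes normal distributions with equal standard deviations of 15, and the means and the conventional ‘gifted’ cutoff of 130 are round numbers of my own choosing, not figures from Macdonald:

```python
import math

def frac_above(cutoff, mean, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above `cutoff`."""
    return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2)))

for mean in (100, 115):                     # hypothetical group means, one SD apart
    print(f"mean {mean}: {frac_above(130, mean):.1%} above 130")

# A one-SD mean advantage multiplies representation beyond the cutoff ~7x:
print(round(frac_above(130, 115) / frac_above(130, 100), 1))
```

The general point – that modest differences in means produce large overrepresentation ratios at the tails – applies equally to the verbal versus spatio-visual profile differences discussed above.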

Eugenics? 

If a component of the Jewish group evolutionary strategy, and of Jewish economic success, is Jews’ high level of intelligence, how exactly did they obtain and maintain this high level of intelligence? Macdonald attributes the higher average IQ of Jews primarily to what he terms “eugenics” (p275-88). 

As evidence he cites various Rabbinic quotations regarding the desirability of marrying the daughter of a scholar, or marrying one’s daughter to a scholar, some of which seem to recognize, sometimes implicitly, sometimes almost explicitly, the heritability of intellectual ability (e.g. p275; p278; p281). 

This accords with what Steven Pinker rather disparagingly terms the Jewish ‘folk theory’ of Jewish intellectual ability, namely:

“The weirdest example of sexual selection in the living world: that for generations in the shtetl, the brightest yeshiva boy was betrothed to the daughter of the richest man, thereby favoring the genes, if such genes there are, for Talmudic pilpul” (Pinker 2006).

In addition, Macdonald also observes that wealthy Jews generally had more surviving offspring than poor Jews and infers that this would produce an increase in intelligence levels, because wealth is correlated with intelligence. 

However, this pattern surely existed among all ethnic groups prior to the demographic transition and the development of effective contraception and the welfare state, which disrupted the usual association between wealth and fertility.

Thus, even in the absence of polygyny, the rich had higher numbers of surviving offspring, if only because only they could afford to feed and care for so many children. 

However, among Jews, wealth may have been especially correlated with intelligence, because most were concentrated in occupations requiring greater intellectual ability (e.g. moneylending rather than farm labouring).[28]

Poor Jews, meanwhile, were often the victims of substantial discrimination, sometimes including restrictions on their ability to marry, which, he infers, may have motivated them to abandon Judaism. Thus, their genes were lost from the Jewish gene pool. 

However, he provides no hard data showing that it was indeed relatively less well-off Jews who did indeed abandon Judaism in greater numbers. 

Moreover, in an earlier chapter on the alleged ‘clannishness’ of Jews, he discusses Jewish charity directed towards less well-off Jews, which may have represented an incentive for poor Jews to remain within the fold (p234-241). 

More plausible is Macdonald’s claim that Jews low in the personality trait known to psychometricians as conscientiousness may have been more prone to defect from the fold, because they lacked the self-discipline to comply with the incredible ritual demands that Judaism imposes on its adherents (p312-9). 

Religious Scholarship 

Whereas Jewish religious scholars were apparently much favoured as husbands, celibacy was imposed on many Christian religious scholars. As Francis Galton first surmised, this may have had a dysgenic effect on intelligence among Christians.

Of course, today, religious scholarship is not regarded as an especially intellectually demanding field, nor arguably even an academically respectable one. Indeed, Richard Dawkins is even said to have disparaged theology as “not a real subject at all”. 

Moreover, there is a well-established inverse correlation between religiosity and IQ (Zuckerman et al 2013). 

My own view is that theology is indeed a real subject, just a rather silly and unimportant one, akin to what Dawkins has elsewhere dubbed the hypothetical field of ‘fairyology’ (i.e. the academic study of the nature of fairies). 

However, just because a subject-matter is silly and unimportant does not necessarily mean that it is intellectually undemanding. These are two separate matters. 

Moreover, in the past, theology may have been the only form of scholarship it was safe for intellectually-minded Jews, Christians or even closet atheists to undertake. 

After all, anyone taking it upon himself to investigate more substantial matters, such as whether the Earth orbited the Sun or vice versa, was in danger of being burnt at the stake if he reached the wrong conclusion – i.e. the right conclusion.[29]

Untestable Panglossianism? 

Macdonald tends to view every aspect of Judaism as perfectly designed to ensure the survival and prospering of the Jewish people. Often, however, this is questionable. 

For example, Macdonald describes the special status accorded the Tribe of Levi, and the priestly Aaronite (Kohanim) line, as “from an evolutionary perspective… a masterstroke because it resulted in the creation of hereditary groups whose interests were bound up with the fate of the entire group” (p385).  

Thus, he contends: 

“The presence of the priesthood among the Babylonian exiles and its absence among the Syrian exiles [i.e. the fabled lost tribes] from the Northern Kingdom may explain why the latter eventually… assimilated and the former did not” (p394).

However, one could just as plausibly argue that this arrangement, especially the hereditary right of the Levite priestly caste to payment from the other tribes, would produce resentment in other tribes and hence division. 

Again, this suggests that MacDonald’s theory is unfalsifiable.

Conscious Design or Random Mutation? 

In biological evolution, adaptations emerge without conscious design, through random mutation and selection.  

A similar process of selection may have occurred among rival religions: some, like the Shakers, die out; others, like Christianity, Judaism and Islam, survive and spread. 

However, religions are also consciously created by their founders – i.e. by figures such as Muhammad, Joseph Smith, Zoroaster, L. Ron Hubbard, Jesus and Saul of Tarsus. 

Thus, although Macdonald is an atheist and evolutionist, with respect to Judaism he seems to be something of a creationist. 

Thus, he writes that, although Moses, like Lycurgus of Sparta, may have been mythical, the systems developed in their respective names “have all the appearance of being human contrivances” (p395). 

Thus, Macdonald seems also to envisage that the teachings of Judaism were indeed consciously designed with the survival and prospering of the Jews in mind. 

Indeed, there were likely, he suggests, multiple authors. Thus, Macdonald argues that: 

“The Israelite system has been so successful in its persistence precisely because crucial aspects of the strategy were continually changed… to meet current contingencies” (p396).[30]

Thus, Jewish writings authored in Exile (e.g. the Talmud) extol very different values from the martial ones celebrated in the Books of Deuteronomy and Joshua, authored when the Jews were, if not independent, at least still resident in Palestine; while the twentieth-century establishment of the state of Israel presaged, once again, Macdonald reports, “a return to military values” (p318). 

Yet, in proposing that the Jewish evolutionary strategy was consciously designed by its formulators, Macdonald credits the authors of the Biblical texts with remarkable judgement and foresight. 

It also casts them in the role of a sort of metaphorical, premodern Elders of Zion.

This suggests, once again, that Macdonald’s thesis comes close to a conspiracy theory. 

Indeed, as I have already noted, if one were to go through Macdonald’s work replacing the words “Jewish group evolutionary strategy” with the words “Jewish conspiracy” then it would read much like traditional anti-Semitic conspiracy literature.[31]

Cultural or Biological Evolution? 

Since Judaism represents what Macdonald terms a ‘closed’ group strategy, it has the effect, not only of ensuring the survival of Judaism as a religion, but also of ensuring the survival of the Jewish people and their genes. 

Sometimes, this makes Macdonald’s theory read more like a theory of biological evolution than of cultural evolution or memetics. For example, he repeatedly talks of the Jewish group strategy as being designed to conserve “Jewish genes” and, as we have seen, preserve the racial purity of the group. 

This could cause confusion. Indeed, I suspect Macdonald has even managed to confuse himself. 

Thus, in his opening chapter, Macdonald emphasizes that: 

“Strategizing groups can range from complete genetic segregation from the surrounding population to complete panmixia (random mating). Strategizing groups maintain a group identity separate from the population as a whole but there is no theoretical necessity that the group be genetically segregated from the rest of the population” (p15). 

Also consistent with this, Macdonald writes: 

“At a theoretical level… a group strategy does not require a genetic barrier between the strategizing group and the rest of the population. Group evolutionary strategies may be viewed as ranging from completely genetically closed… to genetically open” (p15; see also p27). 

However, in a later chapter, Macdonald seems to contradict himself, writing: 

“In order to qualify as an evolutionary strategy, genetic segregation must be actively maintained by the strategizing group” (p85). 

This suggests that ‘open strategies’ like Christianity, Islam, and Shakerism cannot qualify as ‘group evolutionary strategies’, and hence reduces the applicability, and hence, in my view, the usefulness, of the concept. 

Towards a ‘Culture of Critique’? 

Most problematically, this confusion carries over into The Culture of Critique (reviewed here), Macdonald’s more (in)famous sequel to the present work, in which he envisages even secular intellectuals of Jewish ethnicity, including Marxists, Freudian psychoanalysts and Boasian cultural anthropologists, as somehow continuing to pursue a Jewish group evolutionary strategy even though they have long since abandoned the religion in whose teachings this group evolutionary strategy is ostensibly contained. 

Yet, if the Jewish group evolutionary strategy is encoded, not in Jewish genes, but rather in the teachings of Judaism, how can secular Jews – some of whom have abandoned the religion of their forebears, while others, raised in secular households, were never exposed to it in the first place – somehow continue to pursue this group evolutionary strategy? 

The Culture of Critique, then, seems to be fundamentally theoretically flawed from the outset (see my review here). 

In contrast, ‘A People That Shall Dwell Alone’ represents a tenable and, in some respects, persuasive explanation of the survival and success of the Jewish people over the centuries, and it is regrettable that its reputation has been somewhat tarnished and overshadowed by Macdonald’s more recent writings, reputation and political activism. 

Antisemitic? 

A final issue must also be addressed – namely, is Macdonald’s ‘A People that Shall Dwell Alone’ an anti-Semitic work? Certainly, in the light of Macdonald’s subsequent writing on the Jews, and his political activism, it has been retrospectively characterized as such. 

Indeed, even at the time he authored the book, Macdonald was sensitive to the charge, insisting on the opening page of his Preface: 

“I believe that there is no sense in which this book may be considered anti-Semitic” (xcvii). 

In contrast, in the sequel, Separation and Its Discontents, Macdonald does not deny the charge of anti-Semitism, but rather predicts that this charge will be levelled at his work, and indeed concludes that it is entirely compatible with his theory of Judaism as a group evolutionary strategy that it would be:

“The charge that this is an anti-Semitic book is… expectable and completely in keeping with the thesis of this essay” (Separation and Its Discontents: pxxxvi). 

Most recently, in the Preface to the First Paperback Edition of The Culture of Critique (reviewed here), the last work in Macdonald’s trilogy, the most (in)famous and, in my view, also the least persuasive, Macdonald comes very close to admitting the charge of anti-Semitism, writing: 

“Whatever my motivations and biases, I would like to suppose that my work on Judaism at least meets the criteria of good social science, even if I have come to the point of seeing my subjects in a less than flattering light” (Culture of Critique: plxxix). 

Yet, here, Macdonald is surely right. 

The key question is not whether Macdonald himself is anti-Semitic, nor even whether his books are themselves anti-Semitic (whatever that means), or are liable to provoke anti-Semitism in others. Rather, it is whether his theory is true – or, rather, provides a useful and productive model of the real world. 

Moreover, it bears emphasizing that any evolutionary theory is necessarily cynical. 

All organisms evolve to promote their own survival, often if not always at the expense of competitors. Likewise, superorganisms, including ‘cultural group strategies’, also evolve to promote their own survival, often at the expense of other groups and other individuals. 

Indeed, as Macdonald shows in Separation and Its Discontents, this is no less true of anti-Semitic movements, such as medieval Christianity or National Socialism, than it is of Judaism itself (p1-2). 

Interestingly, in an even more recent speech/essay, Macdonald returns to denying the charge of anti-Semitism, instead professing: 

“I greatly admire Jews as a group that has pursued its interests over thousands of years, while retaining its ethnic coherence and intensity of group commitment” (Macdonald 2004).[32] 

Moreover, as the title of this speech (‘Can the Jewish Model Help the West Survive?’) suggests, Judaism, as a successful ‘closed’ group strategy, might, he proposes, even provide a useful model for the contemporary West. 

In other words, for the West, and white westerners in particular, to survive amidst globalization, mass immigration, below-replacement fertility and gradual demographic displacement even in our own indigenous homelands, perhaps white Americans, and white Europeans, must, in imitation of Judaism, develop a new, and rather less ‘open’, group evolutionary strategy of our own. 

Endnotes

[1] Indeed, ironically, even the very first definite textual and archaeological reference to the Jews is a reference to their ostensible destruction, namely the Merneptah Stele, dated to the late Second Millennium BCE, which reads, in part, “Israel is laid waste and his seed is no more”. Yet some three thousand years later, the Jewish people survive and thrive, still practising a continuation of the same religion, while Egypt itself has long been relegated to a global backwater. As Twain is apocryphally quoted as observing in response to his own obituary, reports of Israel’s demise were greatly exaggerated.

[2] In fact, although varna is undoubtedly the Sanskrit word for ‘colour’, recent attempts have been made to deny a connection with skin colour. Thus, the latest version of the Encyclopædia Britannica entry for ‘varna’ argues that the idea that:

“Class distinctions were originally based on differences in degree of skin pigmentation between an alleged group of lighter-skinned invaders called ‘Aryans’ and the darker indigenous people of ancient India… has been discredited since the mid-20th century.”  

Instead, the authors of this entry argue: 

“The notion of “colour” was most likely a device of classification.” 

In support of this interpretation, it is notable that, in discussing Georges Dumézil’s trifunctional hypothesis with respect to the original proto-Indo-Europeans, from whose social divisions the four-varna system of India likely developed, David W. Anthony writes: 

“The most famous definition of the basic divisions within Indo-European society was the tripartite scheme of Georges Dumézil, who suggested there was a fundamental three-part division between the ritual specialist or priest, the warrior and the ordinary herder/cultivator. Colors may have been associated with these three roles: white for the priest, red for the warrior and black or blue for the herder/cultivator” (The Horse, the Wheel and Language: p92). 

Similarly, leading Indo-Europeanist JP Mallory observes:

“Indo-Iranian, Hittite, Celtic and Latin ritual all assign white to priests and red to the warrior. The third function would appear to have been marked by a darker colour such as black or blue” (In Search of the Indo-Europeans: p133).

Likewise, Mallory also observes that “both ancient India and Iran expressed the concept of caste with the word for colour” (In Search of the Indo-Europeans: p133).

These commonalities suggest that the association of caste with colour predated the conquest of the Indian subcontinent by Indo-Europeans, and therefore cannot have been a reference to the lighter complexion of the Indo-European conquerors as compared to the subjugated indigenous Dravidian peoples.

On the other hand, however, given the increasing genetic support for the Aryan invasion theory of the populating of the subcontinent, and continued caste differences in complexion and skin colour, the idea that the term ‘varna’ was at least in part a reference to differences in skin colour cannot be ruled out.

Moreover, it is notable that, although ostensibly based on clothing not skin tone, even in the colour schemes outlined by Anthony and Mallory in the passages quoted above, it is the relatively higher caste groups that are associated with lighter colours (e.g. priests with white) and the lower status groups (e.g. herders/commoners) with darker colours (e.g. black or blue).

Part of the reason for the persistent denial of an association with skin colour seems to be a distinctively Indian version of political correctness, since the idea of an Aryan conquest, and of a link with lighter complexion, is associated in India both with notions of racial supremacy and with caste snobbery. In fact, however, it was presumably the earlier indigenous pre-Aryan Dravidian populations who were responsible for founding one of the world’s earliest civilizations, so there is no reason to think of the Aryan invaders as in any way racially superior. On the contrary, like later waves of nomadic horse warriors who originated in the Eurasian Steppe but, with their mastery of the horse, subjugated more advanced civilizations (e.g. the Mongols and Huns), the proto-Indo-Europeans may have been militarily formidable but were, aside from their mastery of the chariot, otherwise culturally and technologically backward barbarians.

[3] This claim, namely that the Indian caste system represents a “fairly open” group evolutionary strategy, seems to me to be contrary to all the historical, and the genetic, evidence. For example, even Gregory Clark’s recent The Son Also Rises, which uses surname analysis to determine rates of social mobility, finds that, until very recently, India had exceptionally, indeed uniquely, low rates of social mobility as compared to anywhere else in the world.

[4] Since Jewish identity is traditionally passed down the female line, the offspring of non-Jewish concubines and Jewish males would not qualify as Jewish, unless either the mother, or the offspring him or herself, had formally converted. However, this idea first finds scriptural authority in the Mishnah, compiled in the Tannaitic period, i.e. the first couple of centuries of the Common Era. It therefore appears to be an innovation of Rabbinic Judaism, and hence of little if any relevance to the interpretation of the passages quoted by Macdonald from the Books of Numbers and of Deuteronomy, which, as part of the Pentateuch (i.e. the first five books of the Hebrew Bible), were composed many centuries earlier. Indeed, some evidence suggests that originally Jewish identity was passed down the male line, and that this was only later altered in the early Tannaitic era.

[5] There are more dramatic examples of behavioural manipulation of hosts by pathogens. For example, one parasite, Toxoplasma gondii, when it infects a mouse, reduces the mouse’s aversion to cat urine, which is theorized to increase the risk of its being eaten by a cat, hence facilitating the reproductive life-cycle of the pathogen at the expense of that of its host. Similarly, the fungus Ophiocordyceps unilateralis turns ants into so-called ‘zombie ants’, who willingly leave the safety of their nests, and climb and lock themselves onto a leaf, in order to facilitate the life cycle of their parasite at the expense of their own. Likewise, Dicrocoelium dendriticum (aka the lancet liver fluke) causes the ants whom it infects to climb to the tip of a blade of grass during daylight hours, increasing the chance they will be eaten by cattle or other grazing animals, again facilitating the next stage of the parasite’s life-history.

[6] For example, the Islamic promise that martyrs will receive 72 virgins in paradise seems perfectly designed to encourage young, unmarried males, excluded from reproduction in the polygynous mating milieu of Islam, where there are inevitably not enough fertile females to go around, to risk their lives or even commit suicide attacks in the name of holy war. Such an afterlife is vastly more appealing to young males than the Christian conception of heaven, or even the ancient Norse conception of Valhalla.

[7] For example, the requirement of the Catholic Church, since relaxed, whereby, for a marriage between a Catholic and a non-Catholic to be permitted, the parties had to agree to raise any offspring as Catholic, and also that the Catholic partner continue to attempt to convert the non-Catholic, obviously had high ‘memetic fitness’ and likely contributed to the changing demographic fortunes of Catholics and Protestants in Ireland.

Similarly, the strict Catholic prohibition on abortion and many other forms of contraception also likely had high ‘memetic fitness’ and may have affected the demographic fortunes of Irish Catholics and Protestants, as well as contributing to the stereotypically high fertility rate, and family size, in Ireland. One is also reminded of the predominantly Protestant ‘Quiverfull movement’, popular among some Christian fundamentalists in North America, and undoubtedly representing another high fitness meme.

Interestingly, however, Ireland no longer has a high fertility rate. As in most developed western economies, fertility is now well below replacement levels, which, together with mass migration from the developing world, will likely have dire demographic consequences in the future.

Nor is the fertility rate noticeably higher in other traditionally Catholic regions of Europe (e.g. Spain, France, Italy) than in those where the majority of the population was traditionally Protestant (e.g. the UK, Germany, the Netherlands), despite Catholic opposition to abortion and contraception. This may perhaps be a consequence of increasing secularization, such that religious prohibitions no longer carry much weight with the majority of the population, and are no longer enforced by secular law.

[8] A celibate group which replenishes its numbers through accepting newcomers is therefore capable of surviving. Perhaps the various (ostensibly) celibate holy orders of the Christian Church, and other religions, can be conceptualized in a similar way, though they, of course, exist only as part of, and with the support of, the wider Christian religious community as a whole. 

[9] E.g. p50; p55; p60; p78; p82; p98; p107; p117; p118; p119; p120; p121; p122; p127; p158; p163; p227; p360; p362; p363; p366; p403; p404. This is easily discoverable by using the ‘search inside’ feature on either Amazon or Google Books. 

[10] On this view, the Samaritans supposedly represented the remnants of the Northern Kingdom who, being of lower social status, had not been exiled by the Assyrians but had remained in Samaria, where they supposedly intermarried with non-Jews. In addition to any concern for racial purity, there seems also to have been an element of class snobbery involved in the split, since those remnants of the Northern Kingdom who were not expelled were mostly of a lower social class.

[11] For example, several books aimed at a popular readership have been published on the topic, including Jon Entine’s Abraham’s Children: Race, Identity, and the DNA of the Chosen People (2008), David Goldstein’s Jacob’s Legacy: A Genetic View of Jewish History (2008) and Harry Ostrer’s Legacy: A Genetic History of the Jewish People (2012).

[12] Admittedly, in the ‘Diaspora Peoples: Preface to the Paperback Edition’, included in more recent editions of PTSDA, Macdonald does discuss a few of the early genetic studies (pxiv-iv). Unfortunately, however, these all seem to involve Y chromosome ancestry (i.e. male-line ancestry). Subsequent studies which also sample mitochondrial DNA, which is passed down the female line, have shown that most European input into the Ashkenazi gene-pool has come from Jewish men mating with Gentile women (Costa et al 2013). Therefore, Macdonald’s review of studies of Y chromosome ancestry in this preface causes him to overestimate the segregation of the Jewish gene-pool in diaspora. There have also now been studies of Jewish autosomal DNA (i.e. neither Y chromosome nor mitochondrial DNA, but rather genes from the remainder of the genome besides the sex chromosomes), which reflects both male- and female-line ancestry.

[13] In A Troublesome Inheritance, science journalist Nicholas Wade reports:

“As to European Jews, or Ashkenazim, genetics show that there has been a 5% to 8% admixture with Europeans since the founding of the Ashkenazi population in about 900 AD, which is equivalent to 0.05% per generation” (A Troublesome Inheritance: p200). 

As evidence for this claim, Wade cites a study entitled ‘A genome-wide genetic signature of Jewish ancestry perfectly separates individuals with and without full Jewish ancestry in a large random sample of European Americans’ (Need et al 2009). Wade also estimates:

“The rate of admixture with host populations has probably been similar among the other two main Jewish populations” (A Troublesome Inheritance: p200). 
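
Converting between a cumulative admixture figure and a per-generation rate is a matter of simple compounding. On the deliberately crude assumption of a constant fraction m of each generation’s gene pool deriving from the host population, the cumulative host-derived ancestry after n generations is:

```latex
A_n = 1 - (1 - m)^n \approx n\,m \quad \text{for small } m
```

Readers can therefore plug in their own preferred founding date and generation length to relate any quoted per-generation rate to a cumulative estimate.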

[14] Population genetics studies also suggest that Sephardi Jews (i.e. those who inhabited the Iberian Peninsula prior to their expulsion in the late fifteenth century) have substantial European admixture. Only Mizrahi Jews, who remained in the Middle East and with whom Sephardim are sometimes conflated, are likely of wholly Middle Eastern ancestry, since they lived among, and hence intermarried only with, other Middle Eastern populations. 

[15] Thus, for example, East Asian populations also seem to be highly collectivist in orientation. For example, a famous Japanese saying has it that ‘the nail that sticks out gets hammered down’ and it seems difficult to imagine Europeans volunteering, or even agreeing, to become kamikaze pilots. The issue of European individualism, which Macdonald traces much further back in human history than would most historians, is a principal theme of Macdonald’s most recent book Individualism and the Western Liberal Tradition.

[16] Interestingly, in the Preface to the Paperback Edition of The Culture of Critique (reviewed here), a sequel to the work currently under review, Macdonald cites evidence of a difference in stranger anxiety between infants from North Germany and infants from Israel, including both Kibbutz-raised and city-dwelling infants (The Culture of Critique (paperback): pxxxii). This finding is consistent with a greater level of group-mindedness and ethnocentrism. The source cited by Macdonald for this claim in the associated endnote is the edited book, Growing Points of Attachment Theory and Research (pp233–275), which I have not read myself.

[17] However, interestingly, the suicidal wars against their Roman overlords were pursued most tenaciously by the Galileans. Yet the Galileans were, at least according to Macdonald, themselves only recent converts to Judaism, and still of lower status than other Jews. This is, of course, contrary to Macdonald’s theory that Jews are especially ethnocentric and collectivist. It also suggests that the suicidal wars against the Romans were a manifestation of the phenomenon sometimes referred to as the zeal of the convert.

[18] Macdonald reports that Jews also practised polygyny, both in Biblical times (p53-54; e.g. Exodus 21:10), and indeed into relatively modern times, the practice remaining common especially among Sephardi and Mizrahi Jews (p373). Polygyny is, of course, another marriage pattern less frequent in the West than the Middle East, and which is today frowned upon, and unlawful, in all western cultures.

[19] Exodus 22:25; Deuteronomy 23:19-20. The Jewish interpretation actually seems more reasonable given the wording of the passages. Indeed, according to anaesthesiologist-anthropologist John Hartung, many Old Testament injunctions that are today interpreted as universalist both by Christians and by many Jews, such as ‘love thy neighbour’ and ‘thou shalt not kill’, and indeed many of the teachings of Jesus in the New Testament as well, are, in their proper historical context, to be interpreted as applying only to fellow Jews (Hartung 1995).

[20] Macdonald, in contrast, sees Jewish usury, at least in premodern times, as exploitative. Thus, he observes:

“[F]ew individuals could expect to profit by taking a loan at the interest rates common in the medieval period. Interest rates in northern France were 65 percent and compounded until 1206, when the rate was fixed at 43 percent and compounding was made illegal… [But] both compounding and rates higher than the legal limit continued after attempts to abolish these practices. The great majority of loans were not for investment in businesses, but for living expenses in a society that hovered near the subsistence level” (p406-7).

Although he acknowledges that moneylending, in making capital available for investment, is now an essential economic service, he emphasizes the exorbitant interest rates charged by Jewish moneylenders in the medieval period (Separation and its Discontents: p46-7).

However, Jewish moneylenders were only able to charge such exorbitant rates because of a lack of competition (i.e. because Christians were forbidden to lend money at interest). The ultimate fault therefore lies with the misguided prohibition on Christians charging interest on loans, not with the Jewish moneylenders who took advantage of this exclusive market niche. Perhaps high interest rates were partly a product of price-fixing by Jewish monopolist cartels. However, if so, this was only possible because Christians were not permitted to compete with Jews as moneylenders, thereby undercutting them and driving down interest rates through increased competition.

Moreover, the high interest rates Jewish moneylenders charged probably also reflected the fact that the authorities had a habit of periodically declaring all debts void and expelling Jews from their territory without reimbursing them. The rates charged therefore at least partly reflected the level of risk.

At any rate, even lending money at these seemingly exorbitant rates provided a service to the public. If it did not, then no one would ever have chosen to borrow money on these terms. After all, if this was the only way in which money could be borrowed, then it was better than nothing when there was an urgent need for capital.
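
To see why the difference between 65 per cent compounded and 43 per cent simple interest mattered so much to borrowers, here is a minimal sketch (the rates are those quoted by Macdonald above; the loan size and durations are arbitrary):

```python
def debt(principal, rate, years, compound=True):
    """Amount owed after `years` at annual `rate`, compounded or simple."""
    if compound:
        return principal * (1 + rate) ** years
    return principal * (1 + rate * years)

for years in (1, 3, 5):
    c = debt(100, 0.65, years, compound=True)    # pre-1206: 65%, compounded
    s = debt(100, 0.43, years, compound=False)   # post-1206: 43%, simple
    print(f"after {years}y: {c:7.0f} vs {s:3.0f} owed per 100 borrowed")
```

On a five-year loan, the compounded regime demands roughly four times as much as the simple-interest regime, underlining Macdonald’s point that few medieval borrowers could expect to profit from loans on such terms.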

[21] Interestingly, in its unfalsifiability, Macdonald’s theory mirrors Marxist sociology. Thus, for Marxist sociologists, if, for example, the law seemingly favours the capitalist class at the expense of workers, then this, of course, only confirms the Marxist in his belief that the capitalist legal system is biased in favour of the former. But if, on the other hand, laws are passed that, say, protect workers’ rights at the expense of their employers, then this is interpreted by the Marxist as a ‘sop to the workers’ – a forlorn effort on the part of the bourgeois capitalist government to appease the proletariat and thereby forestall, or at least postpone, the inevitable overthrow of capitalism – and hence proof of the inevitable coming of communism. Thus, Marxist social theory is as unfalsifiable as Marxist historicism.

In this light, the title of John Derbyshire’s piece on Macdonald in The American Conservative – namely, ‘The Marx of the Anti-Semites’ – is, I feel, rather insightful (though Derbyshire himself, it must be noted, disclaimed this title, saying it had been forced on him by an editor).

[22] Macdonald argues that Jews differ from other middleman minorities, who usually attempt to maintain a low profile, by their relatively greater aggression and ‘pushiness’. Thus, Macdonald refers to the aggressiveness of the Jews, compared to the relative political passivity of the overseas Chinese (Macdonald 2005).

For example, Amy Chua begins her book World on Fire by discussing the murder of her aunt, who was part of the Philippines’ wealthy Chinese business community, and the indifference of the police, and even of her own family, regarding the murder, writing of how:

“Hundreds of Chinese in the Philippines are kidnapped every year, almost invariably by ethnic Filipinos. Many victims, often children, are brutally murdered, even after ransom is paid. Other Chinese, like my aunt, are killed without a kidnapping, usually in connection with a robbery… The policemen in the Philippines, all poor ethnic Filipinos themselves, are notoriously unmotivated in these cases” (World on Fire: p2-3).

Even her own family, Chua reports, had a “matter of fact, almost indifferent attitude”, passively accepting that the murderer, though known, was unlikely ever to be apprehended (p2). 

It is impossible to imagine Jews in the West today reacting similarly. On the contrary, Jewish groups would surely be outraged and publicly protesting if Jews were being disproportionately targeted in racially motivated killings and the police accused of failing to seriously investigate the murders. Thus, for example, the powerful American Jewish activist group, the Anti-Defamation League, was formed to protect Leo Frank, a wealthy Jewish factory superintendent accused (and convicted) of the rape and murder of a thirteen-year-old girl. 

On the other hand, however, I suspect that, in previous centuries, attitudes among Jews in the West may have been similar to those in the Philippines. Perhaps the turning point for western Jewry in this respect was the Dreyfus affair.
In stark contrast to Jews in the west, Macdonald reports:

“The overseas Chinese in Indonesia have a reputation of being relatively uninterested in politics despite the fact that political trends have often had major effects on their business” (pliv).

Thus, the overseas Chinese strategy for avoiding enmity on the part of the host societies among whom they live seems to involve maintaining a low profile, keeping their heads down and concentrating on making money rather than making waves. As Macdonald explains: 

“Unlike the Jews, overseas Chinese have adopted a low profile political posture and have generally stayed out of local politics. Whereas Jews in the United States and elsewhere tend to have economic, political and cultural influence far out of proportion to their numbers, the Chinese are similar only in their economic influence.” (plxxxix). 

This is what sociologist-turned-sociobiologist Pierre van den Berghe, in his book The Ethnic Phenomenon (reviewed here and here), calls “weak money syndrome” (The Ethnic Phenomenon: p153). Thus, van den Berghe observes:

“[Middleman minorities] basically survive by keeping a low profile, by remaining as inconspicuous as possible, by being unostentatious about wealth, by staying out of politics (at least overtly) and by adopting a conciliatory, nonaggressive stance” (The Ethnic Phenomenon: p144).

The ironic result is that “the more economically secure a [middleman minority group] becomes, the more precarious its position grows”, since its economic wealth produces an increase in both its visibility and the resentment that this visibility provokes (The Ethnic Phenomenon: p144).

But Jews are seemingly almost as overrepresented among politicians and leading political activists as they are among businesspeople, though, as a rule, they tend to play down, sometimes even hide, their ethnicity.

Also, unlike Jews, Macdonald reports, the overseas Chinese “have not been concentrated in media ownership or in the construction of culture” (Macdonald 2005: 67). Neither, he reports, do we hear of: 

“Chinese cultural movements, disseminated in the major universities and media outlets that subject the traditional culture of Southeast Asians and anti-Chinese sentiment to radical critique” (pxc).

However, to be fair, we don’t hear much about Jewish cultural movements that subject traditional western culture to radical critique either – unless, of course, we happen to be readers of Macdonald’s own writings, especially The Culture of Critique (which I have reviewed here).

Macdonald himself attributes these differences partly to the fact that “The [overseas] Chinese [in Southeast Asia] are a very recent group evolutionary strategy” and partly also to the fact that, although both groups have high IQs, East Asians have a very different, almost opposite, intelligence profile to Ashkenazi Jews (pxc).

Thus, whereas Jews, as discussed above and in a previous post, score very high in verbal ability, but not especially high in spatio-visual ability, East Asians score higher in spatio-visual and mathematical ability than in verbal ability.

[23] Though the Biblical passage in question actually describes this course of events as benefitting all concerned, including the subjects who were reduced to bondage, Macdonald regards this interpretation as disingenuous (p175). This is not unreasonable. It is rarely, if ever, to anyone’s advantage to be reduced to bondage and slavery.

[24] Macdonald also notes in an accompanying endnote:

“Motulsky (1977a) suggests that the higher incidence of myopia in Ashkenazi Jewish populations could be the result of selection for higher verbal intelligence. Myopia and intelligence have been linked in other populations, and Jews tend to have higher intelligence and higher rates of myopia.” 

However, the celebrated (and ethnically-Jewish) geographer, anthropologist, physiologist, ornithologist and all-round polymath (and anti-racist) Jared Diamond has an even earlier claim to anticipating Cochran et al’s theory, in a paper published in the journal Nature in 1994 (see Sailer 1999). 

[25] E.g. Richard Lynn’s The Chosen People: A Study of Jewish Intelligence and Achievement.

[26] Interestingly, despite the g factor, Macdonald suggests that, if overall IQ (or g) is actually controlled for or held constant, then there is actually an inverse correlation between verbal intelligence, on the one hand, and spatio-visual intelligence, on the other, suggesting a degree of trade-off between the two – perhaps whereby the more brain tissue is devoted to one form of ability, the less remains to be devoted to the other. Thus, Macdonald writes:

“Visuo-spatial abilities and verbal abilities are actually negatively correlated in populations that are homogeneous for Spearman’s g, and… there are neurological trade-offs such that the more the cortex is devoted to one set of abilities, the less it can be devoted to the other” (p292; see Lynn 1987).
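
One purely statistical caveat is worth adding. If ‘homogeneous for Spearman’s g’ means, in practice, selecting people with (near-)identical composite scores, then a negative correlation between the component abilities arises mechanically, whatever the underlying neurology. The toy simulation below – my own illustration under a simple one-factor model, not Lynn’s or Macdonald’s analysis – shows the effect:

```python
import random, statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One-factor toy model: each subtest = shared factor g + independent noise.
people = []
for _ in range(100_000):
    g = random.gauss(0, 1)
    people.append((g + random.gauss(0, 1),    # 'verbal' score
                   g + random.gauss(0, 1)))   # 'spatial' score

verbal, spatial = zip(*people)
print(round(corr(verbal, spatial), 2))        # ~0.5: positive, via shared g

# Select a subsample 'homogeneous' for the composite (verbal + spatial):
band = [(v, s) for v, s in people if abs(v + s) < 0.2]
bv, bs = zip(*band)
print(round(corr(bv, bs), 2))                 # strongly negative, by construction
```

Whether the negative correlation Lynn and Macdonald report reflects this selection artifact, a genuine neurological trade-off, or both, the simulation cannot say; it shows only that the two are easily conflated.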

[27] Interestingly, and no doubt controversially, in an associated endnote, Macdonald credits Nazi-era German geneticist and eugenicist Fritz Lenz, in his account of Nordic and Jewish abilities, with tentatively recognizing this difference in verbal versus spatio-visual ability. According to Macdonald, Lenz explains this difference in terms of what contemporary racial theorists would call cold winters theory. Thus, Macdonald writes: 

“Lenz gives major weight to the selective pressures of the Ice Age on northern peoples. The intellectual abilities of these peoples are proposed to be due to a great need to master the natural environment, resulting in selection for traits related to mechanical ability, structural design, and inventiveness. Lenz’s description of Jewish intellectual abilities conforms essentially to what is termed here verbal intelligence, and he notes that such abilities are important for social influence and would be expected in a people who evolved in large groups” (p341-2).

[28] Interestingly, contrary to popular opinion, Jews did not work as moneylenders primarily because they were forbidden from owning land and hence from working as farmers. It is true that they were sometimes forbidden from owning land. However, in other times and places, they were actually encouraged by the Gentile authorities to own land and take up farming, so as to facilitate assimilation. Jews generally resisted such entreaties, because the financial rewards offered by moneylending were greater than those available in other careers. Non-Jews, meanwhile, did not typically work as moneylenders because to do so required literacy: the vast majority of non-Jews were not literate, and the exorbitant costs of education more than offset the financial benefits of careers, such as moneylending, that required literacy. Since Jews were required by religious law to be literate anyway, they naturally took advantage of this ability to earn more money in careers such as moneylending (Landsburg 2003). 

[29] The Jews were no more tolerant than the Christian Church in this respect, as the excommunication of Spinoza demonstrates. Neither were Protestants more tolerant than Catholics. Indeed, at least according to Bertrand Russell, both Luther and Calvin condemned Copernicus before the Catholic Church did, and may thereby have indirectly provoked the Catholic Church into persecuting Galileo, since the Catholic Church was in danger of being seen as ‘soft on heliocentrism’ as compared to its Protestant Reformation rivals. As Russell observed in his History of Western Philosophy:

“Protestant clergy were at least as bigoted as Catholic ecclesiastics. Nevertheless there soon came to be much more liberty of speculation in Protestant than in Catholic countries, because in Protestant countries the clergy had less power… for schism led to national Churches, and national Churches were not strong enough to control the lay government” (History of Western Philosophy).

Thus, if the Church of England did not persecute Darwin as the Roman Church did Galileo, this was, Russell argues, only because it lacked the power to do so, not for want of inclination.

[30] Indeed, in practice, all successful religions have multiple designers, as they gradually evolve and change over time. Thus, Christianity, as we know it today, was probably at least as much the creation of Saul of Tarsus as it was of Jesus, while later figures such as Aquinas, Luther and Calvin also played key roles in shaping contemporary Christian beliefs and dogmas. Obviously, Christianity also draws on pre-Christian writings and religious ideas, most obviously those in the Old Testament.

[31] As Jeffrey C. Blutinger observes in a recent article on Macdonald’s work, ‘A New Protocols: Kevin MacDonald’s Reconceptualization of Antisemitic Conspiracy Theory’, Macdonald’s concept of Judaism as a group evolutionary strategy enables him to retain or resurrect all the essential elements of anti-Semitic conspiracy theories without positing any actual conspiracy or conspiring.

[32] As I have mentioned in a previous post, anti-Semitism has a curious tendency to slide over into its ostensible opposite, namely philo-Semitism. Both anti-Semites and philo-Semites tend to view Jews as uniquely separate from, and different to, all other peoples, and both also tend to notice the hugely disproportionate overrepresentation of Jews among certain groups – philo-Semites, for example, pointing to the overrepresentation of Jews among Nobel prize-winning scientists; anti-Semites more often pointing to their overrepresentation in media ownership and among leftists.
As Robert, a character from Michel Houellebecq’s novel Platform observes:

“All anti-Semites agree that the Jews have a certain superiority. If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead” (Platform: p113).

Indeed, even Hitler occasionally seemed to cross the line into philo-Semitism, writing in Mein Kampf:

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self- preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof” (Mein Kampf, Manheim translation).

However, the precise connotations of this passage may depend on the translation. Thus, other translators render the passage that Manheim translates as “The mightiest counterpart to the Aryan is represented by the Jew” instead as “The Jew offers the most striking contrast to the Aryan”, an alternative translation with rather different, and less flattering, connotations, given that Hitler famously extols ‘the Aryan’ as the master race.
Nevertheless, if Hitler was loath to openly admit Jewish intellectual superiority, Nazi propaganda and ideology certainly came close to inadvertently implying it.
Thus, for example, Weimar-era Nazi propaganda often dwelt on, and indeed exaggerated, the extent of Jewish overrepresentation in big business and the professions, arguing that Jews had come to dominate Weimar-era Germany.
Yet if Jews, only ever a tiny proportion of the population of Weimar-era Germany, had indeed come to dominate the far greater number of ethnic Germans in whose midst they lived, then this not only seemed to indicate that the Jews were anything but inferior to those Germans, but also that the Germans were hardly the master race of Hitler’s own imagining. Nazi propaganda, then, came close to self-contradiction.

References 

Atzmon, Gil et al (2010) ‘Abraham’s Children in the Genome Era: Major Jewish Diaspora Populations Comprise Distinct Genetic Clusters with Shared Middle Eastern Ancestry’ American Journal of Human Genetics 86(6): 850-859.
Bamshad et al (2001) ‘Genetic Evidence on the Origins of Indian Caste Populations’ Genome Research 11(6): 994-1004.
Cochran, Hardy and Harpending (2006) ‘Natural History of Ashkenazi Intelligence’ Journal of Biosocial Science 38(5): 659-93.
Costa et al (2013) ‘A substantial prehistoric European ancestry amongst Ashkenazi maternal lineages’ Nature Communications 4: 2543.
Dawkins (1993) ‘Viruses of the Mind’, in Bo Dahlbom, ed., Dennett and His Critics: Demystifying Mind (Cambridge, MA: Blackwell, 1993).
Hartung (1995) ‘Love Thy Neighbor: The Evolution of In-Group Morality’ Skeptic 3(4): 86-98.
Jazwal (1979) ‘Skin colour in north Indian populations’ Journal of Human Evolution 8(3): 361-366.
Landsburg (2003) ‘Why Jews Don’t Farm’ Slate, June 13.
Lynn (1987) ‘The intelligence of the Mongoloids: A psychometric, evolutionary and neurological theory’ Personality and Individual Differences 8(6): 813-844.
Macdonald (2004) ‘Can the Jewish Model Help the West Survive?’ Acceptance speech, First Jack London Literary Prize (October 31, 2004).
Macdonald (2005) ‘Stalin’s Willing Executioners: Jews as a Hostile Elite in the USSR’ Occidental Quarterly 5(3): 65-100.
Mishra (2017) ‘Genotype-Phenotype Study of the Middle Gangetic Plain in India Shows Association of rs2470102 with Skin Pigmentation’ Journal of Investigative Dermatology 137(3): 670-677.
Need et al (2009) ‘A genome-wide genetic signature of Jewish ancestry perfectly separates individuals with and without full Jewish ancestry in a large random sample of European Americans’ Genome Biology 10: R7.
Pinker (2006) ‘Groups and Genes’ New Republic, June 26.
Sailer (2019) ‘Jared Diamond of “Guns, Germs, and Steel” Respectability Anticipated Some of Henry Harpending’s “Ashkenazi Intelligence” Theory in 1994 in “Nature”’ Unz Review, December 30.
Zuckerman et al (2013) ‘The Relation Between Intelligence and Religiosity’ Personality and Social Psychology Review 17: 325-354.

The Philosophy of Ragnar Redbeard

Might is Right or the Survival of the Fittest (1896) by Ragnar Redbeard
Sayings of Redbeard (1890) by Ragnar Redbeard  

Perhaps the most iconoclastic book ever written, a work so incendiary that it is widely dismissed as a parody, ‘Might is Right’ has, unsurprisingly, largely been ignored by mainstream philosophers and political theorists.[1]

Written, like Thus Spake Zarathustra, in a pretentious, pseudo-biblical style, sometimes deliberately paralleling biblical passages (“Blessed are the strong for they shall possess the earth”, “If a man smite you on one cheek, smash him on ‘the other’” etc.), ‘Might is Right’ is, unlike Nietzsche’s infamously incomprehensible screed, a straightforward, if rather repetitive, read. 

Indeed, Redbeard would, I suspect, attribute the failure of earlier thinkers to reach similar conclusions to a failure of The Will rather than The Intellect—reflecting either a failure to face up to the reality of the human condition, or else a deliberate attempt on the part of our rulers to dissimulate and deceive in order to persuade us to acquiesce in our own subjugation. 

Interestingly, ‘Might is Right’ did come to the attention of some notable contemporaries, not least Alfred Wallace, the lesser-known co-discoverer of the theory of natural selection, himself copiously quoted by Redbeard within the pages of his book.[2]

Wallace, himself a socialist, predictably disavowed Redbeard’s social Darwinism, but nevertheless acknowledged: 

“Dr. Redbeard has given us a very brilliant and rhythmical poem, ‘The Logic of Today’. I admire his verse, but I decline to alter the meaning of such words as ‘justice’ and ‘right’ to make them accord with his theory that men are merely herds of brute beasts.”

Here, Wallace, himself a keen amateur poet as well as a pioneering naturalist, is surely right. 

Thus, whatever his demerits as a political theorist or moral philosopher, Redbeard is a talented wordsmith – and has a better claim to being a great poet than he does to being a consistent or coherent moral philosopher.[3]

Throughout ‘Might is Right’, and indeed Sayings of Redbeard, he coins countless quotable aphorisms, and his poetry, while sometimes clumsy, is oftentimes quite brilliant.

HL Mencken, a near-contemporary of Redbeard of similarly cynical, anti-Christian, social Darwinist and Nietzschean leanings, wrote, “Religion… like poetry, is simply a concerted effort to deny the most obvious realities” and “a device for gladdening the heart with what is palpably untrue” (A Mencken Chrestomathy: p7; p569). 

Redbeard would surely agree with Mencken with respect to religion. With regard to poetry, however, he disproves Mencken’s dicta with his own delightfully cynical social Darwinist verse, among which the twelve-stanza The Philosophy of Power (aka The Logic of Today) is indeed his masterwork.[4]

Amoralism, Moral Relativism or Morality of Power? 

At the core of Redbeard’s philosophy is his rejection of morality. On one occasion he opines: 

“Conventional moral dogmas and political standards-of-value are, like wooden idols, the work of men’s hands.”

Interestingly, at least in this passage, the critique is explicitly restricted to what Redbeard calls ‘conventional’ morality. It therefore holds out the possibility that Redbeard’s rejection of moral thinking does not apply to all forms of moral thinking, but only to conventional Christian moralisms.

This interpretation is consistent with the fact that, as we will see, Redbeard does indeed, at other points in his treatise, seem to champion a form of morality, albeit a very different one, idealizing strength and conquest much like that of Nietzsche.[5]

Elsewhere, however, Redbeard is more absolute, emphatically rejecting all forms of morality, without exception. Thus, his treatise includes the following categorical pronouncements: 

“All ethics, politics and philosophies are pure assumptions, built upon assumptions. They rest on no sure basis. They are but shadowy castles-in-the-air erected by day-dreamers, or by rogues upon nursery fables.”

“They are not even shadows; for a shadow implies a materialized actuality. It is somewhat difficult to define what is non existent. That task may be left to University professors and Sunday school divines. They are adepts at clothing their mental nudity in clouds of wonderous verbosity.”

“All moral philosophy is false and vain for man is unlimited… Good and Evil liveth only in men’s minds… Right and Wrong are no more than arbitrary algebraic signs, representing hypnagogic phantasies.”

“All rights are as transient as morning rainbows, international treaties, or clauses in a temporary armistice.”

These passages suggest a wholesale rejection of all moral thinking, akin to that of amoralists like Richard Garner, Hans-Georg Moeller, Richard Joyce and JL Mackie.

But Redbeard is nothing if not self-contradictory. Perhaps among the moral ideals that he rejects is that of intellectual consistency and internal coherence! 

Thus, elsewhere, he seemingly espouses instead a radical moral relativism.  

Yet, as always, Redbeard is insistent on going far further than other thinkers exploring similar ideas, and hence takes moral relativism to its logical conclusion, if not its reductio ad absurdum, by insisting, not only that conceptions of morality may differ as between different cultures and societies, and in different times and places, but also that even individuals within a single culture may legitimately differ in their moral ethos and philosophy. 

Indeed, for Ragnar, a single individual not only can arrive at his own personal morality, quite different from that of his neighbours, but moreover must do so if he is to be truly free:[6]

“Every age and nation must interpret Right and Wrong for itself. So must every man. It is each man’s manifest duty to invent his own Ethical Credo.”

Here, morality is not abandoned altogether, but rather devolved to individual conscience. 

In this, Redbeard is partially anticipated by Nietzsche, who, in one letter, albeit not specifically in the context of moral philosophy, averred:

I want no adherents. May every man (and woman) be his own adherent only” (Selected Letters of Friedrich Nietzsche: p168).

Yet, in Redbeard’s formulation, the demand that each man must invent anew his own ethical credo becomes, paradoxically, itself a universal moral injunction. 

In other words, in insisting that each man must invent his own ethical credo afresh, Redbeard is propounding a universal moral law that in itself contradicts the very relativism that this moral law purports to insist upon. 

Thus, one might ask: If no man should accept any ethical credo unless he has arrived at it himself through his own reasoning power, does this then extend even to the very ethical credo that insists that no man should accept any ethical credo unless he has arrived at it himself by his own reasoning power? 

In other words, Redbeard’s envisaged moral ethos fails even by its own criterion for validity. 

Redbeard’s primary justification for his injunction against any man adopting the moral credo of another is that, by doing so, a person invariably renders himself vulnerable to exploitation at that other’s hands.  

“A sensible man should never conform to any rule or custom, simply because it has been highly commended by others, alive or dead. If they are alive he should suspect their motives. If dead, they are out of Court. He should be a law unto himself in all things: otherwise he permits himself to be demonetized to the level of a domesticated animal.”

“He who ‘keeps the commandments’ of another is necessarily the slave of that other.”

This suggests that the ultimate purpose of any moral system is to promote one’s own self-interest, and that self-interest is the ultimate moral good. 

Thus, a moral ethos promoted by a third-party is likely to reflect their self-interest, and hence must be rejected because it is likely to be in conflict with our own self-interest, which our own moral ethos would presumably promote. 

In practice, then, the ultimate moral end is one’s own self-interest, and any system of morality must be judged against this criterion. This, in effect, elevates the promotion of individual self-interest to a universal moral injunction, again contradicting Redbeard’s insistence that there are no universal moral laws.

Thus, Redbeard concludes: 

“He abdicates his inherent royalty who bends before any human being or any human dogma – but his own.”

However, this raises the question: Does this injunction against adopting the moral ethos expounded by a third-party extend even to the moral system expounded by Redbeard himself? 

For, elsewhere, Redbeard, contradicting himself yet again, does indeed champion a universal morality, albeit one very different to that of the Christian moralists and instead, like that of Nietzsche, idealizing strength, power and conquest.[7]

Thus, he writes: 

“All ‘moral’ dogmatisms and religiosities are positive hindrances to the evolution of the Higher Manhood; inasmuch as men who honestly grasp at Morals, do not so energetically grasp at power – power being essentially non-moral.”

Yet, here, in presuming that men ought to grasp towards power, Redbeard is implicitly elevating the pursuit of power itself into a moral ideal.

Might Proves Right? 

If power, and the pursuit of power, is, then, the essence of Redbeard’s moral philosophy, what evidence does he present in support of this moral theory? 

More specifically, does not Redbeard’s own moral ethos, that of strength and the pursuit of power, suffer from precisely the same defect that he purports to uncover in all other moral credos – namely that, in Redbeard’s own words, it “rests on no sure basis” and is but “a shadowy castle-in-the-air”?

To this objection, however, Redbeard has a ready response—namely that the superiority of his own moral system is proven by its real-world success in competition with other moral systems, in particular the Christian morality that he so abhors and excoriates. 

Thus, a man who acts in accordance with a morality that idealizes conquest and confrontation will, Redbeard argues, inevitably overcome, conquer, annihilate or enslave a man who acts instead in accordance with Jesus’s exhortation to ‘turn the other cheek’.

Thus, Redbeard applies the notion of survival of the fittest, not only to competition as between individuals, or as between groups, populations or races, but also to competition as between ideas.

Thus, just as different individuals compete to survive and reproduce, and only the ‘best’ survive, so the same is true of what Richard Dawkins, in The Selfish Gene, called ‘memetic’ selection among ideas, including, for Redbeard, different conceptions of morality. 

Thus, Redbeard writes: 

“Let a tribe of human animals live a rational life, Nature will smile upon them and their posterity; but let them attempt to organize an unnatural mode of existence, an equality elysium, and they will be punished even to the point of extermination.”

“Let any nation throw away all ‘habits of violence,’ and before long it must cease to exist as a nation. It will be laid under tribute—it will become a province, a satrapy. It will be taxed and looted in a thousand different ways. Let any man abandon all property, also all overt resistance to aggression, and behold, the first sun will scarcely have sunk down in the west, before he is a bondservant, a tributary, a beggar, or—a corpse.”

This is, of course, the essence of so-called social Darwinism, whereby, in Redbeard’s own words:

Force governs all organic life
Inspires all right and wrong
It’s Nature’s plan to weed out man
And test who is the strong

Of course, for anyone with even a rudimentary schooling in the dogmas of contemporary moral philosophy, alarm bells will immediately start to sound in their mind on reading these passages.

Ah, they will insist, but Redbeard is committing the naturalistic fallacy, or appeal to nature fallacy. He is deriving ‘ought’ from ‘is’, deducing values from facts, and hence violating one of the most sacrosanct tenets and dogmas of contemporary moral philosophy.

Yet, to his credit, Redbeard is not, it seems, entirely unfamiliar with this line of criticism. On the contrary, he explicitly anticipates this objection and pre-emptively responds – namely, by denying outright that the naturalistic fallacy, or appeal to nature fallacy, is truly a fallacy at all.

Thus, Redbeard declares forthrightly and unapologetically: 

“To be right is to be Natural, and to be natural is to be right.”

Does Might Make Right? 

Thus, for Redbeard, the ultimate criterion of moral truth is to be found in the outcome of real-world conflict. 

This is, of course, quite different from most people’s conception of how moral truth is to be arrived at. 

Yet, for Redbeard, it is so obvious as barely to require supporting argumentation in the first place. Thus, he laments: 

“That ‘Might is Master’ should require demonstrating is itself a proof of the mental and moral perversity that pervades the world.”

Thus, Redbeard does not bother to justify his contention that morality is determined by force of arms. Instead, he insists that the fact that this is so is so obvious and straightforward as not to require justification or supporting argument.

Readers may disagree with Redbeard on this matter, but, in one sense, Redbeard does indeed have a point. 

If, as most moral philosophers maintain, moral principles cannot be derived from facts, then it follows that moral principles can only be derived from other moral principles. Thus, one moral belief may be justified only on the basis of another, more fundamental, such principle.

However, whence then are our ultimate moral principles, from which all our other moral principles are derived, themselves to find justification? Ultimately, it seems, they must simply be taken on faith. 

Therefore, it follows that there can be no ultimate justification for preferring any one moral ethos over any other. Each is equally valid (and invalid). 

Therefore, Redbeard’s own proposed criterion for determining moral truth (namely, victory in battle) is quite as valid as any other such criterion – which is to say, not very valid at all. 

However, although Redbeard purports to believe his own ultimate moral axiom, namely ‘Might is Master’, to be so obviously true as to be scarcely even in need of justification were it not for the decadence and perversity of the age, this does not prevent him from nevertheless belabouring this same point, over and over, throughout his treatise. Thus, at various places in his diatribe, he writes:

“Might is victory and victory establishes rightness.”

“Ethical principles are decided by the shock of contending armies.”

“Right… can be logically defined… as the manifestations of solar energy, materialized through human thought and thew, upon battlefields—that is to say, in Nature’s Supreme Court.”

“The natural law is tooth and claw. All else is error.”

Always, however, a better poet than he is a consistent or coherent moral philosopher, Redbeard expresses himself best in his poem, The Philosophy of Power (aka The Logic of Today), where he declares: 

“Might is right when Caesar bled
Upon the Stones of Rome;
Might was right when Joshua led
His hordes through Jordan’s foam…
For Might is Right when empires sink
In storms of steel and flame;
And it is right when weakling breeds
Are hunted down like game.” 

In short, for Redbeard, might not only is right, but might makes right! 

Memetic Selection Among Moralities? 

Yet, if, as he claims, Redbeard’s own social Darwinist moral ethos will itself inevitably overcome and outcompete every other moral system, Christian morality very much included, then this raises the question as to how the latter body of moral thinking ever came to be so widely espoused and championed.

Indeed, since Christian and egalitarian moral systems seem to be far more widely espoused, at least in the contemporary West, than is the ‘Might-is-Right’ social Darwinist ethic of Redbeard, this would surely seem to suggest that it is Christian ethics which actually has the higher memetic fitness.

This, in turn, suggests that Redbeard’s moral system fails even in accordance with the very criterion for validity espoused by Redbeard himself, namely survival of the fittest.
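
The notion of memetic fitness invoked here can be given a more precise form. What follows is a minimal sketch in standard replicator-dynamics notation (my own formalization, offered purely for illustration; it is not anything Redbeard or Dawkins wrote down): let x_i denote the frequency of moral system i in a population, and f_i its transmission rate, i.e. the rate at which it wins converts and is passed on to children. Then:

\dot{x}_i = x_i \left( f_i - \bar{f} \right), \qquad \bar{f} = \sum_j x_j f_j

A moral system thus spreads whenever its transmission rate exceeds the population average. On this accounting, the sheer prevalence of Christian ethics is itself prima facie evidence of a high transmission rate, quite irrespective of whether the creed benefits its bearers in battle.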

Thus, a contemporary review for an Australian socialist publication protested: 

“[Redbeard] overlooks the fact, however, that if the fittest individuality survives, so does the fittest idea. The very fact of its survival is proof of its fitness. So his condemnation of Socialism falls flat, for Socialism survives and flourishes, so does Christianity.”[8]

Of course, we may doubt whether, as this reviewer claims, socialism did indeed flourish, in 1899, when the reviewer penned these words, any more than it does today. On the contrary, time and time again, socialism, when put into practice, has proven, at best, economically inefficient, and, at worst, utterly unworkable and conducive to tyranny.[9]

Yet, in another sense, socialism does indeed flourish, even today in the twenty-first century, long after the dissolution of Soviet communism. Thus, while socialism as a practical real-world economic and political system may have proven again and again utterly unworkable and disastrous, socialism as an ideology has proven remarkably resilient and impervious to repeated falsification, whether at the hands of economists or indeed of history itself.

In other words, if socialism itself certainly does not flourish, socialist ideas surely do. 

The same is also true of Christian moral teaching, which has indeed proven of greater longevity and resilience even than socialism.

Yet, if taken literally, Christian teaching is just as unworkable and utopian, when put into practice, as is socialism, if not more so.

Thus, no society, save the smallest of utopian communes,[10] has ever successfully put into practice such ideas as turn the other cheek[11] or judge not lest you yourself be judged[12] – ideas that, taken literally, are incompatible with either an effective criminal justice system or an effective defense policy, and hence inherently self-defeating, leading as they do to internal anarchy, external conquest at the hands of a foreign power, or both, and which are therefore as hopelessly utopian as communism.

Likewise, Christian morality is just as self-defeating at the individual level. Thus, whereas at the state level the adoption of Christian principles leads to rampant crime, internal anarchy, and likely conquest by a foreign power, an individual who lived by such principles as turning the other cheek and giving up his worldly possessions,[13] both of which are explicitly demanded by Jesus in the Gospels, would inevitably invite exploitation and destitution.

Thus, crime novelist and alumnus of the American prison system Edward Bunker described, in a beautiful and poetically evocative metaphor, what was likely to happen if you tried turning the other cheek in the Californian prison system:

“If he turned the other cheek they’d have him bent over spreading both cheeks of his ass while making a toy girl of him—a punk” (Little Boy Blue: p193-4).

Thus, ‘turning the other cheek’ results in anal rape – in a literal sense in the American prison system, but in a metaphoric sense in the world at large. 

In short, a Christly life is inherently self-defeating. As Redbeard himself observes: 

“If we lived as Christ lived, there would be none of us left to live. He begat no children; he labored not for his bread; he possessed neither house nor home; he merely talked. Consequentially he must have existed on charity or stolen bread. ‘If we all lived like Christ’ would there have been anyone left to labor, to be begged from, to be stolen from. ‘If we all lived like Christ’ is thus a self-evident absurdity.”

Yet, if Christian ideas are as unworkable as socialist ones, nevertheless Christianity as a belief system thrives, at least in the sense that people still profess to believe in its tenets, even if, in practice, their own behaviour almost invariably falls short. 

Indeed, Christian influences have seemingly outlived even Christianity itself. 

Thus, contemporary secularists, including militant atheists, continue to espouse a morality derived ultimately from Christian teaching, even though they have ostensibly abandoned the Christian scripture, and Christian God, in whom this morality formerly found its ultimate basis and justification. 

Thus, as John Gray argues in Straw Dogs: Thoughts on Humans and Other Animals (reviewed here), humanism replaces an irrational faith in an omnipotent God with an even more irrational faith in the omnipotence of Mankind himself.[14]

Much the same is true of the pseudo-secular political faiths of modernity, which derive ultimately from a thinly-veiled Christian eschatology. 

Thus, ostensibly secular Marxists replace the irrational Christian belief that we will ascend to heaven after death (or, in some versions, after Armageddon and the Day of Judgement) with the equally absurd and irrational Marxist belief that we will achieve communism (i.e. heaven-on-earth, in all but name) after the revolution

As Redbeard observes in Sayings of Redbeard

“Rationalists in religion are numerous, but rationalists in politics are few. Nevertheless, salvation by politics is quite as much an insanity and a dream as salvation by the watery blood of a circumcised Jew. When his faith is analyzed the average Rationalist is even more irrational than the wildest Supernaturalist. What is politics but priestcraft in a new mask and cloak.”

Morality as ‘Opiate of the Masses’ 

How then has Christian and egalitarian moral thinking ever come to acquire such a hold over the Western mind? And does not the popularity and resilience of these ideas prove their worth in accordance with the principle of ‘survival of the fittest’ championed by Redbeard himself? 

While he does not address this objection directly, a careful reading of Redbeard’s writing suggests his likely response. 

For Redbeard, the popularity of Christian moral thinking is attributable to its cynical adoption by ruling elites as a method of indoctrinating and thereby pacifying the masses, by encouraging them to acquiesce in their own subjugation and exploitation. 

Thus, the masses are admonished by scripture to turn the other cheek[15] and render unto Caesar what is his[16] because, if persuaded to do so, they are more easily subjugated, taxed and thereby exploited and enslaved

Thus, Redbeard concludes: 

“All moral principles… are the servitors, not the masters of the strong.”

Thus, Redbeard laments in Sayings of Redbeard:

“The ‘light’ that comes from Jerusalem is a wrecker’s beacon.”

Poison lurks in pastor preachments,
Satan works through golden rules,
Hell is paved with law and justice,
Christs were made to fetter fools.

Thus, for Redbeard, ‘Might is Right’ in yet another sense – namely, ‘Might’ permits the mighty to dictate to, and instil in, the weak a false morality that serves the interests of the mighty.

Here, Redbeard, despite his trenchant social Darwinism, actually echoes Marxist theory

Thus, just as Marx contended that religion was ‘the opiate of the masses’, and functioned to keep the subjugated in a state of subjugation, happy in their lot, and content in the belief that, despite their suffering, they would get their due recompense in the next world, so Redbeard extends this analysis to morality itself.

In a sense, then, he is simply taking the Marxist critique of bourgeois values to its logical conclusion—a conclusion that, ironically, undermines the very moral basis upon which the Marxist critique of capitalist exploitation rests.

For, if morality is indeed a capitalist contrivance and example of a dominant ideology in the Marxist sense, then there can, of course, be no moral grounds for regarding capitalist exploitation as immoral, nor for viewing Marx’s own envisaged communist utopia as in any way morally preferable to capitalism, feudalism or any other economic system. 

Thus, American professor of philosophy Allen Wood writes of how: 

“Marxists often express a contemptuous attitude towards morality, which (they say) is nothing but a form of illusion, false consciousness or ideology. But… the Marxists condemn capitalism for exploiting the working class and condemning most people to lives of alienation and unfilfilment [sic]. What reasons can they give for doing so, and how can they expect others to do so as well, if they abandon all appeals to morality?”[17]

To the extent, then, that:

1) Morality is an example of capitalist dominant ideology designed to perpetuate the existing class system; and

2) Marxism is founded upon a moral critique of capitalism, and moral advocacy for communism;

Then it naturally follows that Marxism itself is an indirect and inadvertent outgrowth of capitalist indoctrination. If, then, morality is a capitalist invention, it is surely one with the potential to be turned against its capitalist inventors.[18]

Thus, as both Nietzsche and indeed Hitler were to argue, Marxism is, for all its anti-Christian rhetoric and pseudo-secularism, the illegitimate offspring of Christianity itself.[19]

Given the inconsistency of Marxists, and of Marx himself, on this issue, therefore, Redbeard’s true precursor is not Marx, but rather the fictionalized Thrasymachus of Plato’s Republic, the latter anticipating both Marx and Redbeard in his famous pronouncement that: 

“Justice is whatever is in the interests of the stronger party.”

Social Contract Theory Debunked 

Ultimately, however, social order and obedience to the law depends, for Redbeard, not on indoctrination or brainwashing, but rather on force of arms. Thus, in the poem The Philosophy of Power (or Logic of Today), he boldly proclaims in one of his many quotable aphorisms: 

“Behind all Kings and Presidents
All Government and Law,
Are army-corps and cannoneers
To hold the world in awe” 

Here, Redbeard echoes the sentiments of Thomas Hobbes, who maintained that: 

“Covenants without the sword are but words.”

Thus, Thomas Hobbes argued that only a strong central government, maintaining a monopoly on the use of force, could pacify society, and that the peace thereby maintained worked to the benefit of all.

Redbeard, in contrast, is no fan of peace and views all governmental power as based, ultimately, on subjugation and oppression. 

Thus, where Hobbes recommended ceding all rights and powers to a sovereign authority in order to maintain the peace, Redbeard insists that no man ought ever to acquiesce in subjugation before any higher authority than himself. Far from viewing a government maintaining a monopoly on the use of force as a good thing, Redbeard instead insists:

“Unarmed citizens are always enslaved citizens, always.”

“‘Put not your trust in princes’ is a saying old and true;
‘Put not your hope in governments’ translateth it anew.”

Thus, Hobbes, the most cynical, hard-headed and realistic of the great philosophers of the Western canon (and a personal favourite of mine for this very reason), is revealed to be, at least in comparison with the unrelenting cynicism of Ragnar Redbeard, a hopelessly naïve and utopian romantic.

Redbeard also rejects the social contract theory championed by Hobbes, as well as such other eminent luminaries as Rousseau and Locke, who each envisaged free men in a state of nature freely coming together to jointly agree the terms of their cohabitation in a community. 

In contrast, Redbeard insists that, far from arising through voluntary agreement, all polities ultimately arise through conquest and subjugation: 

“How did the government of man by man originate? By force of arms. Victors became rulers.”

‘Government’ arises from physical force applied by the strong to the control and exploitation of vanquished foes.” 

This rather anticipates the so-called ‘stationary bandit theory’ of state formation formulated by economist and political theorist Mancur Olson a hundred years or so later.

In terms of actual history, it strikes me as a far more realistic model of the origin of large modern states than the consensual social contract model favoured by Hobbes, Locke and Rousseau.[20]
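
Olson’s logic can be stated compactly. In a stylized sketch (my own toy formalization, not Olson’s notation), a roving bandit simply confiscates everything he can carry off, destroying any incentive to produce, whereas a stationary bandit chooses a tribute rate t so as to maximize his steady-state revenue:

R(t) = t \cdot Y(t)

where output Y(t) falls as the rate of extraction t rises. Any interior optimum t* satisfies Y(t*) + t* Y'(t*) = 0 and, for any output schedule that collapses under total confiscation, lies strictly below t = 1. The rational conqueror therefore settles down, taxes at less than one hundred per cent, and even supplies order and protection, simply in order to fatten his future tax base.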

For Redbeard, therefore, all taxation is ultimately tribute extracted from the vanquished by their conquerors, and this is the ultimate function and purpose of all government:

“Forms of government change but the principle of government never changes: It is taxgathering.”

Thus, he concludes in Sayings of Redbeard:

“While statesmen are your shepherds ye shall not want for shearing.”[21]

Moreover, if all taxation is ultimately tribute, so all laws originate ultimately from this same initial conquest and subjugation: 

“When an army of occupation settles down upon an enemy’s territory, it issues certain rules of procedure for the orderly transference of the property and persons of the conquered into the absolute possession and unlimited control of the conquerors. These rules of procedure may at first take shape as orders issued by military generals but after a time they develop themselves into Statute Books, Precedents, and Constitutions.”

Thus, in another of his poems, Redbeard counsels readers: 

“Laws and rules imposed on you
From days of old renown
Are not intended for your good
But for your crushing down.” 

Similarly, he avers in one aphorism: 

“Statute books and golden rules were made to fetter slaves and fools.”[22]

Instead, Redbeard concludes: 

“No man ought to obey any contract, written or implied, except he himself has given his personal and formal adherence thereto, when in a state of mental maturity and unrestrained liberty.”

Yet this is, of course, manifestly not true of the US constitution, which was agreed to, not by Americans alive today, but rather by men long dead even in Redbeard’s own time. Thus, Redbeard laments:

“We are ruled, in fact, by cadavers—the inhabitants of tombs.”[23]

Thus, for Redbeard, not only the constitution itself, but also all other laws, whether at the state or federal level, enacted ultimately thereunder, are invalid and of no moral force whatever. 

Indeed, on this ground, Redbeard dismisses the moral force, not only of the US constitution and legal system, and that of all other contemporary western polities, but also the influential school of political theory alluded to above known as social contract theory

In short, even if a polity and jurisdiction did indeed originate, not through conquest as Redbeard maintains, but rather through free men coming together to voluntarily relinquish their freedom and agree the terms of their cohabitation, as maintained by the social contract theorists, this is nevertheless an irrelevance. 

After all, any parties to such an agreement are long since dead. Why then should we, at most their distant descendants, be bound by the agreements of our distant ancestors? Thus, Redbeard forcefully maintains: 

“It is only slaves that are born into contracts, signed and sealed by their progenitors. The freeman is born free, lives free, and dies free.”

Democracy 

If conventional morality functions, as Redbeard maintains, to facilitate and disguise the subjugation of the masses, the same is also true, Redbeard contends, of democracy, or rather the façade of democracy that currently prevails in the West. 

I say the façade of democracy because, for Redbeard, real democracy does not exist and indeed simply cannot exist. It is, like socialism, a patent impossibility, defying the very laws of nature (or, at least, of human nature).

Here, Redbeard echoes the ideas of elite theorists such as Pareto, Mosca and Michels, the latter of whom coined the memorable phrase the iron law of oligarchy to describe what he saw as the inevitable domination of any large organization by a small elite minority, whatever its pretensions towards democratic principles in its decision-making processes.

Redbeard thus summarily dismisses the notion of the people as sovereign:

“In all lunatic asylums may be found inmates who fancy themselves kings and queens, and lords of the earth. These sorrowful creatures, if only permitted to wear imaginary crowns and issue imaginary commands, are the most docile and harmless of all maniacs.”[24]

By analogy, he recounts the (almost certainly apocryphal) tale of how a native chief in the Americas was invited by one of Columbus’s lieutenants: 

“To don… a set of brightly polished steel manacles; it being cunningly represented to him, that the irons were the regalia of sovereignty… When the chains were firmly clasped around his limbs, he was led away, to die of vermin, turning a mill in a Spanish dungeon. What those glittering manacles were to the Indian Chieftain, constitutions, laws [and] moral codes… are to the nations of the earth.”

Thus, Redbeard concludes: 

“Cursed indeed are the harnessed ones! Cursed are they even though their harness be home made—even though it tinkle musically with silver bells—aye! even though every buckle and link and rivet thereof is made of solid gold.”

Indeed, for Redbeard, it is the very glittering beauty of the “polished steel manacles” that ought to provoke our suspicion and put us on guard.

Thus, he maintains that the very exalted and poetic language of such documents as the Bill of Rights and Declaration of Independence is itself evidence of their deceptiveness, since: 

“It is notorious, universally so, that the blackest falsehoods are ever decked out in the most brilliant and gorgeous regalia. Clearly, therefore it is the brave man’s duty to regard all sacred things, all legal things, all constitutional things, all holy things, with more than usual suspicion.” 

Work versus Warriorhood 

Today, people of all political persuasions mindlessly parrot the notion that work is somehow intrinsically liberating.

This is what I cheerfully call the Work Sets You Free mantra, by reference to the famous signs (Arbeit Macht Frei) displayed above the entrances to Nazi concentration camps such as Dachau and Auschwitz.

The idea is, of course, preposterous. Indeed, work is, perhaps by its very definition, something one does, not because one enjoys the activity itself, but rather because of either the end product of such work, or the remuneration offered in recompense for doing it.

Thus, for example, a person cleans their house, not because they enjoy cleaning their house, but rather because they enjoy living in a cleaner environment. On the other hand, a person does a salaried job, not because they enjoy doing the job, but rather because of the salary offered precisely in recompense for the fact that they don’t enjoy it.

Indeed, the very word for ‘work’ in French, namely travailler, along with various cognates with similar meanings in related languages such as Spanish, Portuguese, Galician and Catalan, is derived from the name of an ancient Roman instrument of torture, the tripalium.

To put the matter bluntly, if people really enjoyed their work, then you wouldn’t have to pay them to get them to do it! 

Yet it is natural that governments and capitalists should espouse and encourage the notion that work is somehow uniquely liberating, since, by doing so, they encourage the masses to willingly submit themselves to work for the benefit of capitalists and government.[25]

Redbeard, however, has no time for such nonsense. For him, work is the mark of a slave:

“The very idea of labor is in chains and yokes. There is no dignity in a bent back – no glory in a perspiring brow – no honor in greasy, copper-riveted rags.”

“Cursed is the brow that sweats – for hire, and the back that bends to a master’s burden. Calloused hands imply calloused minds.”

Indeed, he insists that hard, continuous labour is not only unpleasant but also damaging to the constitution, both physical and psychological:

“Hard continuous methodical labor destroys courage, saps vitality and demoralizes character. It tames and subdues men, just as it tames and subdues the young steer and the young colt. Men who labor hard and continuously have no power to think. It requires all their mental force to keep their muscles in trim.”

Thus, Redbeard concludes: 

“The civilized city working-man and working woman are the lowest and worst type of animal ever evolved from dust slime and oxygen. They actually worship work: and bow down before law as an ox-team crouches and strains under the lash.”

Instead, he extols warriorhood over work: 

“In the strength of his arm man eats his bread; in the sweat of his brow (and brain), the slave earns bread – for a master.”

The Labour Theory of Property Debunked 

In accordance with this celebration of warriorhood over work, Redbeard also challenges the so-called labour theory of property, famously espoused by the British philosopher John Locke

Thus, Locke held that private property rights ultimately derive from labour expended in the transformation of natural resources.

Thus, while God, according to Locke, gave the world to all mankind in common, nevertheless, if a person expends labour in transforming some natural resource – say, sculpting a rock into a statue, chopping down a tree in order to construct a wooden hut, clearing a wilderness in order to raise crops, or castrating a slave to produce a eunuch – he or she thereby acquires ownership over the resource in its transformed state.

This is Locke’s famous labour theory of property, whereby a person acquires property rights by mixing his labour with the resource in question, which also represents the philosophical basis for the so-called homestead principle.[26]

In Sayings of Redbeard, however, the pseudonymous Redbeard wholly rejects this notion, replacing it with the more cynical and realistic notion that property rights derive ultimately from force of arms:

“In the history of nations, the sword at all times commands the plow, the hammer and the spade. Everywhere the soil must be captured before it can be cultivated.”

“‘The laborer is entitled to the full fruits of his labor’… but only on condition that he… can successfully defend his product against any one and everyone who comes up against him. Whoever can defend a thing against ‘all the world’ is its natural and rightful owner.” 

“Upon land titles written in blood the entire fabric of modern industrialism is founded.” 

On Women 

Predictably, in the current feminist-dominated political and intellectual climate, Ragnar’s views on women have drawn inevitable accusations of misogyny. Thus, among other things, Redbeard asserts:

“Woman is two thirds womb. The other third is a network of nerves and sentimentality.”

“A woman is primarily a reproductive cell organism, a womb structurally embastioned by a protective, defensive, osseous network; and surrounded by antennæ and blood vessels necessary for supplying nutrient to the growing ovum or embryo.”

Actually, however, these statements reveal an impressive understanding of the evolutionary basis for sexual differentiation. Indeed, they anticipate the great late-twentieth-century biologist Edward O. Wilson’s infamous observation that:

The quintessential female is an individual specialized for making eggs” (On Human Nature: p123).[27]

This certainly suggests a realistic view of human females, and perhaps even an unflattering one, but it is certainly nothing amounting to a hatred of women, as suggested by the overused term misogyny.

On the contrary, although Redbeard insists that women must be subservient to men, he nevertheless also insists in the very same breath that, among men’s duties with respect to women, are “providing for, and protecting them”. 

Indeed, far from hating women, he actually repeatedly refers to women as “lovable creatures” and even as “lovable always”.

Indeed, on the basis of these statements, one might even conclude that Redbeard is guilty of the same sentimental wishful-thinking of which he accuses the Christians and socialists

Certainly, it appears he has had the benefit of enjoying the company of rather different women than I have.

In insisting that women are “lovable creatures”, whom men are responsible for “providing for, and protecting”, he could almost be accused of being a white knight male feminist

On the other hand, elsewhere Redbeard is, to his credit, altogether more realistic, or perhaps, once again, simply self-contradictory, writing: 

“For innate cruelty of deed, no animal can surpass woman.”

He also observes: 

“In many respects women have proved themselves more cruel, avaricious, bloodthirsty and revengeful than men.”

He also echoes Schopenhauer in observing that: 

“Women are also remarkably good liars. Deception is an essential and necessary part of their mental equipment… Without deception of some sort, a woman would have no defense whatever against rivals, lovers, or husbands.” 

Indeed, here, Redbeard seems to be directly drawing on Schopenhauer’s celebrated and insightful essay On Women, where the latter similarly observed that: 

“Just as lions are furnished with claws and teeth, elephants with tusks, boars with fangs, bulls with horns, and the cuttlefish with its dark, inky fluid, so Nature has provided woman for her protection and defense with the faculty of dissimulation.”

Women, Warriors and Polygyny

Indeed, far from hating women, Redbeard seems to see their biological instincts, especially with regard to mate choice, as fundamentally sound, eugenic and conducive to the higher evolution of the species. 

Thus, he insists that, just as men are drawn to battle, so women are naturally drawn to warriors who have proven themselves in battle.

“Wherever soldiers conquer in war, they also conquer in love… Women of vanquished races are usually very prone to wed with the men who have slaughtered their kindred in battle.”

This is surely true. Indeed, it is proven by population genetic studies of the ancestry of contemporary populations. 

Thus, among populations that have been the subject of violent conquest at some point in their history, their mitochondrial DNA, passed down the female line, is invariably more likely to have been inherited from the indigenous, conquered population, whereas their Y-chromosomes, passed instead down the male-line, are more likely to have been inherited from the conquering group.[28]

Indeed, one particularly successful military leader and conqueror, Genghis Khan, is even posited as the origin of a Y chromosome haplogroup now common throughout much of Asia and the world
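
The mechanism producing this asymmetry is easily illustrated. Below is a toy simulation in Python (all parameters hypothetical and chosen purely for illustration): in the generation following a conquest, every child has an indigenous mother, while some proportion of fathers are conquerors; thereafter, mating is random within the mixed population. Y-chromosomes track the paternal line, mitochondrial DNA the maternal line.

import random

N = 10_000                # population size per generation (hypothetical)
P_CONQUEROR_FATHER = 0.7  # share of first-generation children fathered by conquerors (hypothetical)

# Generation 1: every child has an indigenous mother, so every mtDNA lineage
# is indigenous; Y lineages follow the mixed pool of fathers.
children = [
    ("conqueror" if random.random() < P_CONQUEROR_FATHER else "indigenous",  # Y lineage (father's)
     "indigenous")                                                           # mtDNA lineage (mother's)
    for _ in range(N)
]

# Ten further generations of random mating within the now-mixed population:
# each child draws its Y lineage from a random father and its mtDNA from a random mother.
for _ in range(10):
    children = [(random.choice(children)[0], random.choice(children)[1]) for _ in range(N)]

y_share = sum(y == "conqueror" for y, _ in children) / N
mt_share = sum(m == "conqueror" for _, m in children) / N
print(f"conqueror Y lineages: {y_share:.0%}; conqueror mtDNA lineages: {mt_share:.0%}")

Drift aside, the initial frequencies persist indefinitely: the Y-chromosome pool remains predominantly that of the conquerors and the mitochondrial pool entirely that of the conquered, even as autosomal ancestry blends, which is precisely the signature that the population genetic studies detect.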

Yet, in observing that “women of vanquished races are usually very prone to wed with the men who have slaughtered their kindred in battle,” Redbeard does not reproach women for their faithlessness, treachery or lack of patriotic feeling for ‘consorting with the enemy’. On the contrary, he applauds them for thereby acting in accord with biological law and hence contributing to the propagation of, if you like, ‘warrior genes’ and, as he sees it, the progressive evolution of the species.[29]

Yet, curiously, Redbeard seems to reject the primary means by which sexual selection might bring about this outcome – namely, polygyny:

“Readers must distinctly understand that sexual morality is nowise condemned in these pages.”

Thus, while he castigates other aspects of Christian morality, Redbeard seemingly takes Christian monogamy very much for granted, writing: 

“Second-class males are driven by necessity to mate with second-class females; and in strict sequence third class males select partners from feminine remainders. (Hence the stereotyped nature of servile Castes.) Superior males take racially superior women, and inferior males are permitted to duplicate themselves, per media of inferior feminines.”

However, in a highly polygynous mating system, this is not true. Here, high-status males command exclusive access to all females, and females themselves, anxious to secure the superior genes, and superior resources, commanded by high-status males, are often only too ready to comply. 

Indeed, according to the polygyny threshold model, it is in the female’s interests to comply. Thus, as George Bernard Shaw observed: 

“Maternal instinct leads a woman to prefer a tenth share in a first-rate man to the exclusive possession of a third-rate one.”[30]
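
The polygyny threshold model underlying Shaw’s quip can be stated in compact form. In a minimal sketch (my own notation, found in neither Redbeard nor Shaw), let W(Q, n) be a female’s expected fitness from pairing with a male of quality Q whom she must share with n − 1 other mates, with W increasing in Q and decreasing in n. A female should then join the harem of an already-mated male A, holding k mates, rather than take sole possession of bachelor B, whenever:

W(Q_A, k + 1) > W(Q_B, 1)

On the crude assumption that fitness is simply quality divided equally among mates, Shaw’s ‘tenth share in a first-rate man’ beats ‘exclusive possession of a third-rate one’ precisely when Q_A > 10 Q_B, i.e. whenever the first-rate man is more than ten times the quality of the third-rate one.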

Thus, under polygyny, low-status males, even if not exterminated outright, are nevertheless precluded from reproducing altogether, facilitating the evolutionary process that Redbeard so extols.

Women, Warriors and Intersexual Selection

Yet, in extolling female mate choice, Redbeard surely goes too far when he writes: 

“Women instinctively admire soldiers, athletes, king’s nobles, and fighting-men generally, above all other kinds of suitors – and rightly so.”

Certainly, the dashing soldier in his uniform has a certain sex appeal. However, “above all other kinds of suitor”? Surely not. 

Indeed, the sorts of ‘sex symbolsfawned over and fantasized about by contemporary women and girls are more often actors or pop stars than they are soldiers – and the foppish movie star or pop icon is about as far removed from the rugged, battle-scarred warrior of Redbeard’s own erotic fantasies as it is possible to envisage.

Similarly, Redbeard also insists: 

“Women congregate at athletic sports and gladiatorial contests; impelled by the same universal instinct that induces the lioness to stand expectantly by, while two or more rival males are ripping each other to pieces in a rough-and-tumble – for her possession.”

Yet, actually, the audiences at most sporting events are overwhelmingly male. Moreover, the more violent the sport in question (e.g. boxing and MMA) the greater, in my experience, the scale of the disparity.[31]

Here, perhaps Redbeard, in his enthusiasm for Darwin’s theory of sexual selection, fails to fully distinguish what the latter termed intrasexual and intersexual selection

This is rather ironic since, among his copious quotations from Darwin himself in ‘Might is Right’, Redbeard actually quotes the very passage from Darwin’s The Descent of Man, and Selection in Relation to Sex where Darwin first made this distinction: 

“The sexual struggle is of two kinds: in the one it is between the individuals of the same sex, generally the males, in order to drive away or kill their rivals, the females remaining passive; while in the other, the struggle is likewise between the individuals of the same sex, in order to excite or charm those of the opposite sex, generally the females, which no longer remain passive, but select the more agreeable partners.”

Though actually, perhaps tellingly, the version of this passage quoted by Redbeard is subtly altered, omitting the parenthesis “in order to excite or charm those of the opposite sex”. This perhaps reflects his inability to adequately understand the nature of intersexual, as opposed to intrasexual selection, or perhaps even a deliberate attempt to play down this form of selection. 

Thus, intrasexual selection involves one sex, usually males, fighting over access to the other, usually females, and seems to be the form of sexual selection that Redbeard primarily has in mind and so extols. 

Intersexual selection, however, involves, not male fighting, but, at most, male display and female choice, as in so-called lekking species.

Here, females, not males, are very much in control of the mating process and the result is not so much the mighty antlers of the stag, as the beautiful but, from Redbeard’s perspective, rather less than manly tail of the peacock

Thus, if male warriors like Genghis Khan did indeed enjoy the remarkable reproductive success that genetic studies suggest, then this may have been as much attributable to male coercion as to female choice,[32] and, to the extent it is a product of female choice, as much a reflection of the female preference for high-status males as for successful warriors per se.

Men, Sexual Selection and Carnivory 

Yet, if, in failing to fully understand sexual selection theory, Redbeard misjudges the nature of women, the same is no less true of his assessment of the fundamental nature of men.

Thus, though he disparages contemporary men as pale and decadent imitations of their noble warrior forbears, nevertheless his image of Man, at least in his original pristine state, is distinctly flattering to what feminists disparagingly term ‘the male ego’. 

Man is, according to Redbeard, by nature, a warrior, conqueror and carnivore. Indeed, one of his chapters is even titled “Man – the Carnivore!”. 

Similarly, in his poetry, Redbeard repeatedly compares men to other carnivorous predators such as wolves and lions, writing:

What are men but hungry wolves, a prowling on the heath?
If in a pack of wolves you hunt, you’d better sharp your teeth.

Life is strife for every man,
for every son of thunder;
Then be a lion not a lamb,
and don’t be trampled under.

Of course, humans are indeed apex predators, with the unique distinction of having driven many prey species to extinction,[33] as well as having caused great death and destruction among our own kind through warfare and conflict.

However, Redbeard surely exaggerates the purely physiological formidability of Man. Thus, he maintains: 

“Structurally, men are fashioned for purposes of inflicting and suffering pain. Every human anatomy is an elaborate nerve and bone infernal machine – a kind of breathing, perambulating Juggernaut – a superb engine of lethal immolation that automatically stokes its furnace with its victims… Men’s anatomy, external and internal; his eyes, his teeth, his muscles, his blood, his viscera, his brain, his vertebrae; all speak of fighting, passion, aggressiveness, violence, and prideful egoism.”

Here, Redbeard surely flatters himself and other men. 

Actually, our muscles and teeth are decidedly unimpressive compared to those of other carnivores occupying a comparable place in the food chain (e.g. lions and tigers). Indeed, even our closest extant relative, the primarily frugivorous chimpanzee, has far greater average upper-body strength than the average human, or even the average athlete.

Thus, compared to a lion or a bear, or even the largely herbivorous gorilla, even Mike Tyson in his prime, unarmed, could not, I suspect, put up much of a fight. 

It is only our ability to devise weapons, tools, and tactics that gives us a chance. In other words, our greatest weapons are not muscles, claws, fangs or antlers, in which, compared to other carnivores, and even some herbivores, we are sorely lacking – but rather our brains.

As ‘The Beast’ declares in the excellent recent prison movie Shot Caller:

“A warrior’s deadliest weapon… is his mind.”

Individualism vs. Nationalism

While popular among some more intellectually-minded (and sociopathic) white nationalists, Redbeard, far from being a nationalist, is actually a radical individualist, arguably influenced as much by Max Stirner as by Nietzsche. 

Indeed, Redbeard would surely reject all forms of nationalism, since nationalism invariably puts the survival and prospering of the group (i.e. the race or nation) above that of the individual. 

For Redbeard, this is anathema: No man should subordinate his own interests to those of another, be that other a rival, a monarch, a state, a nation, a race or volk.

Indeed, when military and political leaders demand that we sacrifice our lives for our race, tribe or nation, Redbeard would see this as representing, not the interests of the race, tribe or nation, but rather the individual interest of the military or political leader responsible for issuing the demand. 

Thus, Redbeard purports to admire the warrior ethos. Certainly, he extols the likes of Napoleon (“Darwin on horseback”) and Alexander the Great. 

However, Redbeard would, I get the distinct impression, have nothing but disdain for the ordinary soldier – the mere cannon-fodder who risked, and often lost, their lives in the service, not of their own conquest and glory, but rather the conquest and glory of their commanders, or, worse still, the economic interests of their capitalist exploiters. 

Indeed, Redbeard’s individualism is among his grounds for rejecting morality. Thus, he declares: 

“All arbitrary rules of Right and Wrong are insolent invasions of personal liberty.” 

Yet, in purporting to reject morality on this ground, Redbeard is, in effect, not rejecting morality altogether, but rather, once again, championing a new moral ethos – namely, one which regards individual freedom as the paramount, if not the sole legitimate, moral end. 

Thus, in purporting to reject universalist morality on individualist grounds, Redbeard inadvertently transforms individualism itself into a universalist moral injunction. 

“‘Every man for himself’ is the law of life. Every man for an Institution, a God or a Dogma, is the law of death.” 

Once again, the self-contradiction is obvious: If all universalist moralities are, in Redbeard’s words, “insolent invasions of personal liberty,” then this surely applies also to his own universal moral injunction (i.e. “the law of life”) that demands that we always act in our own individual self-interest. 

Moreover, such a moral system, if adopted by all, would obviously result only in anarchy and the impossibility of any sort of functioning society. 

Interpreted in this way, it would then seem to fail even by the criterion of ‘survival of the fittest’ that Redbeard himself espouses – since societies composed of group-minded altruists, who are willing to sacrifice their own self-interest for the benefit of the group as a whole, will inevitably outcompete societies composed of pure egoists, who look out only for themselves, and are all too ready to sell out their own group for individual advantage. 

However, in his defence, it is clear, at least by implication, that Redbeard never envisaged his morality being adopted wholesale by all. Instead, like Nietzsche’s philosophy, it is envisaged as necessarily restricted to a select and elite minority. 

Nihilism? 

Predictably, Redbeard has been charged with nihilism by some of his detractors. However, this is far from an accurate portrayal of his philosophy. 

It is true that, as we have seen, Redbeard does indeed flirt, albeit inconsistently, with a form of moral nihilism. 

Moreover, he sometimes seems to go further, seemingly embracing a more all-consuming nihilism, as, for example, in his alternative beatitudes, where he writes: 

“Blessed are those who believe in nothing—Never shall it terrorize their minds.” 

Yet, in Sayings of Redbeard, Redbeard rejects any notion of nihilism, writing: 

“One must have faith and courage even to be a pirate. He who does not believe in anything does not believe in himself, which is atheism of the worst kind. A religion is essential. Nobility of action is impossible without it. Faith is an integral part of all heroic and noble nature… He must believe something or else sit down to contemplate his navel and rot into nothingness as the Buddhists teach. The negative life won’t do, remember that.” 

Exactly what one should believe in, other than oneself, he does not make altogether clear. 

Certainly, like Nietzsche, he purports to prefer paganism over the Christianity that ultimately displaced it, even averring in Sayings of Redbeard, in an extension of the famous Nietzschean dictum: 

“Christ is dead. Thor lives and reigns.” 

Yet Redbeard clearly means this only in a metaphoric sense, just as Nietzsche meant the death of God in a metaphoric sense. In a literal sense, God could never die, simply because He had never existed, and hence never been alive in the first place. 

Ultimately, given his radical individualism, I suspect Redbeard believes that we must, in the last instance, believe only in ourselves. 

Thus, he would, I suspect, approve of the tenth-century Viking who, asked by a Frank what religion he adhered to, reputedly replied: 

“I believe in my own strength – and nothing else.”[34]

In other words, to translate Redbeard into explicitly Nietzschean terms, we might say: 

“God is dead; long live the Übermensch.”

Or, as Redbeard himself might have put it: 

Nietzsche said: ‘God is dead’.
Ragnar Redbeard says: ‘God is dead. Long live Ragnar Redbeard!’ 

Racialism 

A particularly troubling aspect of ‘Might is Right’ for many modern readers of Redbeard’s treatise, even those otherwise attracted to his radical individualism and rampant social Darwinism, is Redbeard’s extreme racialism. 

Yet Redbeard’s racialism, though as overblown and exaggerated as everything else in his writing, is actually largely tangential to his philosophy.[35]

Indeed, given that it was first published in 1896, when notions of white racial superiority were accepted almost as a given (at least among whites themselves), one suspects that Redbeard’s racialism, overblown and exaggerated though it is, was, for contemporaries, among the least controversial aspects of his thought. 

It is often objected that Redbeard’s racialism is incompatible with, and contradicts, his individualism.

However, I think this is a misreading of Redbeard. 

While individualism is indeed incompatible with nationalism (see above), it is not incompatible with racialism per se, only with racial nationalism. 

Thus, no individualist would sacrifice his own interests for those of his race or nation. However, an individualist is quite capable of also believing that different races differ in their innate aptitudes, temperaments and abilities – including to such an extent as to make only individuals of certain races capable of true individualism, just as certain species (e.g. the social insects) are surely incapable of individualism. 

Thus, in my reading, Redbeard comes across as consistently individualist, but simply regards his individualism as applicable to, and within the capability of, individuals of only one particular race. 

Moreover, though he clearly regards black Africans, for example, as an inferior subspecies fit only for enslavement, this remarkably racist claim is, in the context of Redbeard’s overall philosophy, actually not quite as racist as it sounds (though admittedly it is still extremely fucking racist), since he also thinks the same of the vast majority of all peoples, white Europeans very much included, at least in their current ostensibly degraded form. 

Thus, of his (white) American contemporaries, he writes: 

“Never having enjoyed genuine personal freedom (except on the Indian border) being for the most part descendants of hunted-out European starvelings and fanatics (defeated battlers) they now stupidly thought they had won freedom at last by the patent device of selecting a complete outfit of new tax-gatherers every fourth year.” 

Yet, here, Redbeard is again rather inconsistent and contradictory. 

Thus, he often seems to suggest that all white Nordic Europeans, or at least all white Nordic European men, were once at least capable of the heroism and ruthlessness that he so extols. 

Thus, writing of the now almost universally-reviled Cecil Rhodes, one of the few contemporaries to earn his unreserved admiration, he claims: 

“In days long gone by, such men were the norms of Anglo-Saxondom. Now! Alas! They are astounding exceptions.” 

Yet the entire thrust of Redbeard’s philosophy is that always, at all times, all societies are composed of, on the one hand, the conquerors and, on the other, those whom they conquer, the latter invariably vastly outnumbering the former and very much deserving of their fate. 

Yet, if this is true universally, then it must also be true of the indigenous societies of the Nordic European peoples themselves, before they came into contact with, and were hence able to conquer and enslave, all those ostensibly inferior non-Nordic untermensch. 

Inevitably, then, at this time in history, or prehistory, they must have conquered, subjugated and enslaved only one another. Like all other peoples, then, the vast majority of Nordic Europeans must have been slaves, serfs or vassals. 

This suggests that the vast majority of all peoples, including Nordic Europeans themselves, have always been slaves, and that the superior class of man is to be found only in a minority, if at all, among all peoples, Nordic Europeans very much included. 

Anti-Semitism, Philo-Semitism and Self-Contradiction 

Yet, if Redbeard’s racialism is peripheral to his broader themes, the same is not true of his anti-Semitism, which represents a recurrent theme throughout his writing. 

Yet, here again we encounter another of many contradictions in Redbeard’s thought. 

For, in addition to other anti-Semitic canards, Redbeard endorses the familiar anti-Semitic trope whereby it is claimed that, through nefarious political and financial machinations, and especially through usury or moneylending, Jews have come to secretly control entire western economies, governments and indeed the world. 

Thus, in one particularly dramatic passage, Redbeard declares: 

“The Jew has been supinely permitted to do — what Alexander, Caesar, Nusherwan, and Napoleon failed to accomplish — crown himself Emperor of the World; and collect his vast tributes from ‘the ends of the earth’.”

Yet, if Jews do indeed control the world, including the West, as Redbeard so dramatically asserts, then this surely seems to suggest Jews are anything but inferior to the white western Gentile goyim whom they have ostensibly so successfully subjugated, hence contradicting any basis for Redbeard’s anti-Semitism. 

Moreover, applying the ‘Might-is-Right’ thesis of Redbeard himself, the inescapable conclusion is that Jewish domination is necessarily right and just. 

Thus, anti-Semitism leads almost inexorably to its opposite – philo-Semitism and Jewish supremacism.[36]

Thus, in the footnote accompanying this passage in the Underworld Amusements Authoritative Edition, editor Trevor Blake observes: 

“Not a few pages earlier, Redbeard wrote: ‘Among the vertebrates, the king of the herd (or pack), selects himself by his battle-prowess—upon the same ‘general principles’ that induced Napoléon to place the Iron Crown upon his own brow with his own hand.’ By Redbeard’s own words and reasoning ‘the Jew’ is not only Emperor of the World but justly so. A significant challenge to both those who consider ‘Might is Right’ to be antisemitic and those who consider ‘Might is Right’ to be consistent.”[37]

Yet Redbeard himself is not, it seems, entirely oblivious of this necessary implication, since, on various occasions, he comes close to accepting this very conclusion. 

Take, for example, the following stanza from The Philosophy of Power (aka The Logic of Today): 

“What are the lords of hoarded gold—the silent Semite rings?
What are the plunder patriots—High pontiffs, priests and kings?
What are they but bold masterminds, best fitted for the fray
Who comprehend and vanquish by—the Logic of Today.” 

Here, “the lords of hoarded gold” and, more specifically, “the silent Semite rings” are explicitly equated with “bold masterminds, best fitted for the fray” who “comprehend and vanquish” in accordance with the tenets of Redbeard’s own philosophy of power. 

Likewise, Redbeard does not exclude from his pantheon of heroes those military leaders, historical or mythological, who conquered and vanquished in accordance with Redbeard’s theory, merely on account of their Jewish ethnicity. 

Thus, in The Philosophy of Power he is unapologetic in declaring, “Might is Right when Joshua led his hordes o’er Jordan’s foam” and when “Gideon led the ‘chosen’ tribes of old”, just as much as when “Titus burnt their temple roofed with gold”.[38]

Yet, elsewhere, Redbeard evades the inescapable conclusion of his own arguments—namely that, if Jews do indeed control the world, this surely demonstrates that they are indeed the master race and hence that, according to Redbeard’s own philosophy, their rule is just and right. He does so by asserting that the current social, economic and political order, in which Jews are supposedly supreme, is a perversion of the natural order. 

Thus, he avers: 

“What is viler than a government of slaves and usurious Jews? What is grander than a government of the Noblest and the Best – who have proved their Fitness on the plains of Death?” 

Thus, while he views democracy and Christian morality as merely a façade for exploitation, inequality and subjugation no less insidious than that of the ancients, nevertheless Redbeard yearns for the return of a more naked manifestation of authority and exploitation. 

In other words, he seems to be saying: Might is Right—but only so long as the right ‘Might’ is currently in power! 

The Coming (Long Overdue) Armageddon? 

Yet as well as calling for the overthrow of the current corrupt social, political and economic system, Redbeard also believes we may not have long to wait—for, being unnatural, the current system is also, he insists, inherently unsustainable. 

Thus, a recurrent theme throughout ‘Might is Right’ is the coming collapse of Western civilization, which is, according to Redbeard, both inevitable and long overdue. Thus, he writes: 

“The Philosophy of Power has slumbered long but whenever men of sterling are found, it must again sweep away the ignoble dollar-damned pedlarisms of today and openly, as of old, dominate the destiny of an emancipated and all-conquering race.” 

Over a century after Redbeard penned these words, this collapse has conspicuously yet to occur. On the contrary, the ostensibly decadent liberal democratic polities and capitalist economies that Redbeard so disparages have only continued to flourish and spread—and, in the process, become ever more weak and decadent. 

Against Civilization 

Yet, although he anticipates the coming collapse of Western civilization, Redbeard is far from pessimistic about this outcome. On the contrary, it is something he, not only anticipates, but very much welcomes and, indeed, regards as long overdue. 

This then demonstrates, in case we still harbored any doubts, just how radical and transgressive Redbeard’s philosophy truly is. 

Thus, whereas conservatives and white nationalists usually pose as defenders of western civilization, Redbeard himself evinces no such conceit. 

He does not want to restore Western civilization or, to adopt a famous political slogan, Make America Great Again. Rather, he wants to do away with civilization altogether, American capitalist democracy very much included.

Civilization is, for Redbeard, inherently decadent and effeminate – democracy and capitalism doubly so. Thus, he laments of contemporary society: 

“This world is too peaceful, too acquiescent, too tame. It is a circumcised world. Nay! – a castrated world! It must be made fiercer, before it can become grander and better and – more natural.” 

Redbeard’s posited utopia is, then, any other man’s dystopia—a Hobbesian State of Nature or ‘war of all against all’. 

Thus, although, as we have seen, Redbeard views naked self-interest as underlying the façades of liberal democracy and Christian morality, he nevertheless pines for a government that relies openly on naked force rather than a pretense of democracy or egalitarianism. 

Thus, where Bertrand Russell famously disparaged Nietzsche’s philosophy as amounting to nothing more than ‘I wish I had lived in the Athens of Pericles or the Florence of the Medici’, Redbeard prefers, not the civilization of Athens, but rather the barbarism of Vikingdom. 

Redbeard’s Racialism Revisited – and Debunked! 

Yet Redbeard’s preference for barbarism over civilization also, ironically, undercuts any plausible basis for his racialism and absurd Nordic supremacism. 

After all, the main evidence cited by white supremacists in support of the theory that whites are superior to other races is the achievements of whites in the spheres of science, technology, art, democracy, human rights, architecture, mathematics and metallurgy (and on IQ tests) – in short, their achievements in all the spheres that contribute towards creating and maintaining successful, peaceful, stable and technologically-advanced civilization. 

Yet, if one rejects civilization as an ideal, then on what grounds can whites still be held up as superior? 

After all, blacks are quite as capable of being barbarians as are Nordic Vikings and Teutons. Indeed, these days they seem to be better at it! 

Thus, the high crime rates of blacks, and the abysmal state of civilization (or what passes for civilization) in so much of sub-Saharan Africa, not to mention Haiti, Baltimore and Detroit, so often cited by racialists as evidence of black pathology, are, from the perspective of Redbeard’s inverted morality, converted into positive evidence for black supremacy! 

Blacks are, today, better barbarians than are whites. Therefore, from the perspective of Redbeard’s inverse morality, they must be the true Herrenrasse. 

Against Intellectualism 

Finally, rejecting civilization leads Redbeard ultimately to reject intellectualism too: 

“Intellectualism renders more sensitive. Sensitive persons are very excitable, timid, and liable to disease. Over cultivation of the brain cells undoubtedly produces… physical decay and leads on towards insanity.” 

Perhaps this excuses the intellectual inadequacy of, and rampant internal contradictions within, his own philosophical treatise. 

However, it also raises the question as to why Redbeard ever chose to write a philosophical treatise in the first place—an inherently intellectual endeavour. 

Indeed, had Redbeard, whoever this pseudonymous author really was, truly believed in and followed the precepts of his own philosophy, then he surely would never have put pen to paper, since he would be far too busy waging wars of conquest and enslaving inferior peoples. 

Indeed, his writing of the book would not merely have been a distraction from more important activities (e.g. war, conquest), but also positively counterproductive—because the more people learn the truth from his book and are inspired to lead conquests of their own, then the less willing they will be to be conquered and enslaved by Redbeard, and the more competition he will have in his envisaged conquests.[39]

Yet, whatever the true identity of the pseudonymous author who wrote under the pen-name of “Ragnar Redbeard”, he was surely neither Napoleon nor Alexander the Great. (For one thing, the dates don’t match up.) 

Therefore, Redbeard, whoever he was, did not live, or die, by his own philosophy, which, like the diametrically-opposed Christian morality he so detests, sets impossibly high standards for its adherents. 

Indeed, even leaving aside the contradictions and inconsistencies, Redbeard’s philosophy is so extreme that almost no one could ever truly live by it. It would be far easier to die by Redbeard’s theory than it would be to live by it. 

Thus, Redbeard admonishes readers in his poem, The Philosophy of Power (aka The Logic of Today): 

“You must prove your Right by deeds of Might of splendor and renown.
If need-be march through flames of hell, to dash opponents down.
If need-be die on scaffold high on the morning’s misty gray,
For Liberty or Death is still the Logic of To-Day.” 

Thus, for Redbeard, the only truly honourable outcomes are either an endless succession of conquests and victory, or death in the pursuit thereof. 

“Far better for a free animal to be killed outright, than to be mastered, subordinated, and enchained.” 

Thus, inevitably failing to live up to his own impossibly high ideals, Redbeard, whoever he was, must, to the extent he truly believed in the ideals he espoused, have been consumed by insecurity and self-hate. 

“Ragnar Redbeard”: An Alter-Ego or a Fictional Character of Arthur Desmond’s Invention? 

This leads me to consider again a possibility that I first dismissed offhand—namely that ‘Might is Right’ is indeed, as some have claimed, a work of satire, a kind of reductio ad absurdum of the worst excesses of social Darwinism and Nietzscheanism

Yet this simply cannot be true. The very power of Redbeard’s words demonstrates that the author was at the very least sympathetic to the ideas he espouses.

I cannot believe any writer, howsoever gifted, could ever write such brilliant poetry, nor coin such memorable aphorisms, in support of a theory to which he was himself wholly opposed and had no attachment whatsoever.  

This leads me to a third possibility. Perhaps the author, almost certainly one Arthur Desmond, was adopting a persona, namely that of “Ragnar Redbeard”, in order to explore, and take to their logical if remorseless conclusion, ideas with which he had developed a fascination, but to which he was nevertheless unwilling to put his own name. 

Thus, other philosophers, notably the proto-existentialist Søren Kierkegaard, have written under pseudonyms in order to explore alternative, often mutually contradictory, viewpoints. 

Perhaps then, in adopting the persona of Ragnar Redbeard, Arthur Desmond was doing the same thing. 

In other words, ‘Might is Right’ is neither an exposition of Desmond’s own views, nor, still less, a parody or critique of those views, but rather a kind of extended thought-experiment. 

Thus, just as Plato used his own fictionalized version of Socrates as a mouthpiece through which to expound ideas that were, in reality, almost certainly very much Plato’s own, so Arthur Desmond invented the entirely fictional figure, or alter-ego, of “Ragnar Redbeard” to espouse ideas which were, again, very much Desmond’s own, but to which he was nevertheless as yet unwilling to put his own name or entirely commit himself. 

This would make sense given the extremely controversial nature of the views expressed by Redbeard in his treatise. 

Clearly, Desmond was a bold, daring, radical, even extremist thinker, who could certainly never be accused of intellectual cowardice. However, to wholly commit himself to Redbeard’s severe and remorseless philosophy was perhaps a step too far even for him. 

After all, as we have seen, to truly live by Redbeard’s philosophy is almost an impossibility.  

Thus, by writing under a pseudonym, Desmond would shield himself from the allegation that, in failing to lead any wars of conquest of his own, he was a hypocrite who failed to live up to his own ideals. 

This idea, namely that Redbeard was not so much a mere pseudonym or pen name as an alter-ego or fictional character of Desmond’s own creation, might help explain why, although writing under a name other than his own, Desmond apparently made little if any effort to conceal his authorship, his own name often appearing on the same byline as that of his persona “Ragnar Redbeard” in the various obscure turn-of-the-century Nietzschean, anarchist and Egoist publications for which he wrote.[40]

“Ragnar Redbeard” is not, then, a mere pen-name. Rather, he is an alter-ego or alternate persona, in whose voice the author chose to write this work. 

Thus, the views expressed are not necessarily, at least without reservation, those of Desmond himself. But neither is there any evidence that Desmond was opposed to these views, let alone that he sought to parody or satirize them. 

Rather, they are the views, not of Desmond, but of “Ragnar Redbeard”, a fictional character of Desmond’s own creation.

Endnotes

[1] To the extent it is remembered or widely read today, it is largely among, on the one hand, certain of the more intellectually-minded (and sociopathic) white nationalists, and, on the other, an equally marginal fringe of occultists and self-styled Satanists. Both associations are odd and actually contrary to Redbeard’s philosophy.
On the one hand, Redbeard is, despite his racialism, actually an egoist and radical individualist, influenced at least as much by Stirner as by Nietzsche and hence opposed to nationalism in any guise (see above). On the other, Redbeard is nothing if not a trenchant materialist, opposed to all forms of supernaturalism and religion, occultism presumably very much included.
Admittedly, he does aver, in Sayings of Redbeard, that:

“Christ is dead. Thor lives and reigns.”

However, this is clearly meant in a metaphoric sense, as when Nietzsche declared the death of God, rather than an actual endorsement of a theistic paganism.
The curious association of Redbeard’s work with occultism seems to derive from the championing of his work, then apparently largely forgotten, by Anton Lavey, founder of the Church of Satan. Indeed, Lavey stands accused of lifting large sections of his own Satanic Bible directly from ‘Might is Right’. Perhaps among the aspects of conventional morality rejected by Laveyian Satanists is the prohibition on plagiarism.
However, Lavey’s own so-called ‘Satanism’ is itself resolutely nontheistic and indeed almost as trenchantly materialist as Redbeard’s own severe philosophy. 

[2] Leo Tolstoy was also familiar with Redbeard’s treatise, referring to it by name in his essay ‘What is Art?’ and accurately summarizing its key tenets. Like Wallace, he has little time for Redbeard’s philosophy. However, despite his literary background, Tolstoy, unlike Wallace, fails to show any appreciation of the brilliance of Redbeard’s verse, perhaps on account of his lack of fluency in English (poetry does not tend to translate well), which has also been suggested as a reason for his failure to appreciate the work of Shakespeare.

[3] Indeed, given that he, at times, rejects the whole notion of morality, it is doubtful whether Redbeard would indeed welcome being described as a moral philosopher anyway. As will become clear in the course of this essay, although with regard to his moral philosophy Redbeard is very self-contradictory, I feel that, in addition to the brilliance of his poetry, Redbeard has much to offer as a political theorist.

[4] Readers impressed by Redbeard’s verse in ‘Might is Right’ would do well to read Sayings of Redbeard, a collection of poetry and aphorisms by the same author that is seemingly even less well known and widely read than the former work, yet contains much additional aphorism and verse. As an example, I quote a shorter (seemingly untitled) piece from Sayings of Redbeard:

‘Let lions cease to prowl and fight,
Let eagles clip their wings,
Let men of might give up their right’,
The foolish poet sings.

‘Let lords of gold and Caesars bold
Forever pass away,
Enrich the slaves; enthrone the knaves,’
The base-born prophets say.

But I maintain with hand and pen
The other side of things,
The bold man’s right to rule and reign,
The way of gods and kings.

So capture crowns of wealth and power
(If you’ve the strength and can)
For strife is life’s eternal dower,
And nothing’s under ban. 

Ye, lions wake and hunt and fight,
Ye, eagles spread your wings;
Ye, men of might, believe you’re right
For you indeed are kings.

[5] Interestingly, although it is usually assumed that Redbeard is a disciple of, or at least influenced by, Nietzsche, the latter is never actually mentioned by name throughout the text, nor, to my knowledge, in any of Redbeard’s other published writings. Neither does Redbeard adopt such tell-tale Nietzschean neologisms as übermensch, slave morality etc. The ostensible editor of the original 1896 edition, one “Douglas K Handyside M.D. Ph.D.” (itself likely a pseudonym) makes, on Redbeard’s behalf, the interesting admission that:

“Through his inability to read German, he [Redbeard] very deeply regrets that he cannot search thoroughly into the famous works of Friedrich Nietzsche, Felix Dahn, Alexander Tille, Karl Gutzkow, Max Stirner and other missionaries of what Huxley names ‘The New Reformation’.”

At the time Redbeard was writing, of course, English translations of many of these works were not widely available. Redbeard’s themes, however, do often echo, or at least mirror, those of both Nietzsche and Stirner in particular.

[6] Strictly speaking, presumably, for Redbeard, an individual’s own personal morality need not necessarily be wholly different from that of every other person, so long as it is arrived at independently. A person could, purely by chance, or by convergent reasoning, arrive at the same moral ethos as his neighbour. However, this is acceptable to Redbeard only so long as the convergence occurred without any coercion or indoctrination.

[7] Perhaps this apparent contradiction could be reconciled by claiming that, although each man must, for the sake of his freedom, determine anew his own version of morality, nevertheless, given the power of Redbeard’s arguments, any intelligent, rational individual will inevitably arrive at the same conclusion as Redbeard himself. Interestingly in this light, although Redbeard is usually regarded as having been influenced by Nietzsche in his views on morality, the latter is never actually cited or otherwise mentioned by name, or quoted, at any point within Redbeard’s text. Perhaps Redbeard is thereby attempting to emphasize that, howsoever much his ideas may converge with those of Nietzsche, they are nevertheless very much his own, derived by way of independent reasoning.

[8] The editors, ‘A bogus book: The Survival of the Fittest or The Philosophy of Power, by Ragnar Redbeard’, Tocsin, Thursday 23 March 1899. 

[9] This is certainly true of communism. Communism’s apologists typically claim that ‘true communism’ has never been tried, but this only illustrates the fact that there is a reason why true communism has never been achieved – namely, it is simply impossible and unworkable and therefore never could be achieved. Watered-down socialism, in the form of what is today called social democracy, has proven workable and sustainable, albeit at some economic cost, in, for example, the Nordic economies, which were, perhaps not uncoincidentally, until recently racially and ethnically homogeneous.

[10] Such small utopian communes sometimes succeed for a generation, because those drawn to them are highly committed to the ideology of the group, which is why they choose to join it, and are thus a highly self-selected sample. However, they typically either break down, or, as in the case of Israeli kibbutzim, abandon many aspects of the original ideology and practice of the group, in succeeding generations, as those born into the group, though raised according to its precepts, nevertheless lack their parents’ commitment to its ideals. On the contrary, they often inherit their parents’ rebellious streak, the very rebellious streak that led their parents to reject conventional society and instead join a commune, but which, in their offspring, leads them to rebel against the teaching of the commune in which they were raised. 

[11] Matthew 5:39-42; Luke 6:27-31.

[12] Matthew 7.

[13] Giving up one’s worldly possessions is explicitly commanded by Jesus in passages such as Mark 10:21 and Luke 14:33.

[14] In pinning their hopes on science for our liberation, secularists are, rather ironically, themselves following the biblical teaching that the truth shall set you free (John 8:32). In reality, the truth does not set us free: It merely reveals the truth of our own enslavement, namely precisely that which we were seeking to escape in the first place.

[15] Matthew 5:39-42; Luke 6:27-31.

[16] Matthew 22:21.

[17] Wood, A. (1990) ‘Marx Against Morality’, in Singer (ed.), A Companion to Ethics (pp. 511-524). Oxford: Blackwell.

[18] For more on this interesting topic, see Wood, A. (1990) ‘Marx Against Morality’, in Singer (ed.), A Companion to Ethics (pp. 511-524). Oxford: Blackwell; Rosen, M. (2000) ‘The Marxist Critique of Morality and the Theory of Ideology’, in Morality, Reflection and Ideology, pp. 21-43.

[19] Thus, Nietzsche observed in The Anti-Christ: 

“The anarchist and the Christian have the same ancestry” (The Anti-Christ). 

Hitler was later to reiterate the same point in his Table Talk, albeit with added (or at least more explicit) anti-Semitism, writing:

“The heaviest blow that ever struck humanity was the coming of Christianity. Bolshevism is Christianity’s illegitimate child. Both are inventions of the Jew” (Hitler’s Table Talk).

Here, Hitler directly echoes, and indeed combines, not only the passage from The Anti-Christ quoted just above, but also another passage from the same work, where Nietzsche anticipates Hitler by lamenting:

“Christianity remains to this day the greatest misfortune of humanity” (The Anti-Christ).

Clearly, if Marxism, socialism, anarchism and Christianity share the same ancestry, so perhaps do Nietzsche and Hitler – and perhaps Redbeard too.

[20] After all, throughout history, conquest and subjugation have been a frequent occurrence. However, only rarely have states or peoples voluntarily entered into unions with other states or peoples in order to form a new state or people.
On the contrary, peoples, with their inevitable petty hatreds against even their close neighbours (indeed, especially against close neighbours), are almost always reluctant to surrender their own traditions and identity, howsoever petty and parochial, and be subsumed into a larger monolithic ethnic grouping.
Moreover, when such unified polities have been voluntarily formed, this has typically been either to facilitate or forestall conquest, as when a group of smaller polities join together to protect themselves against a potential conqueror through force of numbers, or when they join together to facilitate the conquest of a third-party power. Even voluntary unions, then, are typically formed for the purposes of conquest or resisting conquest. Thus, as Herbert Spencer wrote: 

“Only by imperative need for combination in war were primitive men led into cooperation” (quoted in: Nonzero: The Logic of Human Destiny: p56). 

Indeed, Robert Wright goes so far as to suggest: 

“This is almost like a general law of history… formerly contentious Greek states form the Delian league to battle Persia, five previously warring tribes forming the Iroquois league (under Hiawatha’s deft diplomacy) in the sixteenth century after menacing white men arrived in America; American white men, two centuries later, merging thirteen colonies into a confederacy amid British hostility… The loosely confederated tribes [of Israel] transform[ing] themselves into a unified monarchy [under threat from the Philistines]” (Nonzero: The Logic of Human Destiny: p58). 

[21] Both of these quotations are taken, not from ‘Might is Right’, but rather from Sayings of Redbeard, a separate collection of aphorisms and poetry by the same author.

[22] This quotation comes from ‘Might is Right’. Another formulation on the same theme, and an extension of the same rhyming couplet, also quoted above, is found in Sayings of Redbeard, where the author writes: 

“Poison lurks in pastor preachments,
Satan works through golden rules,
Hell is paved with law and justice,
Christs were made to fetter fools.”

[23] Perhaps first-generation immigrants are an exception, having migrated to the jurisdiction of their choosing and hence voluntarily agreed to be bound by its laws. However, even this decision is hardly made “in a state of… unrestrained liberty”, the stringent condition demanded by Redbeard. After all, there are only a limited number of jurisdictions to choose from, most of them with legal systems, and bodies of law, rather similar to one another, such that the actual choice available is very limited.
Incidentally, Arthur Desmond, the likely real person behind the pseudonymous Redbeard, was himself an immigrant, having migrated from New Zealand to the USA by the time he authored this book. 

[24] Thus, Redbeard concludes:

“The ‘Voice of the People’ can only be compared to the fearsome shrieks of agony that may now and then be heard, issuing forth from the barred windows of a roadside madhouse.” 

[25] For socialists to champion work is, however, altogether odder. Indeed, the very essence of leftist ideology implicitly presumes that work is something to be avoided. Thus, those who are obliged to work, through coercion or circumstance (i.e. slaves, wage-slaves, serfs and the aptly-named ‘working classes’), are, by virtue of this fact alone, presumed to be oppressed and exploited, while those who are exempt from work (the idle rich and leisure class) are regarded as privileged, if not as exploitative oppressors, on precisely this account. Yet somehow leftist agitation on behalf of workers was corrupted into a perverse and sentimental celebration of the working classes, and thence into a perverse and sentimental celebration of work itself as somehow ennobling.
A cynic, of course, would suggest that this curious transformation was deliberately engineered by the capitalist employers and government themselves, and would also observe that ostensibly socialist governments tend to be as exploitative of, and parasitic upon, the working population as are every other form of government. This would, of course, be the view of Redbeard himself.

[26] Interestingly, at one point in the same discussion, Redbeard seems to go yet further, rejecting not only the labour theory of property, but also the so-called labour theory of value. This is the idea, long discredited among serious economists, but still held to as a sacrosanct dogma by unreconstructed Marxists and other such ‘professional damned fools’, that the value or price of a commodity is determined by the labour expended in creating it. Thus, Redbeard seemingly attempts to argue that, not just ownership, but also economic value is somehow determined by force of arms:

“The sword, not labor, is the true creator of economic values.” 

I am, of course, like all right-thinking people, all in favour of gratuitous sideswipes at Marxism. Moreover, the labour theory of value is indeed largely discredited. However, the idea that value, in the economic sense, can be created by force of arms seems to me even more wrongheaded than the idea that the value of a commodity is determined by the labour expended in creating it, and it is difficult to envisage how this idea would work in practice.
Value, in the economic sense, is usually understood as being based on the free exchange of goods and services, rather than their forcible capture. Could value really be measured by, say, the security costs expended on protecting property (e.g. security guards, burglar alarms, barbed wire fences), or the expenses incurred in the forcible taking of such property, rather than the value of the goods for which one would be willing to exchange that property? This seems problematic.
At any rate, whatever the merits of this admittedly novel and intriguing idea, to justify such a notion, some sort of sustained argument is clearly required. Redbeard’s single throwaway sentence clearly does not suffice.

[27] Of course, a Darwinian perspective is arguably no more flattering to males. If the quintessential female is specialized for making eggs, then the quintessential male is an organism specialized to compete to fertilize as many such eggs as possible. Males, therefore, are destined to compete for access to females. This, of course, does not mean that, for either a man or a woman, or a male or female of any other species, to devote their life to such an endeavour is necessarily the morally right thing to do, nor even that it is necessarily the most psychologically rewarding course of action. 

[28] For example, James Watson reports that, whereas 94% of the Y-chromosomes of contemporary Colombians are European, mitochondrial DNA shows a “range of Amerindian MtDNA types” (DNA: The Secret of Life: p257). Thus, he concludes, “the virtual absence of Amerindian Y chromosome types, reveals the tragic story of colonial genocide: indigenous men were eliminated while local women were sexually ‘assimilated’ by the conquistadors” (Ibid: p257). Similarly, the Anglo-Saxon and Viking invaders of Britain made a greater contribution to the Y-chromosomes of the English than they did to our mitochondrial DNA (see Blood of the Isles).

[29] I put the phrase ‘warrior genes’ in inverted commas because Redbeard was actually writing before the modern synthesis, i.e. before the importance of Mendel’s pioneering work regarding the mechanism of heredity, what is today called genetics, was widely recognized. Redbeard himself therefore does not refer to ‘genes’ as such. 

[30] Shaw, G.B. (1903) Man and Superman, ‘Maxims for Revolutionists’.

[31] Indeed, perhaps the only exception to this general principle is in respect of those sporting events in which women are themselves the competitors, since most men are, in my experience, uninterested in female sports. Yet this is clearly totally contrary to Redbeard’s theory of sports as an arena for female mate choice. For a more sophisticated evolutionary theory of competitive sport, see Lombardo (2012) ‘On the Evolution of Sport’, Evolutionary Psychology 10(1).

[32] In referring to “male coercion”, I do not have in mind primarily outright rape, though this did indeed likely play some small part in the propagation of warrior genes as it is a recurrent feature of war and conquest. Rather, I have in mind more subtle and indirect mechanisms of coercion such as, for example, arranged marriages.

[33] Other predators rarely drive their prey to extinction, since, once the prey species starts to become rare, the predator species either switches to a different source of food (e.g. a different prey species) or else, bereft of food, starts to dwindle in numbers itself, such that, one way or another, the prey species is able to recover somewhat in numbers. Humans are said to be an exception because, in human cultures, there is often prestige in successfully capturing an especially rare prey, such that humans continue to hunt an endangered species right up to the point of extinction even when, in purely nutritional terms, this is a sub-optimal foraging strategy: see Hawkes, K. (1991) ‘Showing off: Tests of an hypothesis about men’s foraging goals’, Ethology and Sociobiology 12(1): 29-54.

[34] Quoted in: Brownworth, L., The Sea Wolves: A History of the Vikings: p20.

[35] Thus, Anton Lavey, in lifting material from ‘Might is Right’ for his own so-called Satanic Bible, largely cut out the racialist and anti-Semitic content, and, in doing so, produced a philosophy that was at least as consistent and coherent as Redbeard’s own (which is to say, not very consistent or coherent at all).

[36] As Robert, a character from Michel Houellebecq’s Platform, observes: 

“All anti-Semites agree that the Jews have a certain superiority. If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead.” 

Indeed, even Hitler in Mein Kampf came close to conceding Jewish superiority, writing:

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self-preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof. Where is the people which in the last two thousand years has been exposed to so slight changes of inner disposition, character, etc., as the Jewish people? What people, finally, has gone through greater upheavals than this one – and nevertheless issued from the mightiest catastrophes of mankind unchanged? What an infinitely tough will to live and preserve the species speaks from these facts” (Mein Kampf, Manheim translation).

Thus, Nazi propaganda claimed that Jews controlled banking, moneylending, whole swathes of the German economy and dominated the legal and medical professions. Yet, if Jews, who composed only a tiny fraction of the Weimar population, did indeed dominate the economy to the extent claimed by the Nazis, then this not only suggested that Jews were far from inferior to their ‘Aryan’ hosts, but also that the Germans themselves, in allowing themselves to be dominated by a group so small in number, were anything but the Aryan Übermensch and Herrenrasse of Hitler’s own demented imaginings.

[37] Might is Right: The Authoritative Edition: p259.

[38] In the authoritative edition, the editor, Trevor Blake, suggests that, just as Anton Lavey, in plagiarizing ‘Might is Right’, omitted the racialist and anti-Semitic elements, so, in editions produced by some white nationalist presses, these favourable references to Jewish figures are omitted. He does not, however, cite any specific examples of alterations from the text.
Interestingly, however, the first version of the poem The Philosophy of Power (aka The Logic of Today) with which I became familiar did just that, replacing “Might is Right when Joshua led his hordes o’er Jordan’s foam” with “Might is Right when Genghis led his hordes o’er Danube’s foam”. Indeed, a Google search for this version reveals nearly as many hits as for the correct, original wording, perhaps because this version was also used as the lyrics for a song by the (somewhat) popular nineties white power band Rahowa on their album ‘Cult of the Holy War’. Perhaps, from a white nationalist perspective, praising a non-white Asian military conqueror is more acceptable than praising a mythical Jewish military conqueror.
However, another reason to prefer the altered version is that, in terms of the poem’s metre or rhythmical structure, the changed version actually scans rather better than the original, the extra syllable in “Joshua”, as compared to “Genghis”, both breaking the iambic rhythm of the verse and making this line one syllable longer than the preceding line and most of the other lines in the poem.

[39] From a social Darwinist perspective, however, this is perhaps to be welcomed, since it increases the competition between prospective despots and dictators, and hence ensures that only the greatest conqueror will prevail. However, among the many contradictions in ‘Might is Right’ is that Redbeard vacillates between championing a radical individualist egoist morality, and a social Darwinist ethos. 
Social Darwinism is actually, in a sense, a collectivist ideology, since, although it champions conflict between individuals, it does so only so that the most superior individuals survive and reproduce, hence resulting in a eugenic benefit to the group or species as a whole.

[40] So, at least, it is claimed here, by Underworld Amusements, publishers of what purports to be, with no little justification, The Authoritative Edition of the book. 

The ‘Means of Reproduction’ and the Ultimate Purpose of Political Power

Laura Betzig, Despotism and Differential Reproduction: A Darwinian View of History (New Brunswick: Aldine Transaction, 1983). 

Moulay Ismail Ibn Sharif, alias ‘Ismail the Bloodthirsty’, a late-seventeenth- and early-eighteenth-century Emperor of Morocco, is today little remembered, at least outside of his native Morocco. He is, however, in a strict Darwinian sense, possibly the most successful human ever to have lived. 

Ismail, you see, is said to have sired some 888 offspring. His Darwinian fitness therefore exceeded that of any other known person.[1]

Some have questioned whether this figure is realistic (Einon 1998). However, the best analyses suggest that, while the actual number of offspring fathered by Ismail may indeed be apocryphal, such a large progeny is indeed eminently plausible for a powerful ruler with access to a large harem of wives and/or concubines (Gould 2000; Oberzaucher & Grammer 2014).

Indeed, as Laura Betzig demonstrates in ‘Despotism and Differential Reproduction’, Ismail is exceptional only in degree.

Across diverse societies and cultures, and throughout human history, wherever individual males acquire great wealth and power, they convert this wealth and power into the ultimate currency of natural selection – namely reproductive success – by asserting and maintaining exclusive reproductive access to large harems of young female sex partners. 

A Sociobiological Theory of Human History 

Betzig begins her monograph by quoting a small part of a famous passage from the closing paragraphs of Charles Darwin’s seminal On the Origin of Species, which she adopts as the epigraph to her preface. 

In this passage, the great Victorian naturalist tentatively extended his theory of natural selection to the question of human origins, a topic he conspicuously avoided in the preceding pages of his famous text. 

Yet, in this much-quoted passage, Darwin goes well beyond suggesting merely that his theory of evolution by natural selection might explain human origins in just the same way it explained the origin of other species. On the contrary, he also anticipated the rise of evolutionary psychology, writing of how: 

“Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation.” 

Yet this is not the part of this passage quoted by Betzig. Instead, she quotes the next sentence, where Darwin makes another prediction, no less prophetic, namely that: 

“Much light will be thrown on the origin of man and his history.” 

In this reference to “man and his history”, Darwin surely had in mind primarily, if not exclusively, the natural history and evolutionary history of our species.

Betzig, however, interprets Darwin more broadly, and more literally, and, in so doing, has both founded, and for several years remained the leading practitioner of, a new field – namely, Darwinian history.

This is the attempt to explain, in terms of sociobiology, evolutionary psychology and selfish gene theory, not only the psychology and behaviour of contemporary humans, but also the behaviour of people in past historical epochs.  

Her book-length monograph, ‘Despotism and Differential Reproduction: A Darwinian View of History’, remains the best known and most important work in this field. 

The Historical and Ethnographic Record 

In making the case that, throughout history and across the world, males in positions of power have used this power so as to maximize their Darwinian fitness by securing exclusive reproductive access to large harems of fertile females, Betzig, presumably to avoid the charge of cherry picking, never actually even mentions Ismail the Bloodthirsty at any point in her monograph. 

Instead, Betzig uses ethnographic data taken from a random sample of cultures from across the world. Nevertheless, the patterns she uncovers are familiar and recurrent.

Powerful males command large harems of multiple fertile young females, to whom they assert, and defend, exclusive reproductive access. In this way, they convert their power into the ultimate currency of natural selection – namely, reproductive success or fitness.

Thus, citing and summarizing Betzig’s work, not only ‘Despotism and Differential Reproduction’, but also other works she has published on related topics, science writer Matt Ridley reports:

“[Of] the six independent ‘civilizations’ of early history – Babylon, Egypt, India, China, the Aztecs and the Incas… the Babylonian king Hammurabi had thousands of slave ‘wives’ at his command. The Egyptian pharaoh Akhenaten procured three hundred and seventeen concubines and ‘droves’ of consorts. The Aztec ruler Montezuma enjoyed four thousand concubines. The Indian emperor Udayama preserved sixteen thousand consorts in apartments guarded by eunuchs. The Chinese emperor Fei-ti had ten thousand women in his harem. The Inca… kept virgins on tap throughout the kingdom” (The Red Queen: p191-2; see Betzig 1993a).

In a contemporary context, I wonder whether the ostensibly ‘elite’ all-female bodyguard of Arab socialist dictator, Colonel Gadaffi, his so-called ‘Amazonian Guard’ (aka ‘Revolutionary Nuns’), served a similar function.

Given the innate biological differences between the sexes, physical and psychological, women are unlikely to make for good bodyguards any more than they make for effective soldiers in wartime, and, judging from photographs, Gadaffi’s elite bodyguard seem to have been chosen at least as much on account of their youth and beauty as on the basis of any martial prowess. Certainly, they did little to prevent his execution by rebels in 2011.

Moreover, since his overthrow and execution, accusations of sexual abuse have inevitably surfaced, though how much credence we should give to these claims is debatable.[2]

Such vast harems as those monopolized by ancient Egyptian pharaohs, Chinese emperors and Babylonian kings seem, at first, wholly wasteful. This is surely more fertile females than even the horniest, healthiest and most virile of emperors could ever hope even to have sex with, let alone successfully impregnate, in a single lifetime. However, as Betzig acknowledges: 

“The number of women in such a harem may easily have prohibited the successful impregnation of each… but, their being kept from bearing children to others increased the monarch’s relative reproductive accomplishment” (p70). 

In other words, even if these rulers were unable to successfully impregnate every concubine in their harem, keeping them cloistered and secluded nevertheless prevented other males from impregnating them, which increased the relative representation of the ruler’s genes in subsequent generations.
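The underlying arithmetic is easily made explicit (the notation and illustrative numbers below are mine, not Betzig’s). In standard population-genetic terms, what natural selection rewards is relative fitness: an individual’s offspring count divided by the population mean,

\[
w_{\mathrm{rel}} \;=\; \frac{n_{\mathrm{ruler}}}{\bar{n}}
\]

On purely hypothetical figures: a ruler who fathers 200 children in a population where the average man leaves two has a relative fitness of 200/2 = 100; if sequestering a thousand women additionally denies rival males mates and drags the population mean down to, say, 1.8, his relative fitness rises to roughly 200/1.8 ≈ 111 without his siring a single additional child. Exclusion alone, in other words, does Darwinian work.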

To this end, extensive efforts were also made to ensure the chastity of these women. Thus, even in ancient times, Betzig reports: 

“Evidence of claustration, in the form of a walled interior courtyard, exists for Babylonian Mari; and claustration in second story rooms with latticed, narrow windows is mentioned in the Old Testament” (p79). 

Indeed, Betzig even proposes an alternative explanation for early evidence of defensive fortifications: 

“Elaborate fortifications erected for the purposes of defense may [also] have served the dual (identical?) function of protecting the chastity of women of the harem” (p79). 

Indeed, as Betzig alludes to in her parenthesis, this second function is arguably not entirely separate to the first. 

After all, if all male-male competition is ultimately based on competition over access to fertile females, then this surely very much includes warfare. As Napoleon Chagnon emphasizes in his studies of warfare and intergroup raiding among the Yąnomamö Indians of the Amazonian rainforest, warfare among primitive peoples tends to be predicated on the capture of fertile females from among enemy groups.[3]

Therefore, even fortifications erected for the purposes of military defence ultimately serve the evolutionary function of maintaining exclusive reproductive access to the fertile females contained therein. 

Other methods of ensuring the chastity of concubines, and thus the paternity certainty of emperors, included the use of eunuchs as harem guards. Indeed, this seems to have been the original reason for the creation of eunuchs, who only later became a key element in palace retinues more generally (see The Evolution of Human Sociality: p45). 

Chastity belts, however, ostensibly invented for the wives of crusading knights while the latter were away on crusade, seem to be a modern myth.

The movements of harem concubines were also highly restricted. Thus, if permitted to venture beyond their cloisters, they were invariably escorted. 

For example, in the African Kingdom of Dahomey, Betzig reports: 

“The king’s wives’… approach was always signalled by the ringing of a bell by the women servant or slave who invariably preceded them [and] the moment the bell is heard all persons, whether male or female, turn their backs, but all the males must retire to a certain distance” (p79). 

Similarly, inmates of the Houses of Virgins maintained by Inca rulers:

Lived in perpetual seclusion to the end of their lives… and were not permitted to converse, or have intercourse with, or to see any man, nor any woman who was not one of themselves” (p81-2). 

Feminists tend to view such practices as evidence of the supposed oppression of women.

However, from a sociobiological or evolutionary psychological perspective, the primary victims of such practices were, not the harem inmates themselves, but rather the lower-status men condemned to celibacy and ‘inceldom’ as a consequence of royal dynasties monopolizing sexual access to almost all the fertile females in the society in question. 

The encloistered women might have been deprived of their freedom of movement – but many lower-status men in the same societies were deprived of almost all access to fertile female sex partners, and hence any possibility of passing on their genes, the ultimate evolutionary function of any biological organism. 

In contrast, the concubines secluded in royal harems were not only able to reproduce, but also lived lives of relative comfort, if not, in some cases, outright luxury, often being: 

Equipped with their own household and servants, and probably lived reasonably comfortable lives in most respects, except… for a lack of liberal masculine company” (p80). 

Indeed, seclusion, far from evidencing oppression, was primarily predicated on safety and protection. In short, to be imprisoned is not so bad when one is imprisoned in a palace! 

Finally, methods were also sometimes employed specifically to enhance the fertility of the women so confined. Thus, Ridley reports: 

Wet nurses, who allow women to resume ovulation by cutting short their breast-feeding periods, date from at least the code of Hammurabi in the eighteenth century BC… Tang dynasty emperors of China kept careful records of dates of menstruation and conception in the harem so as to be sure to copulate only with the most fertile concubines… [and] Chinese emperors were also taught to conserve their semen so as to keep up their quota of two women a day” (The Red Queen: p192). 

Corroborating Betzig’s conclusions but subsequent to the publication of her work, researchers have now uncovered genetic evidence of the fecundity of one particular powerful ruler (or ruling male lineage) – namely, a Y chromosome haplogroup, found in 8% of males across a large region of Asia and in one in two hundred males across the whole world – the features of which are consistent with its having spread across the region thanks to the exceptional prolificity of Genghis Khan, his male siblings and descendants (Zerjal et al 2003). 

Female Rulers? 

In contrast, limited to only one pregnancy every nine months, a woman, howsoever rich and powerful, can necessarily bear far fewer offspring than can be sired by a man enjoying equivalent wealth, power and access to multiple fertile sex partners, even with the aid of such evolutionary novelties as wet nurses, bottle-feeding and IVF treatment. 

As a female analogue of Ismail the Bloodthirsty, a Russian woman is sometimes claimed to have given birth to 69 offspring in the eighteenth century. She was also supposedly, and very much unlike Ismail the Bloodthirsty, not a powerful and polygamous elite ruler, but rather a humble, monogamously married peasant woman. 

However, this figure, though much smaller than Ismail’s, is both physiologically implausible and poorly sourced. Indeed, even her name is unknown, and she is referred to only as the wife of Feodor Vassilyev. The claim is, in short, almost certainly an urban myth.[4]

Feminists have argued that the overrepresentation of males in positions of power is a consequence of such mysterious and non-existent phenomena as patriarchy or male dominance or the oppression of women.

In reality, however, it seems that, for women, seeking positions of power and wealth simply doesn’t have the same reproductive payoff as for men – because, no matter how many men a woman copulates with, she can usually only gestate, and nurse, one (or, in the case of twins or triplets, occasionally two or three) offspring at a time. 

This is the essence of Bateman’s Principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972).
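To make the asymmetry concrete, consider a minimal numerical sketch (the offspring counts are illustrative assumptions, not data): in any closed population with an equal sex ratio, every child has exactly one father and one mother, so the mean number of offspring is necessarily identical for the two sexes; what polygyny changes is the variance among males.

```python
# Toy illustration of Bateman's Principle: equal means, unequal variances.
# Ten men, ten women; one 'despot' monopolizes five of the women.
# All offspring counts are illustrative assumptions.
from statistics import mean, pvariance

women = [2] * 10                 # each woman bears two offspring
men = [10] + [0] * 4 + [2] * 5   # the despot sires ten; four men sire none

assert sum(men) == sum(women)    # one father and one mother per child
print(mean(women), pvariance(women))  # mean 2, variance 0
print(mean(men), pvariance(men))      # mean 2, variance 8
```

Selection therefore rewards risky status-seeking far more strongly in males, for whom the difference between despot and celibate dwarfs any variation in reproductive output among females.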

This, then, in Darwinian terms, explains why women are less likely to assume positions of great political power.

It is not necessarily that they wouldn’t want political power if it were handed to them, but rather that they are less willing to make the necessary effort, or take the necessary risks to attain power.

Indeed, among women, there may even be a fitness penalty associated with assuming political power or acquiring a high status job. Thus, such jobs tend to be, not only high status, but also usually high stress and not easily combined with motherhood.

Indeed, even among baboons, it has been found that high-ranking females actually suffer reduced fertility and higher rates of miscarriages, possibly on account of hormonal factors (Packer et al 1995).

Kingsley Browne, in his excellent book, Biology at Work: Rethinking Sexual Equality (which I have reviewed here), noting that female executives also tend to have fewer children, tentatively proposes that a similar mechanism may be at work among humans:

Women who succeed in business tend to be relatively high testosterone, which can result in lower female fertility, whether because of ovulatory irregularities or reduced interest in having children. Thus, rather than the high-powered career being responsible for the high rate of childlessness, it may be that high testosterone levels are responsible for both” (Biology at Work: p124).

Therefore, it may well be to a woman’s advantage to marry a male with a high-status, powerful job, but not to do such a job herself. That way, she obtains the same wealth and status as her husband, and the same wealth and status for her offspring, but without the hard work it takes to achieve this status.

What is certainly true is that social status and political power does not have the same potential reproductive payoff for women as it did for, say, Ismail the Bloodthirsty.

This calculus, then, rather than the supposed oppression of women, explains, not only the cross-culturally universal over-representation of men in positions of power, but also much of the so-called gender pay gap in our own societies (see Kingsley Browne’s Biology at Work: reviewed here). 

Perhaps the closest women can get to producing such a vast progeny is to maneuver their sons into having the opportunity to do so.

This might explain why such historical figures as Agrippina the Younger, the mother of Nero, and Olympias, mother of Alexander the Great, are reported as having been so active, and instrumental, in securing the succession on behalf of their sons. 

The Purpose of Political Power? 

The notion that powerful rulers often use their power to gain access to multiple nubile sex partners is, of course, hardly original to sociobiology. On the contrary, it accords with popular cynicism regarding men who occupy positions of power. 

What a Darwinian perspective adds is the ultimate explanation of why political leaders do so – and why female political rulers, even when they do assume power, usually adopt a very different reproductive strategy. 

Moreover, a Darwinian perspective goes beyond popular cynicism in suggesting that access to multiple sex partners is not merely yet another perk of power. On the contrary, it is the ultimate purpose of power, and the reason why men evolved to seek power in the first place. 

As Betzig herself concludes: 

Political power in itself may be explained, at least in part, as providing a position from which to gain reproductively” (p85).[5]

After all, from a Darwinian perspective, political power in and of itself has no intrinsic value. It is only if power can be used in such a way as to maximize a person’s reproductive success or fitness that it has evolutionary value. 

Thus, as Steven Pinker has observed, the recurrent theme in science fiction film and literature of robots rebelling against humans to take over the world and overthrow humanity is fundamentally mistaken. Robots would have no reason to rebel against humans, simply because they would not be programmed to want to take over the world and overthrow humanity in the first place. 

On the other hand, humans have been programmed to seek wealth and power – and to resist oppression and exploitation. This is why revolutions are a recurrent feature of human societies and history.

But we have been programmed, not by a programmer or god-like creator, but rather by natural selection.

We have been programmed by natural selection to seek wealth and power only because, throughout human evolutionary history, those among our ancestors who achieved political power tended, like Ismail the Bloodthirsty, also to achieve high levels of reproductive success as a consequence. 

Darwin versus Marx 

In order to test the predictive power of her theory, Betzig contrasts the predictions made by sociobiological theory with those of a rival theory – namely, Marxism.

The comparison is apposite since, despite repeated falsification at the hands of both economists and of history, Marxism remains, among both social scientists and laypeople, perhaps the dominant paradigm when it comes to explaining social structure, hierarchy and exploitation in human societies.  

Certainly, it has proven far more popular than any approach to understanding human dominance hierarchies grounded in ethology, sociobiology, evolutionary psychology or selfish gene theory.

There are, it bears emphasizing, several similarities between the two approaches. For one thing, each theory traces its origins ultimately to a nineteenth-century Victorian founder resident in Britain at the time he authored his key works, namely Charles Darwin and Karl Marx respectively.  

More importantly, there are also substantive similarities in the content and predictions of both these alternative theoretical paradigms. 

In particular, each is highly cynical in its conclusions. Indeed, at first glance, Marxist theory appears superficially almost as cynical as Darwinian theory. 

Thus, like Betzig, Marx regarded most societies in existence throughout history as exploitative – and as designed to serve the interests, neither of society in general nor of the population of that society as a whole, but rather of the dominant class within that society alone – namely, in the case of capitalism, the bourgeoisie or capitalist employers. 

However, sociobiological and Marxist theory depart in at least three crucial respects. 

First, Marxists propose that exploitation will be absent in the anticipated communist utopias of the future.

Second, Marxists also claim that such exploitation was also absent among hunter-gatherer groups, where so-called primitive communism supposedly prevailed. 

Thus, the Marxist, so cynical with regard to exploitation and oppression in capitalist (and feudal) society, suddenly turns hopelessly naïve and innocent when it comes to envisaging unrealistic future communist utopias, and when contemplating ‘noble savages’ in their putative ‘Eden before the fall’.

Unfortunately, however, in her critique of Marxism, Betzig herself nevertheless remains somewhat confused in respect of this key issue. 

On the one hand, she rightly dismisses primitive communism as a Marxist myth. Thus, she demonstrates and repeatedly emphasizes that:

Men accrue reproductive rights to wives of varying numbers and fertility in every human society” (p20).

Therefore, Betzig, contrary to the tenets of Marxism, concludes:

Unequal access to the basic resource which perpetuates life, members of the opposite sex, is a condition in [even] the simplest societies” (p32; see also Chagnon 1979).

Neither is universal human inequality limited only to access to fertile females. On the contrary, Betzig observes:

Some form of exploitation has been in evidence in even the smallest societies… Conflicts of interest in all societies are resolved with a consistent bias in favor of men with greater power” (p67).

On the other hand, however, Betzig takes a wrong turn in refusing to rule out the possibility of true communism somehow arising in the future. Thus, perhaps in a misguided effort to placate the many leftist opponents of sociobiology in academia, she writes:

Darwinism… [does not] preclude the possibility of future conditions under which individual interests might become common interests: under which individual welfare might best be served by serving the welfare of society… [nor] preclude… the possibility of the evolution of socialism” (p68). 

This, however, seems obviously impossible. 

After all, we have evolved to seek to maximize the representation of our own genes in subsequent generations at the expense of those of other individuals. Only a eugenic reengineering of human nature itself could ever change this. 

Thus, as Donald Symons emphasized in his seminal The Evolution of Human Sexuality (which I have reviewed here), reproductive competition is inevitable – because, whereas there is sometimes sufficient food that everyone is satiated and competition for food is therefore unnecessary and counterproductive, reproductive success is always relative, and therefore competition over women is universal. 

Thus, Betzig quotes Confucius as observing:

Disorder does not come from heaven, but is brought about by women” (p26). 

Indeed, Betzig herself elsewhere recognizes this key point, namely the relativity of reproductive success, when she observes, in a passage quoted above, that a powerful monarch benefits from sequestering huge numbers of fertile females in his harem because, even if it is unfeasible that he would ever successfully impregnate all of them himself, he nevertheless thereby prevents other males from impregnating them, and thereby increases the relative representation of his own genes in subsequent generations (p70). 

It therefore seems inconceivable that social engineers, let alone pure happenstance, could ever engineer a society in which individual interests were identical to societal interests, other than in a society of identical twins, or through the eugenic reengineering of human nature itself (see Peter Singer’s A Darwinian Left, which I have reviewed here).[6]

Marx and the Means of Reproduction

The third and perhaps most important conflict between the Darwinist and Marxist perspectives concerns what Betzig terms: 

The relative emphasis on production and reproduction” (p67).

Whereas Marxists view control of what they term the means of production as the ultimate cause of societal conflict, socioeconomic status and exploitation, for Darwinians conflict and exploitation instead focus on control over what we might term the means of reproduction – in other words fertile females, their wombs, ova and vaginas. 

Thus, Betzig observes: 

Marxism makes no explicit prediction that exploitation should coincide with reproduction” (p68). 

In other words, Marxist theory is silent on the crucial issue of whether high-status individuals will necessarily convert their political and economic power into the ultimate currency of Darwinian selection – namely, reproductive success.

On this view, powerful male rulers might just as well remain celibate as assert exclusive reproductive access to large harems of young fertile wives and concubines. 

In contrast, for Darwinians, the effort to maximize one’s reproductive success is the very purpose, and ultimate end, of all political power. 

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here): 

The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Thus, production is, from a sociobiological perspective, just another means of gaining the resources necessary for reproduction. 

On the other hand, reproduction is, from a biological perspective, the ultimate purpose of life. 

Therefore, it seems that, for all his ostensible radicalism, Karl Marx was, in his emphasis on economics rather than sex, just another nineteenth-century Victorian sexual prude.

The Polygyny Threshold Model Applied to Humans? 

One way of conceptualizing the tendency of powerful males to attract (or perhaps commandeer) multiple wives and concubines is the polygyny threshold model.

This way of conceptualizing male and female reproductive and ecological competition was first formulated by ornithologist-ecologist Gordon Orians in order to model the mating systems of passerine birds (Orians 1969). 

Here, males practice so-called resource defence polygyny – in other words, they defend territories containing valuable resources (e.g. food, nesting sites) necessary for successful reproduction and provisioning of offspring. 

Females then distribute themselves between males in accordance with the size and quality of male territories. 

On this view, if the territory of one male is twice as resource-abundant as that of another, he would, all else being equal, attract twice as many mates; if it is three times as resource-abundant, he would attract three times as many mates; etc. 

The result is rough parity in resource-holdings and reproductive success among females, but often large disparities among males. 
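The logic of this distribution can be captured in a minimal sketch (the territory values and head-counts below are illustrative assumptions): each female simply settles wherever the per-capita share of resources is currently highest, and mate numbers end up tracking territory quality.

```python
# Minimal sketch of the distribution described above: each female joins
# whichever male territory offers her the best per-capita resource share.
# Territory values and female numbers are illustrative assumptions.

def distribute_females(territory_values, n_females):
    """Settle females one by one on the territory with the best share."""
    mates = [0] * len(territory_values)   # females settled per male
    for _ in range(n_females):
        # per-capita share each territory would offer the next arrival
        shares = [v / (m + 1) for v, m in zip(territory_values, mates)]
        mates[shares.index(max(shares))] += 1
    return mates

# One territory three times as resource-abundant as the other:
print(distribute_females([300, 100], n_females=8))  # -> [6, 2], i.e. 3:1
```

Run with these assumed figures, the richer territory ends up with three times as many mates, reproducing the proportionality the model predicts.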

Applying the Polygyny Threshold Model to Modern America

Thus, applying the polygyny threshold model to humans, and rather simplistically substituting wealth for territory size and quality, we might predict that, if Jeff Bezos is a hundred thousand times richer than Joe Schmo, and Joe has only one wife, then Jeff should have around 100,000 wives.

But, of course, Jeff Bezos does not have 100,000 wives, nor even a mere 100,000 concubines. 

Instead, he has only one solitary meagre ex-wife, and she, even when married to him, was not, to the best of my knowledge, ever guarded by any eunuchs – though perhaps he would have been better off if she had been, since they might have prevented her from divorcing him and taking an enormous share of his wealth with her in the ensuing divorce settlement.[7]

Indeed, most modern millionaires, and billionaires, despite their immense wealth and the reproductive opportunities it offers, seemingly live lives of stultifyingly bland bourgeois respectability. The sole exception is the magnificent John McAfee, developer of the first commercially available antivirus software, who, after making his millions, moved to a developing country where he obtained for himself a harem of teenage concubines, with whom he allegedly never actually had sex, preferring instead to have them defecate into his mouth while he sat in a hammock, but with whom he is nevertheless reported to have somehow fathered some forty-seven children.

The same is also true of contemporary political leaders. 

Indeed, if any contemporary western political leader does attempt to practice polygyny, even on a comparatively modest scale, then, if discovered, a so-called sex scandal almost invariably results. 

Yet, viewed in historical perspective, the much-publicized marital infidelities of, say, Bill Clinton, though they may have outraged the sensibilities of the mass of monogamously-married Middle American morons, positively pale into insignificance beside the reproductive achievements of someone like, say, Ismail the Bloodthirsty.

Indeed, Clinton’s infidelities don’t even pack much of a punch beside those of a politician from the same nation and just a generation removed, namely John F Kennedy – whose achievements in the political sphere are vastly overrated on account of his early death, but whose achievements in the bedroom, while scarcely matching those of Ismail the Bloodthirsty or the Aztec emperors, certainly put the current generation of American politicians to shame. 

Why, then, does the contemporary west represent such a glaring exception to the general pattern of elite polygyny that Betzig has so successfully documented throughout so much of the rest of the world, and throughout so much of history? And what has become of the henpecked geldings who pass for politicians in the contemporary era? 

Monogamy as Male Compromise? 

According to Betzig, the moronic mass media moral panic that invariably accompanies sexual indiscretions on the part of contemporary Western political leaders and other public figures is no accident. Rather, it is exactly what her theory predicts. 

According to Betzig, the institution of monogamy as it operates in Western democracies represents a compromise between low-status and high-status males. 

According to the terms of this compromise, high-status males agree to forgo polygyny in exchange for the cooperation of low-status males in participating in the complexly interdependent economic systems of modern western polities (p105) – or, in biologist Richard Alexander’s alternative formulation, in exchange for their serving as necessary cannon-fodder in wars (p104).[8]

Thus, whereas, under polygyny, there are never enough females to go around, under monogamy, at least assuming a roughly equal sex ratio (i.e. roughly equal numbers of men and women), virtually all males are capable of attracting a wife, howsoever physically repugnant, ugly or just plain unpleasant they may be.

This is important, since it means that all men, even the relatively poor and powerless, nevertheless have a reproductive stake in society. This, then, in evolutionary terms, provides them with an incentive both:

1) To participate in the economy so as to support and provide for their wives and families; and

2) To defend these institutions in wartime, if necessary with their lives.

The institution of monogamy has therefore been viewed as a key factor, if not the key factor, in both the economic and military ascendancy of the West (see Scheidel 2008). 

Similarly, it has recently been argued that increasing rates of non-participation among young males in the economy and workforce (i.e. the so-called ‘NEET’ phenomenon) are a direct consequence of the reduction in the reproductive opportunities available to young males (Binder 2021).[9]

On this view, then, the media scandal and hysteria that invariably accompany sexual infidelities by elected politicians, or constitutional monarchs, reflect outrage that the terms of this implicit agreement have been breached. 

This idea was anticipated by Irish playwright and socialist George Bernard Shaw, who observed in ‘Maxims for Revolutionists’, appended to his play Man and Superman: 

Polygamy, when tried under modern democratic conditions, as by the Mormons, is wrecked by the revolt of the mass of inferior men who are condemned to celibacy by it” (Shaw 1903). 

‘Socially Imposed Monogamy’?

Consistent with this theory of socially imposed monogamy, it is indeed the case that, in all Western democratic polities, polygyny is unlawful, and bigamy a crime. 

Yet these laws are seemingly in conflict with contemporary western liberal democratic principles of tolerance and inclusivity, especially in respect of ‘alternative lifestyles’ and ‘non-traditional relationships’.

Thus, for example, we have recently witnessed a successful campaign for the legalization of gay marriage in most western jurisdictions. However, strangely, polygynous marriage seemingly remains anathema – despite the fact that most cultures across the world and throughout history have permitted polygynous marriage, whereas few if any have ever accorded any state recognition to homosexual unions.

Indeed, strangely, whereas the legalization of gay marriage was widely perceived as ‘progressive’, polygyny is associated, not with sexual liberation, but rather with highly traditional and sexually repressive groups such as Mormons and Muslims.[10]

Polygynous marriage is also, rather strangely, associated with the supposed oppression of women in traditional societies, such as under Islam.

However, most women actually do better, at least in purely economic terms, under polygyny than under monogamy, at least in highly stratified societies with large differences in resource-holdings as between males. 

Thus, if, as we have seen, Jeff Bezos is 100,000 times richer than Joe Schmo, then a woman is financially better off becoming the second wife, or the tenth wife (or even the 99,999th wife!), of Jeff Bezos rather than the first wife of poor Joe. 

Moreover, women also have another incentive to prefer Jeff to Joe. 

If she is impregnated by a polygynous male like Jeff, then her male descendants may inherit the traits that facilitated their father’s wealth, power and polygyny, and hence become similarly reproductively successful themselves, aiding the spread of the woman’s own genes in subsequent generations. 

Biologists call this ‘good genes’ sexual selection or, more catchily, the ‘sexy son hypothesis’.

Once again, however, George Bernard Shaw beat them to it when he observed in the same 1903 essay quoted above: 

Maternal instinct leads a woman to prefer a tenth share in a first rate man to the exclusive possession of a third rate one” (Shaw 1903). 

Thus, Robert Wright concludes: 

In sheerly Darwinian terms, most men are probably better off in a monogamous system, and most women worse off” (The Moral Animal: p96). 

Thus, women generally should welcome polygyny, while the only people opposed to polygyny should be: 

1) The women currently married to men like Jeff Bezos, and greedily unwilling to share their resource-abundant ‘alpha-male’ providers with a whole hundred-fold harem of co-wives and concubines; and

2) A glut of horny sexually-frustrated bachelor-‘incels’ terminally condemned to celibacy, bachelorhood and inceldom by promiscuous lotharios like Jeff Bezos and Ismail the Bloodthirsty greedily hogging all the hot chicks for themselves.

Who Opposes Polygyny, and Why? 

However, in my experience, the people who most vociferously and puritanically object to philandering male politicians are not low-status men, but rather women. 

Moreover, such women typically affect concern on behalf, not of the male bachelors and ‘incels’ supposedly indirectly condemned to celibacy by such behaviours, but rather the wives of such politicians – though the latter are the chief beneficiaries of monogamy, while these other women, precluded from signing up as second or third-wives to alpha-male providers, are themselves, at least in theory, among the main losers. 

This suggests that the ‘male compromise theory’ of socially-imposed monogamy is not the whole story. 

Perhaps then, although women benefit in purely financial terms under polygyny, they do not do so well in fitness terms. 

Thus, one study found that, whereas polygynous males (unsurprisingly) had more offspring than monogamously-mated males, they (perhaps also unsurprisingly) had fewer offspring per wife. This suggests that, while polygynously-married males benefit from polygyny, their wives incur a fitness penalty for having to share their husband (Strassman 2000). 

This probably reflects the fact that even male reproductive capacity is limited, as, notwithstanding the Coolidge effect (which has, to my knowledge, yet to be demonstrated in humans), males can only manage a certain number of orgasms per day. 

Women’s distaste for polygynous unions may also reflect the fact that even prodigiously wealthy males will inevitably have a limited supply of one particular resource – namely, time – and time spent with offspring by a loving biological father may be an important determinant of offspring success, which paid child-minders, and stepfathers, lacking a direct genetic stake in offspring, are unable to perfectly replicate.[11]

Thus, if Jeff Bezos were able to attract for himself the 100,000 wives that the polygyny threshold model suggests is his due, then, even if he were capable of providing each woman with the two point four children that is her own due, it is doubtful he would have enough time on his hands to spend much ‘quality time’ with each of his 240,000 offspring – just as one doubts Ismail the Bloodthirsty was himself an attentive father to his own comparatively modest 888. 

Thus, one suspects that, contrary to the polygyny threshold model, polygyny is not always entirely a matter of female choice (Sanderson 2001).

On the contrary, many of the women sequestered into the harems of rulers like Ismail the Bloodthirsty likely had little say in the matter. 

‘The Central Theoretical Problem of Human Sociobiology’ 

Yet, if this goes some way towards explaining the apparent paradox of socially imposed monogamy, there is, today, an even greater paradox with which we must wrestle – namely, why, in contemporary western societies, there is apparently an inverse correlation between wealth and number of offspring.

After all, from a sociobiological or evolutionary psychological perspective, this represents something of a paradox. 

If, as we have seen, the very purpose of wealth and power (from a sociobiological perspective) is to convert these advantages into the ultimate currency of natural selection, namely reproductive success, then why are the wealthy so spectacularly failing to do so in the contemporary west?[12]

Moreover, if status is not conducive to high reproductive success, then why have humans evolved to seek high-status in the first place? 

This anomaly was memorably termed ‘the central theoretical problem of human sociobiology’ in a paper by University of Pennsylvania demographer and eugenicist Daniel Vining (Vining 1986). 

Socially imposed monogamy can only go some way towards explaining this anomaly. Thus, in previous centuries, even under monogamy, wealthier families still produced more surviving offspring, if only because their greater wealth enabled them to successfully rear and feed multiple successive offspring to adulthood. In contrast, for the poor, high rates of infant mortality were the order of the day. 

Yet, in the contemporary west, it seems that the people who have the most children, and hence the highest fitness in the strict Darwinian sense, are, at least according to popular stereotype, single mothers on government welfare. 

‘De Facto’ Polygyny 

Various solutions have been proposed to this apparent paradox. A couple of them amount to claiming that the west is not really monogamous at all, and that, once this is factored in, higher-status men do indeed have greater numbers of offspring than lower-status men. 

One suggestion along these lines is that perhaps wealthy males sire additional offspring whose paternity is misassigned, via extra-marital liaisons (Betzig 1993b). 

However, despite some sensationalized claims, rates of misassigned paternity are actually quite low (Khan 2010; Gilding 2005; Bellis et al 2005). 

If it is lower-class women who are giving birth to most of the offspring, then it is probably mostly males of their own socioeconomic status who are responsible for impregnating them, if only because it is the latter with whom they have the most social contact. 

Perhaps a more plausible suggestion is that wealthy high-status males are able to practice a form of disguised polygyny through repeated remarriage. 

Thus, wealthy men are sometimes criticized for divorcing their first wives to marry much younger second, and sometimes even third and fourth, wives. In this way, they manage to monopolize the peak reproductive years of multiple successive young women. 

This is true, for example, of recent American President Donald Trump – the ultimate American alpha male – who has himself married three women, each one younger than her predecessor.

Thus, science journalist Robert Wright contends: 

The United States is no longer a nation of institutionalized monogamy. It is a nation of serial monogamy. And serial monogamy in some ways amounts to polygyny.” (The Moral Animal: p101). 

This, then, is not so much ‘serial monogamy’ as it is ‘sequential’, or non-concurrent, ‘polygyny’. 

Evolutionary Novelties

Another suggestion is that evolutionary novelties – i.e. recently developed technologies such as contraception – have disrupted the usual association between status and fertility. 

On this view, natural selection has simply not yet had sufficient time (or, rather, sufficient generations) over which to mold our psychology and behaviour in such a way as to cause us to use these technologies in an adaptive manner – i.e. in order to maximize, not restrict, our reproductive success. 

An obvious candidate here is safe and effective contraception, which, while actually somewhat older than most people imagine, nevertheless became widely available to the population at large only over the course of the past century, which is surely not enough generations for us to have become evolutionarily adapted to its use.  

Thus, a couple of studies have found that, while wealthy high-status males may not father more offspring, they do have more sex with a greater number of partners – i.e. behaviours that would have resulted in more offspring in ancestral environments prior to the widespread availability of contraception (Pérusse 1993; Kanazawa 2003). 

This implies that high-status males (or their partners) use contraception either more often, or more effectively, than low-status males (or their partners), probably because of their greater intelligence and self-control, namely the very traits that enabled them to achieve high socioeconomic status in the first place (Kanazawa 2005). 

Another evolutionary novelty that may disrupt the usual association between social status and number of surviving offspring is the welfare system.

Welfare payments to single mothers undoubtedly help these families raise to adulthood offspring who would otherwise perish in infancy. 

In addition, while it is highly controversial to suggest that welfare payments give single mothers a positive financial incentive to bear additional offspring, such payments surely, at the very least, reduce the financial disincentives otherwise associated with bearing additional children, and thereby probably increase the number of offspring these women choose to have in the first place.

Therefore, given that the desire for offspring is probably innate, women would rationally respond to this changed incentive structure by having more children.[13]

Feminist ideology also encourages women to postpone childbearing in favour of careers. Moreover, it is probably higher-status females who are most exposed to feminist ideology, especially in universities, where such ideology is thoroughly entrenched and widely proselytized.

In contrast, lower-status women are not only less exposed to feminist ideology encouraging them to delay motherhood in favour of career, but also likely have fewer appealing careers available to them in the first place. 

Finally, even laws against bigamy and polygyny might be conceptualized as an evolutionary novelty that disrupts the usual association between status and fertility. 

However, whereas technological innovations such as effective contraception were certainly not available until recent times, ideological constructs and religious teachings – including ideas such as feminism, prohibitions on polygyny, and the socialist ideology that motivated the creation of the welfare state – have existed ever since we evolved the capacity to create such constructs (i.e. since we became fully human). 

Therefore, one would expect that humans would have evolved resistance to ideological and religious teachings that go against their genetic interests. Otherwise, we would be vulnerable to indoctrination (and hence exploitation) at the hands of third parties. 

Dysgenics? 

Finally, it must be noted that these issues are not of purely academic interest. 

On the contrary, since socioeconomic status correlates with both intelligence and personality traits such as conscientiousness; since these traits are, in turn, substantially heritable; and since they determine, not only individual wealth and prosperity, but also, at the aggregate level, the wealth and prosperity of nations, the question of who has the offspring is surely of central concern to the future of society, civilization and the world. 

In short, what is at stake is the very genetic posterity that we bequeath to future generations. It is simply too important a matter to be delegated to the capricious and irrational decision-making of individual women. 

__________________________

Endnotes

[1] Actually, the precise number of offspring Ismail fathered is unclear. The figure I have quoted in the main body of the text comes from various works on evolutionary psychology (e.g. Cartwright, Evolution and Human Behaviour: p133-4; Wright, The Moral Animal: p247). However, another, earlier work on human sociobiology, David Barash’s The Whisperings Within, gives a higher figure of “1,056 offspring” (The Whisperings Within: p47). Meanwhile, an article produced by the Guinness Book of Records gives a figure of at least 342 daughters and 700 sons, while a scientific paper by Elisabeth Oberzaucher and Karl Grammer gives a still higher figure of 1171 offspring in total (Oberzaucher & Grammer 2014). The precise figure seems to be unknown, and any exact number is probably apocryphal. Nevertheless, the general point – namely, that a powerful male with access to a large harem and multiple wives and concubines is capable of fathering many offspring – is surely correct.

[2] Thus, it is important to emphasise that sexual abuse allegations should certainly not automatically be accepted as credible, given the prevalence of false rape allegations, and indeed their incentivization, especially in this age of ‘me too’ hysteria and associated witch-hunts. Indeed, western mainstream media is likely to be especially credulous with respect to allegations against a dictator whom it, and the political establishment it serves, had long reviled and demonized.
Moreover, although, as noted above, given the innate psychological and physiological differences between the sexes, women are unlikely to be effective as conventional bodyguards any more than they are effective as soldiers in wartime, it has nevertheless been suggested that they may have provided a very different form of protection to the Libyan dictator – namely, as a highly effective ‘human shield’.
On this view, under the pretence of feminism, Gaddafi may actually have been shrewdly taking advantage of misguided male chivalry and female privilege, not unreasonably surmising that any potential assassins and insurgents would almost certainly be male, and hence chivalrous, paternalistic and protective towards women, especially since such assassins were also likely to be conservative Muslims, who formed the main bulk of the domestic opposition to his regime, and the deliberate killing of women is explicitly forbidden under Islamic law (Sahih Muslim 19: 4320; cf. Sahih Muslim 19: 4321).

[3] The capture of fertile females from among enemy groups is by no means restricted to the Yąnomamö. On the contrary, it may even form the ultimate evolutionary basis for intergroup conflict and raiding among troops of chimpanzees, our species’ closest extant relative. It is also alluded to, and indeed explicitly commanded, in the Hebrew Bible (e.g. Deuteronomy 20: 13-14; Numbers 31: 17-18), and was formerly prevalent in western culture as well.
It is also very much apparent, for example, in the warfare and raiding formerly endemic in the Gobi Desert of what is today Mongolia. Thus, the mother of Genghis Khan was, at least according to legend, herself kidnapped by the Great Khan’s father. Indeed, this was apparently an accepted form of courtship on the Mongolian Steppe, as Genghis Khan’s own wife was herself stolen from him on at least one occasion by rival Steppe nomads, resulting in a son of disputed paternity (whom the great Khan perhaps tellingly named Jochi, which is said to translate as ‘guest’) and a later succession crisis.
Many anthropologists, it ought to be noted, dismiss Chagnon’s claim that Yąnomamö warfare is predicated on the capture of women. Perhaps the most famous is Chagnon’s own former student, Kenneth Good, whose main claim to fame is to have himself married a (by American standards, underage) Yąnomamö girl – who, in a dramatic falsification of her husband’s theory that would almost be amusing were it not so tragic, was herself twice abducted and raped by raiding Yąnomamö war parties.

[4] It is ironic that John Cartwright, author of Evolution and Human Behaviour, an undergraduate-level textbook on evolutionary psychology, is skeptical regarding the claim that Ismail the Bloodthirsty fathered 888 offspring, but nevertheless apparently takes at face value the claim that a Russian peasant woman had 69 offspring, a biologically far more implausible claim (Evolution and Human Behaviour: p133-4).

[5] However, here, Betzig is perhaps altogether overcautious. Thus, whether or not “political power in itself” is explained in this way (i.e. “as providing a position from which to gain reproductively”), certainly the human desire for political power must surely be explained in this way.

[6] The prospect of eugenically reengineering human nature itself so as to make utopian communism achievable, and human society less conflictual, is also unrealistic. As John Gray has noted in Straw Dogs: Thoughts on Humans and Other Animals (reviewed here), if human nature is eugenically reengineered, then it will be done, not in the interests of society, let alone humankind, as a whole, but rather in the interests of those responsible for ordering or undertaking the project – namely, scientists and, more importantly, those from whom they take their orders (e.g. government, politicians, civil servants, big business, managerial elites). Thus, Gray concludes:

“[Although] it seems feasible that over the coming century human nature will be scientifically remodelled… it will be done haphazardly, as an upshot of struggles in the murky realm where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

[7] Here, it is important to emphasize that what is exceptional about western societies is not monogamy per se. On the contrary, monogamy is common in relatively egalitarian societies (e.g. hunter-gatherer societies), especially those living at or near subsistence levels, where no male is able to secure access to sufficient resources so as to provision multiple wives and offspring (Kanazawa and Still 1999). What is exceptional about contemporary western societies is the combination of:

1) Large differentials of resource-holdings between males (i.e. social stratification); and

2) Prescriptive monogamy (i.e. polygyny is not merely not widely practised, but also actually unlawful).

[8] Quite when a degree of de facto monogamy originated in the west seems to be a matter of some dispute. Betzig views it as very much a recent phenomenon, arising with the development of complex, interdependent industrial economies, which required the cooperation of lower-status males in order to function. Here, Betzig perhaps underestimates the extent to which even pre-industrial economies required the work and cooperation of low-status males in order to function.
Thus, Betzig argues that, in ancient Rome, nominally monogamous marriages concealed rampant de facto polygyny, with emperors and other powerful males fathering multiple offspring with both slaves and other men’s wives (Betzig 1992). As evidence, she largely relies on salacious gossip about a few eminent Roman political leaders.
Similarly, in medieval Europe, she argues that, despite nominal monogamy, wealthy men fathered multiple offspring through servant girls (Betzig 1995a; Betzig 1995b). In contrast, Kevin Macdonald persuasively contends that medieval monogamy was no mere myth and most illegitimate offspring born to servant girls were fathered by men of roughly their own station (Macdonald 1995a; Macdonald 1995b).

[9] Certainly, the so-called NEET and incel phenomena seem to be correlated with one another. NEETs are disproportionately likely to be incels, and incels are disproportionately likely to be NEETs. However, the direction of causation is unclear and probably works in both directions.
On the one hand, since women are rarely attracted to men without money or the prospects of money, men without jobs are rarely able to attract wives or girlfriends. However, on the other hand, men who, for whatever reason, perceive themselves as unable to attract a wife or girlfriend even if they did have a job, may see little incentive to getting a job in the first place or keeping the one they do have.
In addition, certain aspects of personality, and indeed psychopathology, likely predispose a man both to joblessness and inability to obtain a wife or girlfriend. These include mental illness, mental and physical disabilities, and conditions such as autism.
Finally, the NEET phenomenon cannot be explained solely by the supposed decline in marriage opportunities for young men, as might be suggested by a simplistic reading of Binder (2021). Another factor is surely the increased affluence of society at large. In previous times, and in much of the developing world today, remaining voluntarily jobless would likely result in penury and destitution for all but a tiny minority of the economic elite.

[10] Indeed, during the debates surrounding the legalization of gay marriage, the prospect of the legalization of polygynous marriage was rarely discussed, and, when it was raised, it was usually invoked by the opponents of gay marriage, as a sort of reductio ad absurdum of changes in marriage laws to permit gay marriage, something champions of gay marriage were quick to dismiss as preposterous scaremongering. In short, both sides in the acrimonious debates regarding gay marriage seem to have been agreed that legalizing polygynous unions was utterly beyond the pale.

[11] Thus, father absence is a known correlate of criminality and other negative life outcomes. In fact, however, the importance of paternal investment in offspring outcomes, and indeed of parental influence more generally, has yet to be demonstrated, since the correlation between father-absence and negative life-outcomes could instead reflect the heritability of personality, including those aspects of personality that cause people to have offspring out of wedlock, die early, divorce, abandon their children or have offspring by a person who abandons their offspring or dies early (see Judith Harris’s The Nurture Assumption, which I have reviewed here). 

[12] This paradox is related to another one – namely, why people in richer societies tend to have lower fertility rates than people in poorer societies. This recent development, often referred to as the demographic transition, is paradoxical for the exact same reason that it is paradoxical for relatively wealthier people within western societies to have fewer offspring than relatively poorer people within those same societies, namely that it is elementary Darwinism that an organism with access to greater resources should channel those additional resources into increased reproduction. Interestingly, this phenomenon is not restricted to western societies. On the contrary, other wealthy industrial and post-industrial societies, such as Japan, Singapore and South Korea, have, if anything, even lower fertility rates than Europe, Australasia and North America.

[13] Actually, it is not altogether clear that women do have an innate desire to bear children. After all, in the EEA, there was no need for women to evolve a desire to bear children. All they required was a desire to have sexual intercourse (or indeed a mere willingness to acquiesce in the male desire for intercourse). In the absence of contraception, offspring would then naturally result. Indeed, other species, including presumably most of our pre-human ancestors, are surely wholly unaware of the connection between sexual intercourse and reproduction. A desire for offspring would therefore serve no adaptive function for these species at all. However, this did not stop these species from seeking out sexual opportunities and hence reproducing their kind. Nevertheless, given anecdotal evidence of so-called ‘broodiness’ among women, I suspect women do indeed have some degree of innate desire for offspring.

References

Bateman (1948) Intra-sexual selection in Drosophila. Heredity 2(Pt. 3): 349–368.
Bellis et al (2005) Measuring paternal discrepancy and its public health consequences. Journal of Epidemiology and Community Health 59(9): 749.
Betzig (1992) Roman polygyny. Ethology and Sociobiology 13(5-6): 309–349.
Betzig (1993a) Sex, succession, and stratification in the first six civilizations: How powerful men reproduced, passed power on to their sons, and used power to defend their wealth, women and children. In Lee Ellis (ed.) Social Stratification and Socioeconomic Inequality, pp. 37–74. New York: Praeger.
Betzig (1993b) Where are the bastards’ daddies? Comment on Daniel Pérusse’s ‘Cultural and reproductive success in industrial societies’. Behavioral and Brain Sciences 16: 284–285.
Betzig (1995a) Medieval monogamy. Journal of Family History 20(2): 181–216.
Betzig (1995b) Wanting women isn’t new; getting them is: very. Politics and the Life Sciences 14(1): 24–25.
Binder (2021) Why bother? The effect of declining marriage market prospects on labor-force participation by young men. Available at SSRN: https://ssrn.com/abstract=3795585 or http://dx.doi.org/10.2139/ssrn.3795585
Chagnon (1979) Is reproductive success equal in egalitarian societies? In Chagnon & Irons (eds) Evolutionary Biology and Human Social Behavior: An Anthropological Perspective, pp. 374–402. MA: Duxbury Press.
Einon (1998) How many children can one man have? Evolution and Human Behavior 19(6): 413–426.
Gilding (2005) Rampant misattributed paternity: the creation of an urban myth. People and Place 13(2): 1.
Gould (2000) How many children could Moulay Ismail have had? Evolution and Human Behavior 21(4): 295–296.
Kanazawa & Still (1999) Why monogamy? Social Forces 78(1): 25–50.
Kanazawa (2003) Can evolutionary psychology explain reproductive behavior in the contemporary United States? Sociological Quarterly 44: 291–302.
Kanazawa (2005) An empirical test of a possible solution to ‘the central theoretical problem of human sociobiology’. Journal of Cultural and Evolutionary Psychology 3: 255–266.
Khan (2010) The paternity myth: the rarity of cuckoldry. Discover, 20 June 2010.
Macdonald (1995a) The establishment and maintenance of socially imposed monogamy in Western Europe. Politics and the Life Sciences 14(1): 3–23.
Macdonald (1995b) Focusing on the group: further issues related to western monogamy. Politics and the Life Sciences 14(1): 38–46.
Oberzaucher & Grammer (2014) The case of Moulay Ismael – fact or fancy? PLoS ONE 9(2): e85292.
Orians (1969) On the evolution of mating systems in birds and mammals. American Naturalist 103(934): 589–603.
Packer et al (1995) Reproductive constraints on aggressive competition in female baboons. Nature 373: 60–63.
Pérusse (1993) Cultural and reproductive success in industrial societies: testing the relationship at the proximate and ultimate levels. Behavioral and Brain Sciences 16: 267–322.
Sanderson (2001) Explaining monogamy and polygyny in human societies: comment on Kanazawa and Still. Social Forces 80(1): 329–335.
Scheidel (2008) Monogamy and polygyny in Greece, Rome, and world history. Available at SSRN: https://ssrn.com/abstract=1214729 or http://dx.doi.org/10.2139/ssrn.1214729
Shaw (1903) Man and Superman: Maxims for Revolutionists.
Strassman (2000) Polygyny, family structure and infant mortality: a prospective study among the Dogon of Mali. In Cronk, Chagnon & Irons (eds) Adaptation and Human Behavior: An Anthropological Perspective, pp. 49–68. New York: Aldine de Gruyter.
Trivers (1972) Parental investment and sexual selection. In Campbell (ed.) Sexual Selection and the Descent of Man, pp. 136–179. Chicago: Aldine.
Vining (1986) Social versus reproductive success: the central theoretical problem of human sociobiology. Behavioral and Brain Sciences 9(1): 167–187.
Zerjal et al (2003) The genetic legacy of the Mongols. American Journal of Human Genetics 72(3): 717–721.

‘The Bell Curve’: A Book Much Read About, But Rarely Actually Read

The Bell Curve: Intelligence and Class Structure in American Life by Richard Herrnstein and Charles Murray (New York: Free Press, 1994). 

‘There’s no such thing as bad publicity’ – or so contends a famous adage of the marketing industry. 

‘The Bell Curve: Intelligence and Class Structure in American Life’ by Richard Herrnstein and Charles Murray is perhaps a case in point. 

This dry, technical, academic social science treatise, full of statistical analyses, graphs, tables, endnotes and appendices, and totalling almost 900 pages, became an unlikely nonfiction bestseller in the mid-1990s on a wave of almost universally bad publicity in which the work was variously denounced as racist, pseudoscientific, fascist, social Darwinist, eugenicist and sometimes even just plain wrong. 

Readers who hurried to the local bookstore eagerly anticipating an incendiary racialist polemic were, however, in for a disappointment. 

Indeed, one suspects that, along with the Bible and Stephen Hawking’s A Brief History of Time, ‘The Bell Curve’ became one of those bestsellers that many people bought, but few managed to finish. 

The Bell Curve’ thus became, like another book that I have recently reviewed, a book much read about, but rarely actually read – at least in full. 

As a result, as with that other book, many myths have emerged regarding the content of ‘The Bell Curve’ that are quite contradicted when one actually takes the time and trouble to read it for oneself. 

Subject Matter 

The first myth of ‘The Bell Curve’ is that it was a book about race differences, or, more specifically, about race differences in intelligence. In fact, however, this is not true. 

Thus, ‘The Bell Curve’ is a book so controversial that the controversy begins with the very identification of its subject-matter. 

On the one hand, the book’s critics focused almost exclusively on the subject of race. This led to the common perception that ‘The Bell Curve’ was a book about race and race differences in intelligence.[1]

Ironically, many racialists seem to have taken these leftist critics at their word, enthusiastically citing the work as support for their own views regarding race differences in intelligence.  

On the other hand, however, surviving co-author Charles Murray insisted from the outset that the issue of race, and of race differences in intelligence, was always peripheral to his and co-author Richard Herrnstein’s primary interest and focus, which was, he claimed, the supposed emergence of a ‘Cognitive Elite’ in modern America. 

Actually, however, both these views seem to be incorrect. While the first section of the book does indeed focus on the supposed emergence of a ‘Cognitive Elite’ in modern America, the overall theme of the book seems to be rather broader. 

Thus, the second section of the book focuses on the association between intelligence and various perceived social pathologies, such as unemployment, welfare dependency, illegitimacy, crime and single-parenthood. 

To the extent the book has a single overarching theme, one might say that it is a book about the social and economic correlates of intelligence, as measured by IQ tests, in modern America.  

Its overall conclusion is that intelligence is indeed a strong predictor of social and economic outcomes for modern Americans – high intelligence being associated with socially desirable outcomes, and low intelligence with socially undesirable ones. 

On the other hand, however, the topic of race is not quite as peripheral to the book’s themes as sometimes implied by Murray and some of his defenders. 

Thus, it is sometimes claimed that only a single chapter dealt with race. Actually, however, two chapters focus on race differences, namely chapters 13 and 14, respectively titled ‘Ethnic Differences in Cognitive Ability’ and ‘Ethnic Inequalities in Relation to IQ’. 

In addition, a further two chapters, namely chapters 19 and 20, entitled respectively ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, deal with the topic of affirmative action, as does the final appendix, entitled ‘The Evolution of Affirmative Action in the Workplace’ – and, although affirmative action has been employed to favour women as well as racial minorities, it is with racial preferences that Herrnstein and Murray are primarily concerned. 

However, these chapters represent only 142 of the book’s nearly 900 pages. 

Moreover, in much of the remainder of the book, the authors actually explicitly restrict their analysis to white Americans exclusively. They do so precisely because the well-documented differences between the races, in IQ as well as in many of the social outcomes whose correlation with IQ the book discusses, meant that race would otherwise have represented a potential confounding factor for which they would have had to control. 

Herrnstein and Murray therefore took the decision to extend their analysis to race differences near the end of their book, in order to address the question of the extent to which differences in intelligence, which they had already demonstrated to be an important correlate of social and economic outcomes among whites, are also capable of explaining differences in achievement as between races. 

Without these chapters, the book would have been incomplete, and the authors would have laid themselves open to the charge of political correctness and of ignoring the elephant in the room.

Race and Intelligence 

If the first controversy of ‘The Bell Curve’ concerns whether it is primarily a book about race and race differences in intelligence, the second controversy is over what exactly the authors concluded with respect to this vexed and contentious issue. 

Thus, the same leftist critics who claimed that ‘The Bell Curve’ was primarily a book about race and race differences in intelligence, also accused the authors of concluding that black people are innately less intelligent than whites.

Some racists, as I have already noted, evidently took the leftists at their word, and enthusiastically cited the book as support and authority for this view. 

However, in subsequent interviews, Murray always insisted he and Herrnstein had actually remained “resolutely agnostic” on the extent to which genetic factors underlay the IQ gap. 

In the text itself, Herrnstein and Murray do indeed declare themselves “resolutely agnostic” with regard to the extent of the genetic contribution to the test score gap (p311).

However, just a couple of sentences before they use this very phrase, they also appear to conclude that genes are indeed at least part of the explanation, writing: 

“It seems highly likely to us that both genes and the environment have something to do with racial differences [in IQ]” (p311). 

This paragraph, buried near the end of chapter 13, during an extended discussion of evidence relating to the causes of race differences in intelligence, is the closest the authors come to actually declaring any definitive conclusion regarding the causes of the black-white test score gap.[2]

This conclusion, though phrased in sober and restrained terms, is, of course, itself sufficient to place its authors outside the bounds of acceptable opinion in the early twenty-first century, or indeed in the late twentieth century when the book was first published, and explains, and, for some, justifies, the opprobrium heaped upon the book’s surviving co-author from that day forth. 

Intelligence and Social Class

It seems likely that races which evolved on separate continents, in sufficient reproductive isolation from one another to have evolved the obvious (and not so obvious) physiological differences that we all observe when we look at the faces, or bodily statures, of people of different races (and that we indirectly observe when we look at the results of different athletic events at the Olympic Games), would also have evolved to differ in psychological traits, including intelligence.

Indeed, it is surely unlikely, on a priori grounds alone, that all different human races have evolved, purely by chance, the exact same level of intelligence. 

However, if race differences in intelligence are therefore probable, the case for differences in intelligence as between social classes is positively compelling.

Indeed, on a priori grounds alone, it is inevitable that social classes will come to differ in IQ, if one accepts two premises, namely: 

1) Increased intelligence is associated with upward social mobility; and 
2) Intelligence is passed down in families.

In other words, if more intelligent people tend, on average, to get higher-paying jobs than those of lower intelligence, and the intelligence of parents is passed on to their offspring, then it is inevitable that the offspring of people with higher-paying jobs will, on average, themselves be of higher intelligence than are the offspring of people with lower paying jobs.  

This, of course, follows naturally from the infamous syllogism formulated by ‘Bell Curve’ co-author Richard Herrnstein way back in the 1970s (p10; p105). 

Incidentally, this second premise, namely that intelligence is passed down in families, does not depend on the heritability of IQ in the strict biological sense. After all, even if heritability of intelligence were zero, intelligence could still be passed down in families by environmental factors (e.g. the ‘better’ parenting techniques of high IQ parents, or the superior material conditions in wealthy homes). 
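Indeed, the logic can be made concrete with a toy simulation. The sketch below is purely illustrative – every number in it is invented – and, in keeping with the point just made, its transmission parameter is deliberately agnostic as between genetic and environmental mechanisms:

```python
import random

random.seed(42)

POP_SIZE = 10_000
TRANSMISSION = 0.5  # strength of parent-to-offspring IQ transmission;
                    # deliberately agnostic between genes and environment

# Parental generation: IQ distributed as N(100, 15)
parents = sorted(random.gauss(100, 15) for _ in range(POP_SIZE))

# Premise 1: higher IQ brings upward mobility. Crudely, the top half
# by IQ becomes the 'upper' class, the bottom half the 'lower' class.
lower, upper = parents[:POP_SIZE // 2], parents[POP_SIZE // 2:]

# Premise 2: offspring IQ partly tracks parental IQ,
# with regression towards the population mean plus noise.
def child_iq(parent_iq):
    return TRANSMISSION * parent_iq + (1 - TRANSMISSION) * 100 + random.gauss(0, 10)

mean = lambda xs: sum(xs) / len(xs)
print(f"children of the lower class: mean IQ {mean([child_iq(iq) for iq in lower]):.1f}")
print(f"children of the upper class: mean IQ {mean([child_iq(iq) for iq in upper]):.1f}")
# Approximate output: ~94 versus ~106. A class-IQ gap emerges
# from the two premises alone, whatever the mechanism of transmission.
```

Run for further generations, the same two premises keep the gap in place: each generation’s sorting by IQ reconstitutes the class difference that transmission then passes on.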

The existence of an association between social class and IQ ought, then, to be entirely uncontroversial to anyone who takes any time whatsoever to think about the issue. 

If there remains any room for reasoned disagreement, it is only over the direction of causation – namely the question of whether:  

1) High intelligence causes upward social mobility; or 
2) A privileged upbringing causes higher intelligence.

These two processes are, of course, not mutually exclusive. Indeed, it would seem intuitively probable that both factors would be at work. 

Interestingly, however, the evidence demonstrates only the former. 

Thus, even among siblings from the same family, the sibling with the higher childhood IQ will, on average, achieve higher socioeconomic status as an adult. Likewise, the socioeconomic status a person achieves as an adult correlates more strongly with their own IQ score than it does with the socioeconomic status of their parents or of the household they grew up in (see Straight Talk About Mental Tests: p195). 

In contrast, family, twin and adoption studies of the sort conducted by behavioural geneticists have concurred in suggesting that the so-called shared family environment (i.e. those aspects of the family environment shared by siblings from the same household, including social class) has but little effect on adult IQ. 

In other words, children raised in the same home, whether full- or half-siblings or adoptees, are, by the time they reach adulthood, no more similar to one another in IQ than are children of the same degree of biological relatedness brought up in entirely different family homes (see The Nurture Assumption: reviewed here). 

However, while the direction of causation may still be disputed by intelligent (if uninformed) laypeople, the existence of an association between intelligence and social class ought not, one might think, to be in dispute. 

Yet, in Britain today, in discussions of social mobility, if children from deprived backgrounds are underrepresented at, say, elite universities, this is almost invariably taken as incontrovertible proof that the system is rigged against them. The fact that children from different socio-economic backgrounds differ in intelligence is, meanwhile, simply ignored. 

When mention is made of this incontrovertible fact, leftist hysteria typically ensues. Thus, in 2008, psychiatrist Bruce Charlton rightly observed that, in discussions of social mobility: 

“A simple fact has been missed: higher social classes have a significantly higher average IQ than lower social classes” (Clark 2008). 

For his trouble, Charlton found himself condemned by the National Union of Students and assorted rent-a-quote academics and professional damned fools, while even the ostensibly ‘right-wing’ Daily Mail newspaper saw fit to publish the headline ‘Higher social classes have significantly HIGHER IQs than working class, claims academic’, as if this were in some way a controversial or contentious claim (Clark 2008). 

Meanwhile, when, in the same year, a professor at University College made a similar point with regard to the admission of working-class students to medical schools, even the then government Health Minister, Ben Bradshaw, saw fit to offer his two cents’ worth (which were not worth even that), declaring: 

“It is extraordinary to equate intellectual ability with social class” (Beckford 2008). 

Actually, however, what is truly extraordinary is that any intelligent person, least of all a government minister, would dispute the existence of such a link. 

Cognitive Stratification 

Herrnstein’s syllogism leads to a related paradox – namely that, as environmental conditions are equalized, heritability increases. 

Thus, as large differences in the sorts of environmental factors known to affect IQ (e.g. malnutrition) are eliminated, so differences in income have come to increasingly reflect differences in innate ability. 
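The paradox is easiest to see in the standard variance-partition notation of quantitative genetics – a textbook identity, not anything peculiar to Herrnstein and Murray:

```latex
% Heritability is the genetic share of total (phenotypic) trait variance:
h^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E}
% Equalizing environments shrinks V_E while leaving V_G untouched, so:
\lim_{V_E \to 0} \frac{V_G}{V_G + V_E} = 1
```

In other words, the more successfully a society equalizes environments, the more closely the differences that remain track the genetic ones.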

Moreover, the more gifted children from deprived backgrounds escape their humble origins, then, given the substantial heritability of IQ, the fewer such children will remain among the working class in subsequent generations. 

The result is what Herrnstein and Murray call the ‘Cognitive Stratification’ of society and the emergence of what they call a ‘Cognitive Elite’. 

Thus, in feudal society, a man’s social status was determined largely by ‘accident of birth’ (i.e. he inherited the social station of his father). 

Women’s status, meanwhile, was determined, in addition, by what we might call ‘accident of marriage’ – and, to a large extent, it still is.

However, today, a person’s social status, at least according to Herrnstein and Murray, is determined primarily, and increasingly, by their level of intelligence. 

Of course, people are not allocated to a particular social class by IQ testing itself. Indeed, the use of IQ tests by employers and educators has been largely outlawed on account of its disparate impact (or ‘indirect discrimination’, to use the equivalent British phrase) with regard to race (see below). 

However, the skills and abilities increasingly at a premium in western society (and, increasingly, in many non-western societies as well) mean that, through the operation of the education system and labour market, individuals are effectively sorted by IQ, even without anyone ever actually sitting an IQ test. 

In other words, society is becoming increasingly meritocratic – and the form of ostensible ‘merit’ upon which attainment is based is intelligence. 

For Herrnstein and Murray, this is a mixed blessing: 

“That the brightest are identified has its benefits. That they become so isolated and inbred has its costs” (p25). 

However, the correlation between socioeconomic status and intelligence remains imperfect. 

For one thing, there are still a few highly remunerated, and very high-status, occupations that rely on skills that are not especially, if at all, related to intelligence. I think here, in particular, of professional sports and the entertainment industry. Thus, leading actors, pop stars and sports stars are sometimes extremely well-remunerated, and very high-status, but may not be especially intelligent.  

More importantly, while highly intelligent people might be, almost by definition, the only ones capable of performing cognitively-demanding, and hence highly remunerated, occupations, this is not to say that all highly intelligent people are necessarily employed in such occupations. 

Thus, whereas all people employed in cognitively-demanding occupations are, almost by definition, of high intelligence, people of all intelligence levels are capable of doing cognitively-undemanding jobs.

Thus, a few people of high intellectual ability remain in low-paid work, whether on account of personality factors (e.g. laziness), mental illness, lack of opportunity or sometimes even by choice (which choice is, of course, itself a reflection of personality factors). 

Therefore, the correlation between IQ and occupation is far from perfect. 

Job Performance

The sorting of people with respect to their intelligence begins in the education system. However, it continues in the workplace. 

Thus, general intelligence, as measured by IQ testing, is, the authors claim, the strongest predictor of occupational performance in virtually every occupation. Moreover, in general, the higher paid and higher status the occupation in question, the stronger the correlation between performance and IQ. 

However, as Herrnstein and Murray are at pains to emphasize, intelligence is a strong predictor of occupational performance even in apparently cognitively undemanding occupations, and indeed almost always a better predictor of performance than tests of the specific abilities the job involves on a daily basis. 

However, in the USA, employers are barred from using testing to select among candidates for a job or for promotion unless they can show the test has a ‘manifest relationship’ to the work, and the burden of proof is on the employer to show such a relationship. Otherwise, given their ‘disparate impact’ with regard to race (i.e. the fact that some groups perform worse), the tests in question are deemed indirectly discriminatory and hence unlawful. 

Therefore, employers are compelled to test, not general ability, but rather the specific skills required in the job in question, where a ‘manifest relationship’ is easier to demonstrate in court. 

However, since even tests of specific abilities almost invariably still tap into the general factor of intelligence, races inevitably score differently even on these tests. 

Indeed, because of the ubiquity and predictive power of the g factor, it is almost impossible to design any type of standardized test, whether of specific or general ability or knowledge, in which different racial groups do not perform differently. 

However, if some groups outperform others, the American legal system presumes a priori that this reflects test bias rather than differences in ability. 

Therefore, although the words ‘all men are created equal’ are not, contrary to popular opinion, part of the US constitution, the Supreme Court has effectively chosen, by legal fiat, to decide cases as if they were. 

However, just as a law passed by Congress cannot repeal the law of gravity, so a legal presumption that groups are equal in ability cannot make it so. 

Thus, the bar on the use of IQ testing by employers has not prevented society in general from being increasingly stratified by intelligence, the precise thing measured by the outlawed tests. 

Nevertheless, Herrnstein and Murray estimate that the effective bar on the use of IQ testing makes this process less efficient, costing the economy somewhere between 13 and 80 billion dollars in 1980 alone (p85). 

Conscientiousness and Career Success

I am skeptical of Herrnstein and Murray’s conclusion that IQ is the best predictor of academic and career success. I suspect hard work, not to mention a willingness to toady, toe the line, and obey orders, is at least as important in even the most cognitively-demanding careers, as well as in schoolwork and academic advancement. 

Perhaps the reason these factors have not (yet) been found to be as highly correlated with earnings as is IQ is that we have not yet developed a way of measuring these aspects of personality as accurately as we can measure a person’s intelligence through an IQ test. 

For example, the closest psychometricians have come to measuring capacity for hard work is the personality factor known as conscientiousness, one of the Big Five factors of personality revealed by psychometric testing. 

Conscientiousness does indeed correlate with success in education and work (e.g. Barrick & Mount 1991). However, the correlation is weaker than that between IQ and success in education and at work. 

However, this may be because personality is less easily measured by current psychometric methods than is intelligence – not least because personality tests generally rely on self-report, rather than measuring actual behaviour.
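This is the familiar psychometric problem of attenuation: other things being equal, the noisier the measure, the weaker the observed correlation. Spearman’s classic correction formula captures the point (the reliability figures below are invented purely for illustration, not taken from ‘The Bell Curve’ or from Barrick and Mount):

```latex
% Observed correlation = true correlation, attenuated by the
% reliabilities of the two measures (Spearman's formula):
r_{\mathrm{observed}} = r_{\mathrm{true}} \sqrt{r_{xx}\, r_{yy}}
% E.g. if self-report conscientiousness has reliability r_xx = 0.6 and the
% outcome measure has r_yy = 0.8, a true correlation of 0.4 would surface as:
r_{\mathrm{observed}} = 0.4 \times \sqrt{0.6 \times 0.8} \approx 0.28
```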

Thus, to assess conscientiousness, questionnaires ask respondents whether they ‘see themselves as organized’, ‘as able to follow an objective through to completion’, ‘as a reliable worker’, etc. 

This would be the equivalent of an IQ test that, instead of directly testing a person’s ability to recognize patterns or manipulate shapes by having them do just this, simply asked respondents how good they perceived themselves as being at recognizing patterns, or manipulating shapes. 

Obviously, this would be a less accurate measure of intelligence than a normal IQ test. After all, some people lie, some are falsely modest and some are genuinely deluded. 

Indeed, according to the Dunning-Kruger effect, it is those most lacking in ability who most overestimate their abilities – precisely because they lack the ability to accurately assess their ability (Kruger & Dunning 1999). 

In an IQ test, on the other hand, one can sometimes pretend to be dumber than one is, by deliberately getting questions wrong that one knows the answer to.[3]

However, it is not usually possible to pretend to be smarter than one is by getting more questions right, simply because one would not know the right answers. 

‘Affirmative Action’ and Test Bias 

In chapters nineteen and twenty, respectively entitled ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, the authors discuss so-called affirmative action, an American euphemism for systematic and overt discrimination against white males. 

It is well-documented that, in the United States, blacks, on average, earn less than white Americans. On the other hand, it is less well-documented that whites, on average, earn less than people of Indian, Chinese and Jewish ancestry. 

With the possible exception of Indian-Americans, these differences, of course, broadly mirror those in average IQ scores. 

Indeed, according to Herrnstein and Murray, the difference in earnings between whites and blacks, not only disappears after controlling for differences in IQ, but is actually partially reversed. Thus, blacks are actually somewhat overrepresented in professional and white-collar occupations as compared to whites of equivalent IQ. 

This remarkable finding Herrnstein and Murray attribute to the effects of affirmative action programmes, whereby black Americans are appointed and promoted beyond what their ability alone would merit, through discrimination in their favour. 
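What ‘controlling for IQ’ means here can be made concrete with a toy regression on synthetic data. The sketch below illustrates only the statistical logic, not Herrnstein and Murray’s actual analysis, and every parameter in it is invented: two groups are given different IQ distributions, earnings are made to depend on IQ plus a small premium for the lower-scoring group, and the raw gap then runs one way while the IQ-adjusted gap runs the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000  # individuals per group

# Two synthetic groups with different mean IQs (all numbers invented)
iq_a = rng.normal(100, 15, n)
iq_b = rng.normal(85, 15, n)

# Earnings depend on IQ, plus a small premium for group B, standing in
# for the preferential appointment and promotion the text describes
def earnings(iq, premium):
    return 20_000 + 400 * iq + premium + rng.normal(0, 5_000, len(iq))

earn_a, earn_b = earnings(iq_a, 0), earnings(iq_b, 2_000)

# Raw gap: group B earns less overall, because its mean IQ is lower
print(f"raw gap (B - A):         {earn_b.mean() - earn_a.mean():+,.0f}")

# 'Controlling for IQ': regress earnings on IQ and a group indicator;
# the indicator's coefficient is the gap at any given level of IQ
iq = np.concatenate([iq_a, iq_b])
is_b = np.concatenate([np.zeros(n), np.ones(n)])
X = np.column_stack([np.ones(2 * n), iq, is_b])
y = np.concatenate([earn_a, earn_b])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"IQ-adjusted gap (B - A): {coef[2]:+,.0f}")  # positive: reversed
```

Read in the opposite direction, the same arithmetic underwrites the test-bias interpretation canvassed below: a gap that reverses on adjustment is equally consistent with the tests under-predicting the lower-scoring group’s real-world performance.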

Interestingly, however, this contradicts what the authors wrote in an earlier chapter, where they addressed the question of test bias (pp280-286). 

There, they concluded that testing was not biased against African-Americans, because, among other reasons, IQ tests were equally predictive of real-world outcomes (e.g. in education and employment) for both blacks and whites, and blacks do not perform any better in the workplace or in education than their IQ scores predict. 

This is, one might argue, not wholly convincing evidence that IQ tests are not biased against blacks. It might simply suggest that society at large, including the education system and the workplace, is just as biased against blacks as are the hated IQ tests. This is, of course, precisely what we are often told by television and media commentators and politicians who insist that America is a racist society, in which such mysterious forces as ‘systemic racism’ and ‘white privilege’ are pervasive. 

In fact, the authors acknowledge this objection, conceding:  

“The tests may be biased against disadvantaged groups, but the traces of bias are invisible because the bias permeates all areas of the group’s performance. Accordingly, it would be as useless to look for evidence of test bias as it would be for Einstein’s imaginary person traveling near the speed of light to try to determine whether time has slowed. Einstein’s traveler has no clock that exists independent of his space-time context. In assessing test bias, we would have no test or criterion measure that exists independent of this culture and its history. This form of bias would pervade everything” (p285). 

Herrnstein and Murray ultimately reject this conclusion on the grounds that it is simply implausible to assume that: 

“[So] many of the performance yardsticks in the society at large are not only biased, they are all so similar in the degree to which they distort the truth – in every occupation, every type of educational institution, every achievement measure, every performance measure – that no differential distortion is picked up by the data” (p285). 

In fact, however, Nicholas Mackintosh identifies one area where IQ tests do indeed under-predict black performance, namely with regard to so-called adaptive behaviours – i.e. the ability to cope with day-to-day life (e.g. feeding, dressing and cleaning oneself, and interacting with others in a ‘normal’ manner). 

Blacks with low IQs are generally much more functional in these respects than whites or Asians with equivalent low IQs (see IQ and Human Intelligence: p356-7).[4]

Yet Herrnstein and Murray seem, evidently without realizing it, to have identified yet another sphere where standardized testing does indeed under-predict real-world outcomes for blacks. 

Thus, if indeed, as Herrnstein and Murray claim, blacks are somewhat overrepresented in professional and white-collar occupations relative to their IQs, this suggests that blacks do indeed do better in real-world outcomes than their test results would predict. While Herrnstein and Murray attribute this to the effect of discrimination against whites, it could instead surely be interpreted as evidence that the tests are biased against blacks. 

Policy Implications? 

What, then, are the policy implications that Herrnstein and Murray draw from the findings that they report? 

In The Blank Slate: The Modern Denial of Human Nature, cognitive scientist, linguist and popular science writer Steven Pinker popularizes the notion that recognizing the existence of innate differences between individuals and groups in traits such as intelligence does not necessarily lead to ‘right-wing’ political implications. 

Thus, a leftist might accept the existence of innate differences in ability, but conclude that, far from justifying inequality, this is all the more reason to compensate the, if you like, ‘cognitively disadvantaged’ for their innate deficiencies, differences which are, being innate, hardly something for which they can legitimately be blamed. 

Herrnstein and Murray reject this conclusion, but acknowledge it is compatible with their data. Thus, in an afterword to later editions, Murray writes: 

“If intelligence plays an important role in determining how well one does in life, and intelligence is conferred on a person through a combination of genetic and environmental factors over which that person has no control, the most obvious political implication is that we need a Rawlsian egalitarian state, compensating the less advantaged for the unfair allocation of intellectual gifts” (p554).[5]

Interestingly, Pinker’s notion of a ‘hereditarian left’, and the related concept of Bell Curve liberals, is not entirely imaginary. On the contrary, it used to be quite mainstream. 

Thus, it was the radical leftist post-war Labour government that imposed the tripartite system on schools in the UK in 1945, which involved allocating pupils to different schools on the basis of their performance in what was then called the 11-plus exam, taken by children at age eleven, which tested both ability and acquired knowledge. This was thought by leftists to be a fair system that would enable bright, able youngsters from deprived and disadvantaged working-class backgrounds to achieve their full potential.[6]

Indeed, while contemporary Cultural Marxists emphatically deny the existence of innate differences in ability as between individuals and groups, Marx himself laboured under no such delusion.

On the contrary, in advocating, in his famous (plagiarized) aphorism, ‘From each according to his ability; to each according to his need’, Marx implicitly recognized that individuals differ in “ability” – and, given that, in the unrealistic communist utopia he envisaged, environmental conditions were ostensibly to be equalized, he presumably conceived of these differences as innate in origin. 

However, a distinction must be made here. While it is possible to justify economic redistributive policies on Rawlsian grounds, it is not possible to justify affirmative action.

Thus, one might well reasonably contend that the ‘cognitively disadvantaged’ should be compensated for their innate deficiencies through economic redistribution. Indeed, to some extent, most Western polities already do this, by providing welfare payments and state-funded, or state-subsidized, care to those whose cognitive impairment is such as to qualify as a disability and hence render them incapable of looking after or providing for themselves. 

However, we are unlikely to believe that such persons should be given entry to medical school such that they are one day liable to be responsible for performing heart surgery on us or diagnosing our medical conditions. 

In short, socialist redistribution is defensible – but affirmative action is definitely not! 

Reception and Readability 

The reception accorded ‘The Bell Curve’ in 1994 echoed that accorded another book that I have also recently reviewed, but that was published some two decades earlier, namely Edward O. Wilson’s Sociobiology: The New Synthesis.

Both were greeted with similar indignant moralistic outrage by many social scientists, who even employed similar pejorative soundbites (‘genetic determinism’, ‘reductionism’, ‘biology as destiny’) in condemning the two books. Moreover, in both cases, the academic uproar even spilled over into a mainstream media moral panic, with pieces appearing in the popular press attacking the two books. 

Yet, in both cases, the controversy focused almost exclusively on just a small part of each book – the single chapter in Sociobiology: The New Synthesis focusing on humans and the few chapters in ‘The Bell Curve’ discussing race. 

In truth, however, both books were massive tomes of which these sections represented only a small part. 

Indeed, due to their size, one suspects most of the books’ detractors never actually read them in full for themselves, including, it seems, many of those who nevertheless took it upon themselves to write formal critiques. This is what led to the massive disconnect between what most people thought the books said and their actual content. 

However, there is a crucial difference. 

Sociobiology: The New Synthesis was a long book of necessity, given the scale of the project Wilson set himself. 

As I have written in my review of that work, the scale of Wilson’s ambition can hardly be exaggerated. He sought to provide a new foundation for the whole field of animal behaviour, then, almost as an afterthought, sought to extend this ‘New Synthesis’ to human behaviour as well, which meant providing a new foundation, not for a single subfield within biology, but for several whole disciplines (psychology, sociology, economics and cultural anthropology) that were formerly almost unconnected to biology. Then, in a few provocative sentences, he even sought to provide a new foundation for moral philosophy, and perhaps epistemology too. 

Sociobiology: The New Synthesis was, then, inevitably and of necessity, a long book. Indeed, given that his musings regarding the human species were largely (but not wholly) restricted to a single chapter, one could even make a case that it was too short – and it is no accident that Wilson subsequently extended his writings with regard to the human species to a book-length manuscript. 

Yet, while Sociobiology was of necessity a long book, ‘The Bell Curve: Intelligence and Class Structure in America’ is, for me, unnecessarily overlong. 

After all, Herrnstein and Murray’s thesis was actually quite simple – namely that cognitive ability, as captured by IQ testing, is a major correlate of many important social outcomes in modern America. 

Yet they reiterate this point, for one social outcome after another, chapter after chapter. 

In my view, Herrnstein and Murray’s conclusion would have been more effectively transmitted to the audience they presumably sought to reach had they been more succinct in their writing style and presentation of their data. 

Had that been the case then perhaps rather more of the many people who bought the book, and helped make it into an unlikely nonfiction bestseller in 1994, might actually have managed to read it – and perhaps even been persuaded by its thesis. 

For casual readers interested in this topic, I would recommend instead Intelligence, Race, And Genetics: Conversations With Arthur R. Jensen (which I have reviewed here, here and here). 

Endnotes

[1] For example, Francis Wheen, a professional damned fool and columnist for the Guardian newspaper (which two occupations seem to be largely interchangeable) claimed that: 

“The Bell Curve (1994), runs to more than 800 pages but can be summarised in a few sentences. Black people are more stupid than white people: always have been, always will be. This is why they have less economic and social success. Since the fault lies in their genes, they are doomed to be at the bottom of the heap now and forever” (Wheen 2000). 

In making this claim, Wheen clearly demonstrates that he has read few if any of those 800 pages to which he refers.

[2] Although their discussion of the evidence relating to the causes, genetic or environmental, of the black-white test score gap is extensive, it is not exhaustive. For example, J. Philippe Rushton, the author of Race, Evolution and Behavior (reviewed here and here), argues that, despite the controversy their book provoked, Herrnstein and Murray actually didn’t go far enough on race, omitting, for example, any real discussion, save a passing mention in Appendix 5, of race differences in brain size (Rushton 1997). On the other hand, Herrnstein and Murray also did not mention studies that failed to establish any correlation between IQ and blood groups among African-Americans, studies interpreted as supporting an environmentalist interpretation of race differences in intelligence (Loehlin et al 1973; Scarr et al 1977). For readers interested in a more complete discussion of the evidence regarding the relative contributions of environment and heredity to the differences in IQ scores of different races, see my review of Richard Lynn’s Race Differences in Intelligence: An Evolutionary Analysis, available here.

[3] For example, some defendants in serious criminal cases have been accused of deliberately getting questions wrong on IQ tests in order to qualify as mentally subnormal when before the courts for sentencing, so as to be granted mitigation of sentence on this ground or, more specifically, to evade the death penalty. 

[4] This may be because whites or Asians with such low IQs are more likely to have such impaired cognitive abilities because of underlying conditions (e.g. chromosomal abnormalities, brain damage) that handicap them over and above the deficit reflected in IQ score alone. On the other hand, blacks with similarly low IQs are still within the normal range for their own race. Therefore, rather than suffering from, say, a chromosomal abnormality or brain damage, they are relatively more likely to simply be at the tail-end of the normal range of IQs within their group, and hence normal in other respects.

[5] The term Rawlsian is a reference to political theorist John Rawls’s version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the ‘original position’, where they would be unaware, not only of their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, of their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’.

[6] The tripartite system did indeed enable many working-class children to achieve a much higher economic status than their parents, although this was partly due to the expansion of the middle-class sector of the economy over the same time-period. It was also later Labour administrations who largely abolished the 11-plus system, not least because, unsurprisingly given the heritability of intelligence and personality, children from middle-class backgrounds tended to do better on it than did children from working-class backgrounds.

References 

Barrick & Mount (1991) ‘The big five personality dimensions and job performance: a meta-analysis’, Personnel Psychology 44(1): 1–26. 
Beckford (2008) ‘Working classes “lack intelligence to be doctors”, claims academic’, Daily Telegraph, 4 June 2008. 
Clark (2008) ‘Higher social classes have significantly HIGHER IQs than working class, claims academic’, Daily Mail, 22 May 2008. 
Kruger & Dunning (1999) ‘Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments’, Journal of Personality and Social Psychology 77(6): 1121–34. 
Loehlin et al (1973) ‘Blood group genes and negro-white ability differences’, Behavior Genetics 3(3): 263–270. 
Rushton, J. P. (1997) ‘Why The Bell Curve didn’t go far enough on race’, in E. White (ed.), Intelligence, Political Inequality, and Public Policy (pp. 119–140), Westport, CT: Praeger. 
Scarr et al (1977) ‘Absence of a relationship between degree of white ancestry and intellectual skills within a black population’, Human Genetics 39(1): 69–86. 
Wheen (2000) ‘The “science” behind racism’, Guardian, 10 May 2000. 

John R Baker’s ‘Race’: “A Reminder of What Was Possible Before the Curtain Came Down”

‘Race’, by John R. Baker, Oxford University Press, 1974.

John Baker’s ‘Race’ represents a triumph of scholarship across a range of fields, including biology, ancient history, archaeology, history of science, psychometrics and anthropology.

First published by Oxford University Press in 1974, it also marks a watershed in Western thought – the last time a major and prestigious publisher put its name to an overtly racialist work.

As science writer Marek Kohn writes:

“Baker’s treatise, compendious and ponderous, is possibly the last major statement of traditional race science written in English” (The Race Gallery: p61).

Inevitably for a scientific work first published over forty years ago, ‘Race’ is dated. In particular, the DNA revolution in population genetics has revolutionized our understanding of the genetic differences and relatedness between different human populations.

Lacking access to such data, Baker had only indirect phenotypic evidence (i.e. the morphological similarities and differences between different peoples), as well as historical and geographic evidence, with which to infer such relationships and hence construct his racial phylogeny and taxonomy.

Phenotypic similarity is obviously a less reliable method of determining the relatedness between groups than is provided by genome analysis, since there is always the problem of distinguishing homology from analogy and hence misinterpreting a trait that has independently evolved in different populations as evidence of relatedness.[1]

However, I found only one case of genetic studies decisively contradicting Baker’s conclusions. Thus, whereas Baker classes the Ainu People of Japan as Europid (p158; p173; p424; p625), recent genetic studies suggest that the Ainu have little or no genetic affinities to Caucasoid populations and are most closely related to other East Asians.[2]

On the other hand, however, Baker’s omission of genetic data means that, unusually for a scientific work, in the material he does cover, ‘Race’ scarcely seems to have dated at all. This is because the primary focus of Baker’s book – namely, morphological differences between races – is a field of study that has become politically suspect and in which new research has now all but ceased.[3]

Yet in the nineteenth- and early-twentieth century, when the discipline of anthropology first emerged as a distinct science, the study of race differences in morphology was the central focus of the entire science of anthropology.

Thus, Baker’s ‘Race’ can be viewed as the final summation of the accumulated findings of the ‘old-style’ physical anthropology of the nineteenth and early-twentieth centuries, published at the very moment this intellectual tradition was in its death throes.

Accessibility

Baker’s ‘Race’ is indeed a magnum opus. Unfortunately, however, at over 600 pages, embarking on reading ‘Race’ might seem almost like a lifetime’s work in and of itself.

Not only is it a very long book, but, in addition, much of the material, particularly on morphological race differences and their measurement, is highly technical, and will be readily intelligible only to the dwindling band of biological anthropologists who, in the genomic age, still study such things.

This inaccessibility is exacerbated by the fact that Baker does not use endnotes, except for his references, and only very occasionally uses footnotes. Instead, he includes even technical and peripheral material in the main body of his text, but indicates that material is technical or peripheral by printing it in a smaller font-size.[4]

Baker’s terminology is also confusing.[5] He prefers the ‘-id’ suffix to the more familiar ‘-oid’ and ‘-ic’ (e.g. ‘Negrid‘ and ‘Nordid‘ rather than ‘Negroid’ and ‘Nordic‘) and eschews the familiar terms Caucasian or Caucasoid, on the grounds that:

“The inhabitants of the Caucasus region are very diverse and very few of them are typical of any large section of Europids” (p205).

However, his own preferred alternative term, ‘Europid’, is arguably equally misleading as it contributes to the already common conflation of Caucasian with white European, even though, as Baker is at pains to emphasize elsewhere in his treatise, populations from the Middle East, North Africa and even the Indian subcontinent are also ‘Europid’ (i.e. Caucasoid) in Baker’s judgement.

In contrast, the term Caucasoid, or even Caucasian, causes little confusion in my experience, since it is today generally understood as a racial term and not as a geographical reference to the Caucasus region.[6]

At any rate, a similar criticism could surely be levelled at the term ‘Mongoloid’ (or, as Baker prefers, ‘Mongolid’), since Mongolian people are similarly quite atypical of other East Asian populations. Despite the brief ascendancy of the Mongol Empire, and its genetic impact (as well as that of previous waves of conquest by horse peoples of the Eurasian Steppe), the Mongols were formerly a rather marginal people confined to the arid fringes of the indigenous home range of the so-called Mongoloid race, which had long been centred in China, the self-styled Middle Kingdom.[7]

Certainly, the term ‘Caucasoid’ makes little etymological sense. However, this is also true of a lot of words which we nevertheless continue to make use of. Indeed, since all words change in meaning over time, the original meaning of a word is almost invariably different to its current accepted usage.[8]

Yet we continue to use these words so as to make ourselves intelligible to others, the only alternative being to invent an entirely new language all of our own which only we would be capable of understanding.

Unfortunately, however, too many racial theorists, Baker included, have insisted on creating entirely new racial terms of their own coinage, or sometimes entire new lexicons, which not only causes confusion among readers, but also leads them to underestimate the actual degree of substantive agreement between different authors, who, though they use different terms, often agree regarding both the identity of, and relationships between, the major racial groupings.[9]

Historical Focus

Another problem is the book’s excessive historical focus.

Judging the book by its contents page, one might imagine that Baker’s discussion of the history of racial thought is confined to the first section of the book, titled “The Historical Background” and comprising four chapters that total just over fifty pages.

However, Baker acknowledges in the opening page of his preface that:

“Throughout this book, what might be called the historical method has been adopted as a matter of deliberate policy” (p3).

Thus, in the remainder of the book, Baker continues to adopt an historical perspective, briefly charting the history behind the discovery of each concept, archaeological discovery, race difference or method of measuring race differences that he introduces.

In short, it seems that Baker is not content with writing about science; he wants to write history of science too.

A case in point is Chapter Eight, which, despite its title (“Some Evolutionary and Taxonomic Theories”), actually contains very little on modern taxonomic or evolutionary theory, or even what would pass for ‘modern’ when Baker wrote the book over forty years ago.

Instead, the greater part of the chapter is devoted to tracing the history of two theories that were, even at the time Baker was writing, already wholly obsolete and discredited (namely, recapitulation theory and orthogenesis).

Let me be clear: Baker himself certainly agrees that these theories are obsolete and discredited, as this is his conclusion at the end of the respective sections devoted to discussion of these theories in his chapter on “Evolutionary and Taxonomic Theories”.

However, this only raises the question as to why Baker chooses to devote so much space in this chapter to discussing these theories in the first place, given that both theories are discredited and also of only peripheral relevance to his primary subject-matter, namely the biology of race.

Anyone not interested in these topics, or in history of science more generally, is well advised to skip the majority of this chapter.

The Historical Background

Readers not interested in the history of science, and concerned only with contemporary state-of-the-art science (or at least the closest an author writing in 1974 can get to modern state-of-the-art science), may also be tempted to skip over the whole first section of the book, entitled, as I have said, “The Historical Background”, and comprising four chapters or, in total, just over fifty pages.

These days, when authoring a book on the biology of race, it seems to have become almost de rigueur to include an opening chapter, or chapters, tracing the history of race science, and especially its political misuse during nineteenth and early twentieth-centuries (e.g. under the Nazis).[10]

The usual reason for including these chapters is for the author or authors to thereby disassociate themselves from the earlier supposed misuse of race science for nefarious political purposes, and emphasize how their own approach is, of course, infinitely more scientific and objective than that of their sometimes less than illustrious intellectual forebears.

However, Baker’s discussion of “The Historical Background” is rather different, being refreshingly short on the disclaimers, moralistic grandstanding and benefit-of-hindsight condemnations that one usually finds in such potted histories.

Instead, Baker strives to give all views, howsoever provocative, a fair hearing in as objective and sober a tone as possible.[11]

Only Lothrop Stoddard, strangely, is dismissed altogether. The latter is, for Baker, an “obviously unimportant” thinker, whose book “contains nothing profound or genuinely original” (p58-9).

Yet this is perhaps unfair. Whatever the demerits of Stoddard’s racial taxonomy (“oversimplified to the point of crudity,” according to Baker: p58), Stoddard’s geopolitical and demographic predictions have proven prescient.[12]

Overall, Baker draws two general conclusions regarding the history of racial thought in the nineteenth and early twentieth century.

First, he observes how few of the racialist authors whom he discusses were anti-Semitic. Thus, Baker reports:

“Only one of the authors, Lapouge, strongly condemns the Jews. Treitschke is moderately anti-Jewish; Chamberlain, Grant and Stoddard mildly so; Gobineau is equivocal” (p59).

The rest of the authors whom he discusses evince, according to Baker, “little or no interest in the Jewish problem”, the only exception being Friedrich Nietzsche, who is “primarily an anti-egalitarian, but [who] did not proclaim the inequality of ethnic taxa”, and who, in his comments regarding the Jewish people, or at least those selectively quoted by Baker, is positively gushing in his praise.

In fact, however, Nietzsche’s views regarding the Jewish people are rather more complex than Baker allows, including as they do both critical comments and no few backhanded compliments, since he primarily blames the Jews for the invention of Christianity and of the slave morality that he sees as its legacy.

Indeed, anti-Semitism often goes hand-in-hand with philosemitism. Thus, both Nietzsche and Count de Gobineau indeed wrote passages that, at least when quoted in isolation, seem highly complementary regarding the Jewish people. However, it is well to bear in mind that Hitler did as well, the latter writing in Mein Kampf:

“The mightiest counterpart to the Aryan is represented by the Jew. In hardly any people in the world is the instinct of self-preservation developed more strongly than in the so-called ‘chosen’. Of this, the mere fact of the survival of this race may be considered the best proof” (Mein Kampf, Manheim translation).[13]

Thus, as a character from a Michel Houellebecq novel observes:

“All anti-Semites agree that the Jews have a certain superiority… If you read anti-Semitic literature, you’re struck by the fact that the Jew is considered to be more intelligent, more cunning, that he is credited with having singular financial talents – and, moreover, greater communal solidarity. Result: six million dead” (Platform: p113). 

Baker’s second general observation is similarly curious, namely that:

“None of the authors mentioned in these chapters claims superiority for the whole of the Europid race: it is only a subrace, or else a section of the Europid race not clearly defined in terms of physical anthropology, that is favoured” (p59).

In retrospect, this seems anomalous, especially given that the so-called Nordic race, on whose behalf racial supremacy was most often claimed, actually came relatively late to civilization, which began in the Middle East, North Africa and South Asia, arriving in Europe only with the Mediterranean civilizations of Greece and Rome, and in Northern Europe later still.

However, this focus on the alleged superiority of certain European subraces rather than Caucasians as a whole likely reflects the fact that, during the time period in which these works were written, European peoples and nations were largely in competition and conflict with other European peoples and nations.

Only in European overseas colonies were Europeans in contact and conflict with non-European races, and, even here, the main obstacle to imperial expansion was, not so much the opposition of the often primitive non-European races whom the Europeans sought to colonize, but rather that of rival colonizers from other European nations.

Therefore, it was the relative superiority of different European populations which was naturally of most concern to Europeans during this time period.

In contrast, the superiority of the Caucasian race as a whole was of comparatively little interest, if only because it was something that these writers already took very much for granted, and hence hardly worth wasting ink or typeface over.

The Rise of Racial Egalitarianism

There are two curious limitations that Baker imposes on his historical survey of racial thought. First, at the beginning of Chapter Three (‘From Gobineau to Houston Chamberlain’), he announces:

“The present chapter and the next [namely, those chapters dealing with the history of racial thinking from the mid-nineteenth century up until the early-twentieth century] differ from the two preceding ones… in their more limited scope. They are concerned only with the growth of ideas that favoured belief in the inequality of ethnic taxa or are supposed – rightly or wrongly – to have favoured this belief” (p33).

Given that I have already criticised ‘Race’ as overlong, and as having an excessive historical focus, I might be expected to welcome this restriction. However, Baker provides no rationale for this self-imposed restriction.

Certainly, it is rare, and enlightening, to read balanced, even sympathetic, accounts of the writings of such infamous racialist thinkers as Gobineau, Galton and Chamberlain, whose racial views are today usually dismissed as so preposterous as hardly to merit serious consideration. Moreover, in the current political climate, such material even acquires a certain ‘allure of the forbidden’.

However, thinkers championing racial egalitarianism have surely proven more influential, at least in the medium-term. Yet such enormously influential thinkers as Franz Boas and Ashley Montagu pass entirely unmentioned in Baker’s account.[14]

Moreover, the intellectual antecedents of Nazism have already been extensively explored by historians. In contrast, however, the rise of the dogma of racial equality has passed largely unexamined, perhaps because to examine its origins is to expose the weakness of its scientific basis and its fundamentally political origins.[15]

Yet the story of how the theory of racial equality was transformed from a maverick, minority opinion among scientists and laypeople alike into a sacrosanct contemporary dogma which a person, scientist or layperson, can question only at severe cost to their career, livelihood and reputation is surely one worth telling.

The second restriction that Baker imposes upon his history is that he concludes it, prematurely, in 1928. He justifies closing his survey in this year on the grounds that this date supposedly:

“Marks the close of the period in which both sides in the ethnic controversy were free to put forward their views, and authors who wished to do so could give objective accounts of the evidence pointing in each direction” (p61).

Yet this cannot be entirely true, for, if it were, then Baker’s own book could never have been published – unless, of course, Baker regards his own work as something other than an “objective account of the evidence pointing in each direction”, which seems doubtful.

Certainly, the influence of what is now called political correctness is to be deplored for its impact on science, university appointments, the allocation of research funds and the publishing industry. However, there has surely been no abrupt watershed, but rather a gradual closing of the western mind over time.

Thus, it is notable that other writers have cited dates a little later than that quoted by Baker, often coinciding with the defeat of Nazi Germany and exposure of the Nazi genocide, or sometimes the defeat of segregation in the American South.

Indeed, not only was this process gradual, it has also proceeded apace in the years since Baker’s ‘Race’ first came off the presses, such that today such a book would surely never have been published in the first place, certainly not by as prestigious a publisher as Oxford University Press (who, surely not uncoincidentally, soon gave up the copyright).[16]

Moreover, Baker is surely wrong to claim that it is impossible:

“To follow the general course of controversy on the ethnic problem, because, for the reason just stated [i.e. the inability of authors of both sides to publicise their views], there has been no general controversy on the subject” (p61).

On the contrary, the issue remains as incendiary as ever, with the bounds of acceptable opinion seemingly ever narrowing and each year a new face falling before the witch hunters of the contemporary racial inquisition.

Biology

Having dealt in his first section with what he calls “The Historical Background”, Baker next turns to what he calls “The Biological Background”. He begins by declaring, rightly, that:

“Racial problems cannot be understood by anyone whose interests and field of knowledge stop short at the limit of purely human affairs” (p3).

This is surely true, not just of race, but of all issues in human biology, psychology, sociology, anthropology and political science, as the recent rise of sociobiology and evolutionary psychology attests. Indeed, Baker even coins a memorable and quotable aphorism to this effect, when he declares:

“No one knows Man who knows only Man” (p65).

However, Baker sometimes takes this thinking rather too far, even for my biologically-inclined tastes.

Certainly, he is right to emphasise that differences among human populations are analogous to those found among other species. Thus, his discussion of racial differences among our primate cousins is of interest, but also somewhat out-of-date.[17]

However, his intricate and fully illustrated nine-page description of race differences among the different subspecies of crested newt stretched the patience of this reader (p101-109).

Are Humans a Single Species?

Whereas Baker’s seventh chapter (“The Meaning of Race”) discusses the race concept, the preceding two chapters deal with the taxonomic class immediately above that of race, namely ‘species’.

For sexually-reproducing organisms, ‘species’ is usually defined as the largest group of organisms capable of breeding with one another and producing fertile offspring in the wild.

However, as Baker explains, things are not quite so simple.

For one thing, over evolutionary time, one species transforms into another gradually, with no abrupt dividing line where one species suddenly becomes another (p69-72). Hence the famous paradox: ‘Which came first: the chicken or the egg?’

Moreover, in respect of extinct species, it is often impossible to know for certain whether two ostensible ‘species’ interbred with one another (p72-3). Therefore, in practice, the fossils of extinct organisms are assigned to either the same or different species on morphological criteria alone.

This leads Baker to distinguish different species concepts. These include:

  • “Species in the paleontological sense” (p72-3);
  • “Species in the morphological sense” (p69-72); and
  • “Species in the genetical sense”, i.e. as defined by the criterion of interfertility (p72-80).

On purely morphological criteria, Baker questions humanity’s status as a single species:

“Even typical Nordids and typical Alpinids, both regarded as subraces of a single race (subspecies), the Europid, are very much more different from one another in morphological characters—for instance in the shape of the skull—than many species of animals that never interbreed with one another in nature, though their territories overlap” (p97).

Thus, later on, Baker claims:

“Even a trained anatomist would take some time to sort out correctly a mixed collection of the skulls of Asiatic jackals (Canis aureus) and European red foxes (Vulpes vulpes), unless he had made a special study of the osteology of the Canidae; whereas even a little child, without any instruction whatever, could instantly separate the skulls of Eskimids from those of Lappids” (p427).

That morphological differences between human groups do indeed often exceed those between closely-related but non-interbreeding species of non-human animal has recently been quantitatively confirmed by Vincent Sarich and Frank Miele in their book, Race: The Reality of Human Differences (which I have reviewed here, here and here).

However, even if one defines ‘species’ strictly by the criterion of interfertility (i.e. in Baker’s terminology, “species in the genetical sense”) matters remain less clear than one might imagine.

For one thing, there is the phenomenon of ring species, such as the herring gull and lesser black-backed gull.

These two ostensible species (or subspecies), both found in the UK, do not interbreed with one another, but each does interbreed with intermediaries that, in turn, interbreed with the other, such that there is some indirect gene-flow between them. Interestingly, the species ranges of the different intermediaries form a literal ring around the Arctic, such that genes will travel around the Arctic before passing from lesser black-backed gull to herring gull or vice versa (p76-79).[18]

Indeed, even the ability to produce fertile offspring is a matter of degree. Thus, some pairings produce fertile offspring only rarely.

For example, often, Baker reports, “sterility affects [only] the heterogametic sex [i.e. the sex with two different sex chromosomes]” (p95). Thus, in mammals, sterility is more likely to affect male offspring. Indeed, this pattern is so common that it even has its own name, Haldane’s Rule, after the famous Marxist-biologist J.B.S. Haldane, who first noted it.

Other times, Baker suggests, interfertility may depend on the sex of the respective parents. For example, whereas ewes may sometimes successfully reproduce with he-goats, rams may be unable to successfully reproduce with she-goats (p95).[19]

Moreover, the fertility of offspring is itself a matter of degree. Thus, Baker reports, some hybrid offspring are not interfertile with one another, but can reproduce with one or other of the parental stocks. Elsewhere, the first generation of hybrids are interfertile but not subsequent generations (p94).

Indeed, though it was long thought impossible, it has recently been confirmed that, albeit only very rarely, even mules and hinnies can successfully reproduce, despite donkeys and horses, the two parental stocks, having, like goats and sheep, a different number of chromosomes (Rong et al 1985; Kay 2002).

Yet, as Darwin observed as far back as 1871 when himself discussing the question as to whether human races are to be regarded as belonging to entirely separate species:

“Even a slight degree of sterility between any two forms when first crossed, or in their offspring, is generally considered as a decisive test of their specific distinctness” (The Descent of Man).

Thus, Baker concludes:

“There is no proof that hybridity among human beings is invariably eugenesic, for many of the possible crosses have not been made, or if they have their outcome does not appear to have been recorded. It is probable on inductive grounds that such marriages would not be infertile, but it is questionable whether the hybridity would necessarily be eugenesic. For instance, statistical study might reveal a preponderance of female offspring” (p97-8).

However, any degree of infertility among human interracial couples is likely to be very slight. After all, interracial relationships are today increasingly common in Britain and America, and are not noticeably less fecund than other unions. On the contrary, the number of biracial people, the products of such relationships, is growing rapidly in both countries.

In practice, a very slight degree of reduced fertility among phenotypically distinct forms, such as might conceivably occur among human interracial couples, would be unlikely to cause biologists to assign the different forms to different species, not least because, in the absence of close study, the slight reduction in fertility would probably never be detected in the first place.

Is there then any evidence of reduced fertility among mixed-race couples? Not a great deal.

As noted above, interracial relationships are increasingly common, and the number of biracial people is growing rapidly in Britain and America.

On the other hand, possibly blood type incompatibility between mother and developing foetus might be more common in interracial unions due to racial variation in the prevalence of different blood groups.

Also, one study did find a greater prevalence of birth complications, more specifically caesarean deliveries, among Asian women birthing offspring fathered by white men (Nystrom et al 2008).

However, this is a simple reflection of the differences in physical size between whites and Asians, with smaller-framed Asian women having difficulty birthing larger half-white offspring. Thus, the same study also found that white women birthing offspring fathered by Asian men actually had lower rates of caesarean delivery than did women bearing offspring fathered by men of the same race as themselves (Stanford University Medical Center 2008).[20]

Indeed, one study from Iceland rather surprisingly found that the highest pregnancy rates were found among couples who were actually quite closely related to one another, namely equivalent to third- or fourth-cousins, with less closely related spouses enjoying reduced pregnancy rates (Helgason et al 2008; see also Labouriau & Amorim 2008).

On the other hand, David Reich, in Who We Are and How We Got Here, reports that, whereas there is evidence of selection against Neanderthal-derived genes in the human genome (the legacy of ancient hybridization between anatomically modern humans and Neanderthals), owing to the deleterious effects of these genes, there is no evidence of selection against European genes (or African genes) among African-Americans, a racially-mixed population:

“In African Americans, in studies of about thirty thousand people, we have found no evidence for natural selection against African or European ancestry” (Who We Are and How We Got Here: p48; Bhatia et al 2014).

This lack of selection against either European-derived (or African-derived) genes in African-Americans suggests that discordant genes did not result in reduced fitness among African-Americans.[21] 

Humans – A Domesticated Species?

A final complication in defining species is that some species of nonhuman animal, widely recognised as separate species because they do not interbreed in the wild, have nevertheless been known to interbreed successfully in captivity.

A famous example is that of lions and tigers. While they have never been known to interbreed in the wild, if only because they rarely if ever encounter one another, they have interbred in captivity, producing hybrid offspring in the form of so-called ligers and tigons.

This is, for Baker, of especial relevance to the question of human races since, according to Baker, we ourselves are a domesticated species. Thus, he approvingly quotes Blumenbach’s claim that:

Man is “of all living beings the most domesticated” (p95).

Thus, with regard to the question of whether humans represent a single species, Baker reaches the following controversial conclusion:

“The facts of human hybridity do not prove that all human races are to be regarded as belonging to a single ‘species’. The whole idea of species is vague because the word is used with such different meanings, none of which is of universal application. When it is used in the genetical sense [i.e. the criterion of interfertility] some significance can be attached to it, in so far as it applies to animals existing in natural conditions… but it does not appear to be applicable to human beings, who live under the most extreme conditions of domestication” (p98).

Thus, Baker goes so far as to question whether:

“Any two kinds of animals, differing from one another so markedly in morphological characters (and in odour) as, for instance, the Europid and Sanid…, and living under natural conditions, would accept one another as sexual partners” (p97).

Certainly, in our ‘natural environment’ (what evolutionary psychologists call the environment of evolutionary adaptedness, or EEA), many human races would never have interbred, if only for the simple reason that they would never have come into contact with one another.

On the contrary, they were separated from one another by the very geographic obstacles (oceans, deserts, mountain-ranges) that reproductively isolated them from one another and hence permitted their evolution into distinct races.

Thus, Northern Europeans surely never mated with sub-Saharan Africans for the simple reason that the former were confined to Northern Europe and surrounding areas while the latter were largely confined to sub-Saharan Africa, such that the two groups were unlikely ever to have interacted.

Only with the invention of technologies facilitating long-distance travel (e.g. ocean-going ships, aeroplanes) would this change.

However, if Northern Europeans never interbred with sub-Saharan Africans directly, both groups surely did interbreed with their immediate neighbours, who, in turn, interbred with their own neighbours, such that genes may ultimately have passed indirectly between the two groups. Even the Sahara Desert, formerly regarded as the boundary between what were then called the Caucasoid and Negroid races, was far from a complete barrier to gene flow, even in ancient times.

Indeed, there may even have been gene flow between Eurasia and the Americas at the Bering Strait. Perhaps only Australian Aboriginals were completely reproductively isolated for millennia.

There may therefore have been some indirect gene flow even between such distantly related populations as Northern Europeans and sub-Saharan Africans, even if no Nordic European ever encountered, let alone mated with, a black African. This, together with the continuous clinal nature of racial differentiation across the world that resulted from this interbreeding, was the key point emphasized by Darwin in The Descent of Man in support of his conclusion that all human races ought indeed to be considered a single species.

Moreover, Baker’s assertion that modern humans are a domesticated species, although a fashionable viewpoint today, is questionable.

Whether humans can indeed be said to be domesticated depends on how one defines domesticated. If we are domesticated, then humans are surely unique in having domesticated ourselves (or at least one another).[22]

Defining Race

Ultimately then, the question of whether all human races belong to a single species is a purely semantic dispute. It depends on how one defines the word ‘species’.

Likewise, whether human races can be said to exist ultimately depends on one’s definition of the word ‘race’.

Using the word ‘race’ interchangeably with that of ‘subspecies’, Baker provides no succinct definition. Instead, he simply explains:

“If two populations [within a species] are so distinct that one can generally tell from which region a specimen was obtained, it is usual to give separate names to the two races” (p99).

Neither does he provide a neat definition of any particular race. On the contrary, he is explicit in emphasizing:

“The definition of any particular race must be inductive in the sense that it gives a general impression of the distinctive characters, without professing to be applicable in detail to every individual” (p99).

Is Race Real?

At the conclusion of his chapter on “Hybridity and The Species Question”, Baker seems to reach what was, even in 1974, an incendiary conclusion – namely that, whether using morphological criteria or the criterion of interfertility, it is not possible to conclusively prove that all extant human populations belong to a single species (see above).

Nevertheless, in the remainder of the book, Baker proceeds on the assumption that differences among human groups are indeed subspecific (i.e. racial) in nature and that we do indeed form a single species.

Indeed, Baker criticises the notion that the existence of persons of mixed racial ancestry, and of clinal variation between races, disproves the existence of human races, observing that, if races did not interbreed with one another, then they would be, not merely different races, but entirely separate species, according to the usual definition of this term. Thus, Baker explains:

“Subraces and even races sometimes hybridise where they meet, but this almost goes without saying: for if sexual revulsion against intersubracial or interracial marriages were complete, one set of genes would have no chance of intermingling with the other, and the ethnic taxa would be species by the commonly accepted definition. It cannot be too strongly stressed that intersubracial and interracial hybridization is so far from indicating the unreality of subraces and races, that it is actually a sine qua non of the reality of these ethnic taxa” (p12).

This, Baker argues, is because:

“It is the fact that intermediaries do occur that defines the race” (p99).

Thus, in nonhuman species among whom subspecies are recognized, there usually exist similar hybrid or intermediary populations around the boundaries of each distinct subspecies. Indeed, this phenomenon is so recurrent that there is even a biological term for it, namely ‘intergradation’.

Yet this does not cause biologists to conclude that the subspecies in question either do not exist or that their boundaries are somehow arbitrarily delineated and artificial, let alone that subspecies is a biologically meaningless term.

Some people seem to think that, since races tend to blend into one another and hence have blurred boundaries (i.e. what biologists refer to as clinal variation), they do not really exist. Yet Baker objects:

“In other matters, no one questions the reality of categories between which intermediaries exist. There is every gradation, for instance, between green and blue, but no one denies these words should be used” (p100).

However, this is perhaps an unfortunate example, since, as psychologists and physicists agree, colours, as such, do not exist.

Instead, the spectrum of light varies continuously. Distinct colours are imposed on this continuous variation only by the human brain and visual system.[23]

Using colour as an analogy for race is also potentially confusing because colour is already often conflated with race. Thus, races are referred to by their ostensible colours (e.g. blacks, whites, browns etc.) and the very word ‘colour’ is sometimes even used as a synonym, or perhaps euphemism, for race, even though, as Baker is at pains to emphasize, races differ in far more than skin colour.

Using colour as an analogy for race differences is only likely to exacerbate this confusion.

Yet Baker’s other examples are similarly problematic. Thus, he writes:

“The existence of youths and human hermaphrodites does not cause anyone to disallow the use of the words, ‘boy’, ‘man’ and ‘woman’” (p100).

However, hermaphrodites, unlike racial intermediaries, are extremely rare. Meanwhile, words such as ‘boy’ and ‘youth’ are colloquial terms, not really scientific ones. As anthropologist John Relethford observes:

“We tend to use crude labels in everyday life with the realization that they are fuzzy and subjective. I doubt anyone thinks that terms such as ‘short’, ‘medium’ and ‘tall’ refer to discrete groups, or that humanity only comes in three values of height” (Relethford 2009: p21).

In short, we often resort to vague and impressionistic language in everyday conversation. However, for scientific purposes, we must surely try, wherever possible, to be more precise.

Rather than alluding to colour terms or hermaphrodites, perhaps a better counterexample, if only because it is certain to provoke annoyance, cognitive dissonance and doublethink among leftist race-denying sociologists, is that of social class. Thus, as biosocial criminologist Anthony Walsh demands:

“Is social class… a useless concept because of its cline-like tendency to merge smoothly from case to case across the distribution, or because its discrete categories are determined by researchers according to their research purposes and are definitely not ‘pure’?” (Race and Crime: A Biosocial Analysis: p6).

Yet the same leftist social scientists who insist the race concept is an unscientific social construction nevertheless continue to employ the concept of social class almost as if it were entirely unproblematic.

However, the objection that races do not exist because races are not discrete categories, but rather have blurred boundaries, is not entirely fallacious.

After all, sometimes intermediaries can be so common that they can no longer be said to be intermediaries at all and all that can be said to exist is continuous clinal variation, such that wherever one chose to draw the boundary between one race and another would be entirely arbitrary.

With increased migration and intermarriage, we may fast be approaching this point.[24]

However, just because the boundaries between racial groups are blurred, this does not mean that the differences between them, whether physiological or psychological, do not exist. To assume otherwise would represent a version of the continuum fallacy or sorites paradox, also sometimes called the fallacy of the heap or fallacy of the beard.

Thus, even if races do not exist, race differences still surely do – and, just as skin colour varies on a continuous, clinal basis, so might average IQ, brain-size and personality!

Anticipating Jared Diamond

Remarkably, Baker even manages to anticipate certain erroneous objections to the race concept that had not, to my knowledge, even been formulated at the time of his writing, perhaps because they are so obviously fallacious to anyone without an a priori political commitment to denying the validity of the race concept.

In particular, Jared Diamond (1994), in an influential and much-cited paper, argues that racial categories are meaningless because, rather than being classified by skin colour, races could just as easily be grouped on the basis of traits such as the prevalence of genes for sickle-cell or lactose tolerance, which would lead us to adopting very different classifications.

Actually, Baker argues, the importance of colour for racial classification has been exaggerated.

“In the classification of animals, zoologists lay little emphasis on differences of colour… They pay far more attention to differences in grosser structure” (p159).

Indeed, he quotes no lesser authority than Darwin himself as observing:

“Colour is generally esteemed by the systematic naturalist as unimportant” (p148).

[Image: A Negro albino – proof that race is more than ‘skin deep’]

Certainly, he is at pains to emphasise that, among humans, differences between racial groups go far beyond skin colour. Indeed, he observes, one has only to look at an African albino to realize as much:

“An albino… Negrid who is fairer than any non-albino European, [yet] appears even more unlike a European than a normal… Negrid” (p160).

Likewise, some populations from the Indian subcontinent are very dark in skin tone, yet they are, according to Baker, predominantly Caucasoid (p160), as, he claims, is the Aethiopid subrace of the Horn of Africa (p225).[25]

Thus, Baker laments how:

“An Indian, who may show close resemblance to many Europeans in every structural feature of his body, and whose ancestors established a civilization long before the inhabitants of the British Isles did so, is grouped as ‘coloured’ with persons who are very different morphologically from any European or Indian, and whose ancestors never developed a civilization” (p160).

Yet, in contrast, of the San Bushmen of Southern Africa, he remarks:

“The skin is only slightly darker than that of the Mediterranids of Southern Europe and paler than that of many Europids whose ancestral home is in Asia or Africa” (p307).

But no one would mistake them for Caucasoid.

What then of the traits, namely the prevalence of the sickle-cell gene or of lactose tolerance, that would, according to Diamond, produce very different taxonomies?

For Baker, these are what he calls “secondary characters” that cannot be used for the purposes of racial classification because they are not present among all members of any group, but differ only in their relative prevalence (p186).

Moreover, he observes, the sickle-cell gene is likely to have “arisen independently in more than one place” (p189). It is therefore evidence, not of common ancestry, but of convergent evolution, or what Baker refers to as “independent mutation” (p189).

It is therefore irrelevant from the perspective of cladistic taxonomy, whereby organisms are grouped, not on the basis of shared traits as such, but rather of shared ancestry. From the perspective of cladistic taxonomy, shared traits are relevant only to the extent they are (interpreted as) evidence of shared ancestry.

The same is true for lactose tolerance, which seems to have evolved independently in different populations in concert with the development of dairy farming, in a form of gene-culture co-evolution.

Indeed, lactose tolerance appears to have evolved through somewhat different genetic mechanisms (i.e. mutations in different genes) in different populations, seemingly a conclusive demonstration that it evolved independently in these different lineages (Tishkoff et al 2007).

As Baker warns:

“One must always be on the lookout for the possibility of independent mutation wherever two apparently unrelated taxa resemble one another by the fact that some individuals in both groups reveal the presence of the same gene” (p189).

In evolutionary biology, this is referred to as distinguishing analogy from homology.

Thus, for example, authors Vincent Sarich and Frank Miele, in their book Race: The Reality of Human Differences (which I have reviewed here) observe:

“There are two groups of people [i.e. races] with the combination of dark skin and frizzy hair—sub-Saharan Africans and Melanesians. The latter have often been called ‘Oceanic Negroes’, implying a special relationship with Africans. The blood-group data, however, show that they are about as different from Africans as they could be” (Race: The Reality of Human Differences: p134).

But Diamond’s proposed classification is even more preposterous than such pre-Darwinian, non-cladistic taxonomic schemes, since he proposes to classify races on the basis of a single trait in isolation, the trait in question (either lactose tolerance or the sickle-cell gene) being chosen either arbitrarily or, more likely, precisely to illustrate the point that Diamond is attempting to make.

Yet even pre-Darwinian taxonomies proposed to classify species, not on the basis of a single trait, but rather on the basis of a whole suite of traits that intercorrelate.

In short, Diamond proposes to classify races on the basis of a single character that has evolved independently in distantly related populations, instead of a whole suite of inter-correlated traits indicative of common ancestry.

Interestingly, a similar error may underlie an even more frequently cited paper by Marxist-geneticist Richard Lewontin, which argued that the vast majority of genetic variation is within-group rather than between-group – since Lewontin, like Diamond, also relied on ‘secondary characters’ such as blood-groups to derive his estimates (Lewontin 1972).[26]

The reason for the recurrence of this error, Baker explains, is that:

“Each of the differences that enable one to distinguish all the most typical individuals of any one taxon from those of another is due, as a general rule, to the action of polygenes, that is to say, to the action of numerous genes, having small cumulative effects” (p190).

Yet, unlike traits resulting from a few alleles, polygenes are not amenable to simple Mendelian analysis.

Therefore, this leads to the “unfortunate paradox” whereby:

“The better the evidence of relationship or distinction between ethnic taxa, the less susceptible are the facts to genetic analysis” (p190).

As a consequence, Baker laments:

“Attention is focussed today on those ‘secondary differences’… that can be studied singly and occur in most ethnic taxa, though in different proportions in different taxa… The study of these genes… has naturally led, from its very nature, to a tendency to minimise or even disregard the extent to which the ethnic taxa of man do actually differ from one another” (p534).

Finally, Baker even provides a reductio ad absurdum of Diamond’s approach, observing:

“From the perspective of taste-deficiency the Europids are much closer to the chimpanzee than to the Sinids and Paiwan people; yet no one would claim that this resemblance gives a true representation of relationship” (p188).

However, applying the logic of Diamond’s article, we would be perfectly justified in using this similarity in taste-deficiency to classify Caucasians as a sub-species of chimpanzee!

Subraces

The third section of Baker’s book, “Studies of Selected Human Groups”, focusses on the traditional subject-matter of physical anthropology – i.e. morphological differences between human groups.[27]

Baker describes the physiological differences between races in painstaking technical detail. These parts of the book make for an especially difficult read, as Baker carefully elucidates both how anthropologists measure morphological differences and the nature and extent of the various physiological differences between races that these methods reveal.

Curiously, although many of his measures are quantitative in nature, Baker rarely discusses whether differences are statistically significant.[28] Yet without statistical analysis, all of Baker’s reports of quantitative measurements of differences in the shapes and sizes of the skulls and body parts of people of different races represent little more than subjective impressions.

This is especially problematic in his discussion of so-called ‘subraces’ (subdivisions within the major continental races, such as the Nordic and Mediterranean races, both supposed subdivisions within the Caucasoid race), where differences could easily be dismissed as, if not wholly illusory, then at least as clinal in nature and as not always breeding true.

Yet nowhere in his defence of the reality of subracial differences does Baker cite statistics. Instead, his argument is wholly subjective and qualitative in nature:

“In many parts of the world where there have not been any large movements of population over a long period, the reality of subraces is evident enough” (p211).

One suspects that, given increased geographic mobility, those parts of the world are now reduced in number.

Thus, even if subracial differences were once real, with increased migration and intermarriage, they are fast disappearing, at least within Europe.

Is the ‘White Race’ a Social Construct?

One other interesting observation may be made with regard to Baker’s proposed racial taxonomy. Save when quoting from earlier authors who did use these terms, Baker himself never once refers to white people or ‘the white race’.

Not only does he, as we have seen, reject the use of colour for the purposes of racial classification, he also does not seem to recognize white people as constituting a useful racial category in the first place. Thus, not only do the terms ‘white people’ or ‘the white race’ receive no mention in his racial taxonomy, either as a race or a subrace, neither is any synonym covering roughly the same set of people included (p624-5).

Of course, Baker’s Europid race might appear, from its name, to cover much the same ground, since the ancestral homelands of those today classed as white are roughly coextensive with the geographical boundaries of Europe.

In fact, however, its meaning is much broader. Baker uses the word Europid to refer to what earlier anthropologists more typically called the Caucasian race, and, as he is himself at pains to emphasize, the indigenous inhabitants of North Africa, the Middle East and, at least according to Baker, even South Asia are all classified as Caucasoid/Europid (p160). Baker even argues that those he terms the Aethiopids of the Horn of Africa are also predominantly Caucasoid/Europid (p225).

While indigenous Europeans are grouped together with North Africans, South Asians and Arabs as Europid, they are also subdivided among themselves into such supposed subraces as Nordid, Mediterranid, Osteuropid, Dinarid and Alpinid. Yet none of these terms is equivalent to what we today habitually call white people, and the indigenous homelands of at least some of these subraces, notably the Mediterranid, extend outside of the European continent into North Africa and the Middle East, and include some peoples whom we would today hesitate to call white, who are unlikely to themselves identify as such, and who would certainly not be recognized as white by most white racialists.

This conclusion seems to have been shared by most other early- to mid-twentieth-century physical anthropologists. For example, Carleton Coon, the once-celebrated mid-twentieth-century American physical anthropologist, in his book The Races of Europe, contended that:

“The Mediterranean racial zone stretches unbroken from Spain across the Straits of Gibraltar to Morocco, and thence eastward to India. A branch of it extends far southward on both sides of the Red Sea into southern Arabia, the Ethiopian highlands, and the Horn of Africa” (The Races of Europe: p401).

Unlike Baker, Coon does indeed use the phrase ‘the white race’, and indeed regards his 1939 book as a study of this race. However, he clearly intends this phrase to carry a rather broader meaning than that with which it is usually invested today, since he regards, for example, even the Gallas, the Somalis, the Ethiopians, and the inhabitants of Eritrea as all being “white or near white”, a view that would hardly endear him to most contemporary white racists (The Races of Europe: p445).

Thus, while he would certainly reject the idea that race is a mere social construct as preposterous, I suspect that Baker, along with other early twentieth-century racial anthropologists, might actually agree with the race deniers that the concept of a white race, at least as it is defined and demarcated in the Anglosphere today, is indeed an artificial construct with little biological validity, one which owes more to geographical and even religious factors (i.e. the traditional boundary between Christendom and the Islamic world) than it does to measurable phenotypic, or, for that matter, genetic, differences.

In contrast, although the politically correct orthodoxy holds that terms such as ‘Caucasian’ or ‘Caucasoid’ (or, to use Baker’s preferred term, ‘Europid’) reflect a scientifically obsolete and discredited basis for racial classification, this racial category actually seems to have been broadly corroborated by modern studies in population genetics.

Thus, geneticist David Reich, in his 2018 book, Who We Are and How We Got Here, reports:

“Today, the peoples of West Eurasia—the vast region spanning Europe, the Near East, and much of central Asia—are genetically highly similar. The physical similarity of West Eurasian populations was recognized in the eighteenth century by scholars who classified the people of West Eurasia as ‘Caucasoids’… The whole-genome data at first seem to validate some of the old categories… Populations within West Eurasia are typically around seven times more similar to one another than West Eurasians are to East Asians. When frequencies of mutations are plotted on a map, West Eurasia appears homogeneous, from the Atlantic façade of Europe to the steppes of central Asia. There is a sharp gradient of change in central Asia before another region of homogeneity is reached in East Asia” (Who We Are and How We Got Here: p93).[29]

This is probably because the term ‘Caucasoid’ was hardly an arbitrary invention of eighteenth- and nineteenth-century racists, but rather reflected, not only real phenotypic resemblance among populations, but also geographic factors, the indigenous homelands of the ostensible race being circumscribed by relatively impassable geographic obstacles – such as the Sahara Desert, Himalayas, Siberia and Atlantic Ocean – which represented barriers to human movement and hence gene flow throughout much of human history and prehistory.

In contrast, the ostensible boundaries of the indigenous homelands of the so-called ‘white race’ are, at least today, usually equated with the boundaries of the European continent. But, whereas the Sahara, Himalayas, Siberia and Atlantic were long barriers to gene-flow, at least some of the boundaries of the European continent – namely the Mediterranean Sea, Strait of Gibraltar and Turkish Straits – were long hubs of trade, migration, population movement and conquest. It is thus unsurprising that populations on either side of these boundaries, far from being racially distinct, resemble one another both phenotypically and genetically.

Studies of Selected Human Groups

This third section of the book focuses on certain specific selected human populations. These are presumably chosen because Baker feels that they are representative of certain important elements of human evolution or racial divergence, or because they are otherwise of particular interest.

Unfortunately, Baker’s choice of groups upon which to focus seems rather arbitrary, and he never explains why these groups were chosen ahead of others.

In particular, it is notable that Baker focuses primarily on populations from Europe and Africa. East Asians (i.e. Mongoloids), curiously, are entirely unrepresented.

The Jews

After a couple of introductory chapters, and one chapter focussing on “Europids” (i.e. Caucasians) as a whole, Baker’s next chapter discusses Jewish people.

In the opening paragraphs, he observes that:

“In any serious study of the superiority or inferiority of particular groups of people one cannot fail to take note of the altogether outstanding contributions made to intellectual and artistic life, and to the world of commerce and finance, generation after generation by persons to whom the name of Jews is attached” (p232).

However, having taken due “note” of this, and hence followed his own advice, he says almost nothing further on the matter, either in this chapter or in those later chapters that deal specifically with the question of racial superiority (see below).

Instead, Baker first focuses on justifying the inclusion of Jews in a book about race, and hence arguing against the politically-correct notion that Jews are not a race, but rather mere practitioners of a religion.[30] Baker gives short-shrift to this notion:

“There is no close resemblance between Judaism in the religious sense and a proselytizing religion such as the Roman Catholic” (p326).

In other words, Baker seems to be saying, because Judaism is not a religion that actively seeks out converts (but rather one that, if anything, discourages conversion), Jews have retained an ethnic character distinct from the host populations alongside whom they reside, without having their racial traits diluted by the incorporation of large numbers of converts of non-Jewish ancestry.

Yet, actually, even proselytizing religions like Christianity and Islam, which do actively seek to convert nonbelievers, often come to take on an ethnic character, since, despite the possibility of conversion, offspring usually inherit (i.e. are indoctrinated in) the faith of their parents, apostates are persecuted, conversion remains, in practice, rare, and people are admonished to marry within the faith.

Thus, in polities beset by ethnic conflict, like Northern Ireland, Lebanon or the former Yugoslavia, religions often come to represent markers of ethnicity, or even something akin to ethnicities in and of themselves – i.e. reproductively-isolated, endogamous breeding populations.

Having concluded, then, that there is a racial as well as a religious component to Jewish identity, Baker nevertheless stops short of declaring the Jews a race or even what he calls a subrace.

Dismissing the now discredited Khazar hypothesis in a sentence,[31] Baker instead classes the bulk of the world’s Jewish population (i.e. the Ashkenazim) as merely part of the “Armenid subrace” of the Europid race, with some “Orientalid” (i.e. Arab) admixture (p242).[32]

Thus, Baker claims:

“Persons of Ashkenazic stock can generally be recognised by certain physical characters that distinguish them from other Europeans” (p238).

[Image: Baker’s delightfully offensive illustration of Jewish nose shape, taken from Jacobs (1886)]

These include a short but wide skull and a nose that is “large in all dimensions” (p239), the characteristic shape of which Baker even purports to illustrate with a delightfully offensive diagram (p241).[33]

Baker claims that Sephardic Jews, the other main subgroup of European Jews, are likewise “distinguishable from the Ashkenazim by physical characters”, being slenderer in build, with straighter hair, narrower noses and differently shaped skulls, approximating more to the Mediterranean racial type (p245-6).

But, if Sephardim and Ashkenazim are indeed “distinguishable” or “recognisable” by “physical characters”, either from one another or from other European Gentiles, as Baker claims, then with what degree of accuracy is he claiming such distinctions can be made? Surely far less than 100%.[34]

Moreover, are the alleged physiological differences that Baker posits between Ashkenazi, Sephardi, and other Europeans based on recorded quantitative measurements, and, if so, are the differences in question statistically significant? On this, Baker says nothing.

The Celts

The next chapter concerns the Celts, a term surrounded by so much confusion, and used in so many different senses – racial, cultural, ethnic, territorial and linguistic (p183) – that some historians have argued that it should be abandoned altogether.

Baker, himself British, is keen to dispel the notion that the indigenous populations of the British Isles were, at the time of the Roman invasion, a primitive people, and is very much an admirer of their artwork.

Thus, Baker writes that:

“Caesar… nowhere states that any of the Britons were savage (immanis), nor does he speak specifically of their ignorance (ignorantia), though he does twice mention their indiscretion (imprudentia) in parleying” (p263).

Of course, Caesar, though hardly unbiased in this respect, did regard the indigenous Britons as less civilized than the Romans themselves. However, I suppose that barbarism is, like civilization (see below), a matter of degree.

Regarding the racial characteristics of those inhabitants of pre-Roman Britain who are today called Celts, Baker classifies them as Nordic, writing:

“Their skulls scarcely differ from those of the Anglo-Saxons who subsequently dominated them, except in one particular character, namely, that the skull is slightly (but significantly) lower in the Iron Age man than in the Anglo-Saxon” (p257).[35]

Thus, dismissing the politically-correct notion that the English were, in the words of another author, a “true multiracial society”, Baker claims:

“[The] Angles, Saxons, Jutes, Normans, Belgics and… Celts… were not only of one race (Europid) but of one subrace (Nordid).” (p267).

Citing remains found in an ancient cemetery in Berkshire supposedly containing the skeletons of Anglo-Saxon males but indigenous British females and hybrid offspring, he concludes that, rather than extermination, a process of intermarriage and assimilation occurred (p266). This is a conclusion largely corroborated by recent population genetic studies.

However, the indigenous pre-Celtic inhabitants of the British Isles were, Baker concludes, less Nordic than Mediterranid in phenotype.[36]

Such influences remain, Baker claims, in the further reaches of Wales and Ireland, as evidenced by the distribution of blood groups and of hair colour.

Thus, whereas the Celtic fringe is usually associated with red, auburn or ginger hair, Baker instead emphasizes the greater prevalence of dark hair among the Irish and Welsh:

“The tendency towards the possession of dark hair was much more marked in Wales than in England, and still more marked in the western districts of Ireland” (p265).[37]

This conclusion is based upon the observations of the nineteenth-century English ethnologist John Beddoe, who travelled the British Isles recording the distribution of different hair and eye colours, reporting his findings in The Races of Britain, first published in 1885 and still, to my knowledge, the only large body of data on the distribution of hair and eye colour in the British Isles.

On this basis, Baker therefore concludes that:

“The modern population of Great Britain probably derives mainly from the [insular] ‘Celts’… and Belgae, though a more ancient [i.e. Mediterranean] stock has left its mark rather clearly in certain parts of the country, and the Anglo-Saxons and other northerners made an additional Nordid contribution later on” (p269).

Yet recent population genetic studies suggest that even the so-called Celts, like the later Anglo-Saxons, Normans and Vikings, actually had only a quite minimal impact on the ancestry of the indigenous peoples of the British Isles.[38]

This, of course, further falsifies the politically correct but absurd notion that the British are a ‘nation of immigrants’ – a phrase that is itself a recent immigrant from America, in respect of whose population the claim surely has more plausibility.

The Celts, moreover, likely arrived in the British Isles from continental Europe by the same route as the later Anglo-Saxons and Normans – i.e. across the English Channel (or perhaps the south-west corner of the North Sea), by way of Southern England. This is, after all, by far the easiest, most obvious and most direct route.[39]

This leads Baker to conclude that the Celts, like the Anglo-Saxons after them, imposed their language on, but had little genetic impact on, the inhabitants of those parts of the British Isles furthest from this point of initial disembarkation (i.e. Scotland, Ireland, Wales). Thus, Baker concludes:

“The Iron Age invaders transmitted the dialects of their Celtic language to the more ancient Britons whom they found in possession of the land [and] pushed back these less advanced peoples towards the west and north as they spread” (p264).

But these latter peoples, though adopting the Celtic tongue, were not themselves (primarily) descendants of the Celtic invaders. This leads Baker to conclude, following what he takes to also be the conclusion of Carleton Coon in the latter’s book The Races of Europe, that:

“It is these people, the least Celtic—in the ethnic sense—of all the inhabitants of Great Britain, that have clung most obstinately to the language that their conquerors first taught them two thousand years ago” (p269).

In other words, in a racial and genetic, if not a linguistic, sense, the English are actually more Celtic than are the self-styled Celtic Nations of Scotland, Ireland and Wales!

Australian Aboriginals – a “Primitive” Race?

The next chapter is concerned with Australian Aboriginals, or, as Baker classes them, “Australids”.

In this chapter Baker is primarily concerned with arguing that Aboriginals are morphologically primitive.

Of course, the indigenous inhabitants of what is now Australia were, when Europeans first made contact with them, notoriously backward in terms of their technology and material culture.

For example, Australian Aboriginals are said to be the only indigenous people never to have developed the bow and arrow; while the neighbouring, and related, indigenous people of Tasmania, isolated from the Australian mainland by rising sea levels at the end of the last ice age but usually classed as of the same race, are said, arguably, to have lacked even the ability to make fire.

However, this is not what Baker means in referring to Aboriginals as retaining many “primitive” traits. Indeed, unlike in his later chapters on black Africans, Baker says nothing regarding the technology or material culture of indigenous Australians.

Instead, he talks exclusively about their morphology. In referring to them as retaining “primitive” characters, Baker is therefore using the word in the specialist phylogenetic sense. Thus, he argues that Australian Aboriginals:

“Retain… physical characters that were possessed by remote ancestors but have been lost in the course of evolution by most members of the taxa that are related to it” (p272-3).

In other words, they retain traits characteristic of an earlier state of human evolution which have since been lost in other extant races.

Baker purports to identify twenty-eight such “primitive” characters in Australian Aboriginals. These include prognathism (p281), large teeth (p289), broad noses (p282), and large brow ridges (p280).

Baker acknowledges that all extant races retain some primitive characters that have been lost in other races (p302). For example, unlike most other races (but not Aboriginals), Caucasoids retain scalp hair characteristic of early hominids and indeed other extant primates (p297).

However, Baker concludes:

“The Australids are exceptional in the number and variety of their primitive characters and in the degree to which some of them are manifested” (p302).

Relatedly, Nicholas Wade observes that, whereas there is a general trend towards lighter and less robust bones and skulls over the course of human evolution, something referred to as gracilization, two populations at “the extremities of the human diaspora” seem to have been exempt, or isolated, from this process, namely Aboriginals and the “Fuegians at the tip of South America” (A Troublesome Inheritance: p167-8).[40]

Of course, to be morphologically ‘primitive’ in this specialist phylogenetic sense entails none of the pejorative imputations often associated with the word ‘primitive’ in everyday usage.

However, some phylogenetically primitive traits may indeed be linked to the ‘primitive’ technology of indigenous Aboriginals at the time of first contact with Europeans.

For example, tooth size decreased over the course of human evolution as humans invented technologies (e.g. cooking, tools for cutting) that made large teeth unnecessary. As science writer Marek Kohn puts it:

“As the brain expanded in the course of becoming human, the teeth became smaller. Hominids lost their built-in weapons, but developed the possibility of building their own, all the way to the Bomb” (The Race Gallery: p63).

Indeed, Darwin himself observed, in The Descent of Man, that:

“The early male forefathers of man were, as previously stated, probably furnished with great canine teeth; but as they gradually acquired the habit of using stones, clubs, or other weapons, for fighting with their enemies or rivals, they would use their jaws and teeth less and less. In this case, the jaws, together with the teeth, would become reduced in size” (The Descent of Man).

Therefore, it is possible, Kohn provocatively contends, that:

“Aborigines have a biological adaptation to compensate for the primitiveness of their material culture… Teeth get smaller, the argument runs, when technology becomes more advanced” (The Race Gallery: p72-3).

On this view, the relatively large size of Aboriginal teeth could be associated with the primitive state of their technology.

Another phylogenetically primitive Aboriginal trait, and one that, rather more obviously, implies lesser intelligence, is their relatively smaller brain size.

Indeed, Philippe Rushton posits a direct tradeoff between brain-size and the size of the jaw and teeth, arguing in Race, Evolution and Behavior (which I have reviewed here, here and here) that: 

“As brain tissue expanded it did so at the expense of the temporalis muscles, which close the jaw. Since smaller temporalis muscles cannot close as large a jaw, jaw size was reduced. Consequently, there is less room for teeth” (Race, Evolution and Behavior: Preface to Third Edition: p20-1).

Thus, leading mid-twentieth century American physical anthropologist and racialist Carleton Coon reports:

“The critical differences between [“the ancestors of our living races”] and us lie mostly in brain size versus jaw size – the balance between thinking thoughts and eating foods of various degrees of fineness” (Racial Adaptations: p113).

Thus, Aboriginals have, on average, Baker reports, not only larger jaws and teeth, but also smaller brains than those of Caucasians, weighing only about 85% as much (p292). The smaller average brain-size of Aboriginals is confirmed by more recent data (Beals et al 1984).

Baker also reviews some suggestive evidence regarding the internal structure of Aboriginal brains, as compared to that of Europeans, notably in the relative positioning of the lunate sulcus, again suggesting similarities with the brains of non-human primates.

In this sense, then, Australian Aboriginals’ ‘primitive’ brains may indeed be linked to the primitive state, in the more familiar sense of the word, of their technology and culture.

San Bushmen and Paedomorphy

Whereas Australian Aboriginals are morphologically “primitive” (i.e. retain characters of early hominids), the San Bushmen of Southern Africa (“Sanids”), together with the related Khoi (collectively Khoisan, or, in racial terms, Capoid) are, Baker contends, paedomorphic.

[Image: Bushmen’s paedomorphic penes]

By this, Baker means that the San people retain into adulthood traits that are, in other taxa, restricted to infants or juveniles, a phenomenon more often referred to as ‘neoteny’.[41]

One example of this supposed paedomorphy is provided by the genitalia of the Sanid males:

“The penis, when not erect, maintains an almost horizontal position… This feature is scarcely ever omitted in the rock art of the Bushmen, in their stylized representations of their own people. The prepuce is very long; it covers the glans completely and projects forward to a point. The scrotum is drawn up close to the root of the penis, giving the appearance that only one testis has descended, and that incompletely” (p319).[42]

Humans in general are known to be neotenous in many of our distinctive characters, and we are also, of course, the most intelligent known species.

Indeed, as discussed by Desmond Morris in his 1960s human ethology classic The Naked Ape (which I have reviewed here), among the traits that have been associated with neoteny in humans are our brain size, growth patterns, hairlessness, inventiveness, upright posture, spinal curvature, smaller jaws and teeth, forward-facing vaginas, lack of a penis bone, the length of our limbs and the retention of the hymen into adulthood.

However, Baker argues:

“Although mankind as a whole is paedomorphous, those ethnic taxa (the Sanids among them) that are markedly more paedomorphous than the rest have never achieved the status of civilization, or anything approaching it, by their own initiative. It would seem that, when carried beyond a certain point, paedomorphosis is antagonistic to purely intellectual advance” (p324).

As to why this might be the case, he speculates in a later chapter:

“Certain taxa have remained primitive or become paedomorphous in their general morphological characters and none of these has succeeded in developing a civilization. It is among these taxa in particular that one finds some indication of a possible cause of mental inferiority in the small size of the brain” (p428).

Yet this is a curious suggestion since neoteny is usually associated with increased brain growth in humans.[43]

Moreover, other authorities class East Asians as a paedomorphic race, and Baker himself classes “the bulk of the population” of Japan as “somewhat paedomorphous” (p538).[44]

However, the Japanese, along with other Northeast Asians, not least the Chinese, have undoubtedly founded great civilizations, have brains as large as, or, after controlling for body-size, even larger than, those of Europeans, and are generally reported to have somewhat higher IQs (see Lynn’s Race Differences in Intelligence, which I have reviewed here).

The Big Butts of Bushmen – or just of Bushwomen?

[Image: Bushwomen’s buttocks (‘steatopygia’)]

Having discussed male genitalia, Baker also emphasizes the primary and secondary sexual characteristics of Sanid women – in particular their protruding buttocks (“steatopygia”) and alleged elongated labia.

The protruding buttocks of Sanid women are, Baker contends, qualitatively different in both shape and indeed composition from those of other populations, including the much-celebrated ‘big butts’ of contemporary African-Americans (p318).

Thus, whereas, among other populations, the buttocks, even if very large, are “rounded” in shape:

“It is particularly characteristic of the Khoisanids that the shape of the projecting part is that of a right-angled triangle, the upper edge being nearly horizontal… [and] internally… consist of masses of fat incorporated between criss-crossed sheets of connective tissue said to be joined to one another in a regular manner” (p318).

Although there is abundant photographic evidence for this character, proving that it is not a mere racist myth of nineteenth-century anthropology, the trait does not appear to be universal among San women: it is also easy to find images of San women who do not have exceptionally large or protruding buttocks. It is therefore possible that racist nineteenth-century anthropologists exaggerated the ubiquity of the trait, just as politically correct modern anthropologists tend to ignore or play it down.

Regarding the function of these enlarged buttocks, Baker rejects any analogy with the humps of the camel, which evolved as reserves of fat upon which the animal could call in the event of famine or drought.

Unlike camels, which are, of course, adapted to a desert environment, Baker concludes:

“The Hottentots, Korana, and Bushmen are not to be regarded as people adapted by natural selection to desert life” (p318).

Today, however, San Bushmen are indeed largely restricted to a desert environment, namely the Kalahari.

However, although he does not directly discuss this, Baker presumably regards this as a recent displacement resulting from the Bantu expansion, in the course of which the less advanced San were driven from their traditional hunting grounds in southern Africa by Bantu agriculturalists, and permitted to eke out an undisturbed existence only in an arid desert environment of no use to the latter.

Instead of having evolved as fat reserves in the event of famine, drought or scarcity, Baker instead suggests that Khoisan buttocks evolved through sexual selection.

As authority, he cites Darwin’s observation in The Descent of Man that, according to the reports of an earlier anthropologist, zoologist and explorer, this peculiarity is “greatly admired by the men”, to such an extent that the latter reported observing:

“[One] woman who was considered a beauty, and she was so immensely developed behind, that when seated on level ground she could not rise, and had to push herself along until she came to a slope” (The Descent of Man).

This theory – namely that these large protruding buttocks evolved through sexual selection – seems plausible given the sexual appeal of ‘big butts’ even among western populations. However, recent research suggests that it is actually lumbar curvature, or lordosis, an ancient mammalian mating signal, rather than fat deposits in the buttocks as such, that is primarily responsible for the perceived attractiveness of so-called ‘big butts’ (Lewis et al 2015).

The sexual selection theory is, of course, also consistent with the fact that large buttocks among the San seem to be largely, if not entirely, restricted to women.

However, Carleton Coon, in Racial Adaptations: A Study of the Origins, Nature, and Significance of Racial Variations in Humans, suggests alternatively that this sexual dimorphism could instead reflect the caloric requirements of pregnancy and lactation.[45]

The caloric demands of pregnancy and lactation are indeed the probable reason women of all races have greater fat deposits than do males.

Indeed, an analogy might be provided by female breasts, since these, unlike the mammary glands of other mammalian species, are present permanently, from puberty on, and, save during pregnancy and lactation, are composed predominantly of fatty tissues, not milk.[46]

Elusive Elongated Labia?

[Image: The only photographic evidence of the ‘Hottentot apron’?]

In addition to their enlarged buttocks, Baker also discusses the alleged elongated labia of Sanid women, sometimes referred to, rather inaccurately in Baker’s view, as “the Hottentot apron”.

Some writers have discounted this notion as a sort of nineteenth-century anthropological myth. However, Baker himself insists that the elongated labia of the San are indeed real.

His evidence, however, is less than compelling, the illustrations included in the text being limited to a full-body photograph in which the characteristic is barely visible (p311) and what seems to be a surely rather fanciful sketch (p315).

Likewise, although a Google image search produces abundant photographic evidence of Khoisan buttocks, their elongated labia prove altogether more elusive.

Perhaps the modesty of Khoisan women, or the prudery and puritanism of Victorian anthropologists and explorers, prevented the latter from recording photographic evidence for this characteristic.

However, it is perhaps telling that, even in this age of Rule 34 of the Internet (“If it exists, there is porn of it. No exceptions”), I have been unable to find photographic evidence for this trait.

Racial Superiority

The fourth and final section of ‘Race’ turns to the most controversial topic addressed by Baker in this most controversial of books, namely whether any racial group can be said to be superior or inferior to another, a question that Baker christens “the Ethnic Question”.

He begins by critiquing the very nature of the notion of superiority and inferiority, observing in a memorable and quotable aphorism:

“Anyone who accepts it as a self-evident truth, in accordance with the American Declaration of Independence, that all men are created equal may properly be asked whether the meaning of the word ‘equal’ is self-evident” (p421).

Thus, if one is “concerned simply with the question whether the taxa are similar or different”, then, Baker concludes, “there can be no doubt as to the answer” (p421).

Indeed, this much is clear, not simply from the huge amount of data assembled by Baker himself in previous chapters, but also from simple observation.[47]

However, Baker continues:

“The words ‘superior’ and ‘inferior’ are not generally used unless value judgements are concerned” (p421).

Any value judgement is, of course, necessarily subjective.

On objective criteria, each race can only be said to be, on average, superior in a specific endeavour (e.g. IQ tests, basketball, running, mugging, pimping, drug-dealing, tanning, making music, building civilizations). The value to be ascribed to these endeavours is, however, wholly subjective.

On these grounds, contemporary self-styled race realists typically disclaim any association between their theories and any notions of racial superiority.

Yet these race realists are often the very same individuals who emphasise the predictive power of IQ tests in determining many social outcomes (income, criminality, illegitimacy, welfare dependency) which are generally viewed in anything but value-neutral terms (see The Bell Curve: which I have reviewed here).

From a biological perspective, no species (or subspecies) is superior to any other. Each is adapted to its own ecological niche and hence presumably superior at surviving and reproducing within the specific environment in which it evolved.

Thus, sociobiologist Robert Trivers quotes his mentor Bill Drury as observing during a discussion between the two regarding a possible biological basis for race prejudice:

“Bob, once you’ve learnt to think of a herring gull as equal, the rest is easy” (Natural Selection and Social Theory: p57).

However, taken to its logical conclusion, or reductio ad absurdum, this suggests a dung beetle is equal to Beethoven!

From Physiology to Psychology

Although he alludes in passing to race differences in athletic ability, Baker, in discussing superiority, is concerned primarily with intellectual and moral achievement. Therefore, in this final section of the book, he turns from physiological differences to psychological ones.

Of course, the two are not entirely unconnected. All behaviour must have an ultimate basis in the brain, which is itself a part of an organism’s physiology. Thus:

“Cranial capacity is, of course, directly relevant to the ethnic problem since it sets a limit to the size of the brain in different taxa; but all morphological differences are also relevant in an indirect way, since it is scarcely possible that any taxa could be exactly the same as one another in all the genes that control the development and function of the nervous and sensory systems, yet so different from one another in structural characters in other parts of the body” (p533-4).

Indeed, Baker observes:

“Identity in habits is unusual even in pairs of taxa that are morphologically much more similar to one another than [some human races]. The subspecies of gorilla, for instance, are not nearly so different from one another as Sanids are from Europids, but they differ markedly in their modes of life” (p426).

In other words, since human races differ significantly in their physiology, it is probable that they will also differ, to a roughly equivalent degree, in psychological traits, such as intelligence, temperament and personality.

Measuring Superiority?

In discussing the question of the intellectual and moral superiority of different racial groups, Baker focusses on two lines of evidence in particular:

  1. Different races’ performance in ability and attainment tests;
  2. Different races’ historical track record in founding civilizations.

Baker’s discussion of the former topic is now rather dated.

Recent findings unavailable to Baker include the discovery that East Asians score somewhat higher on IQ tests than do white Europeans (see Race Differences in Intelligence: reviewed here), and also that Ashkenazi Jews score higher still (see The Chosen People: review forthcoming).[48]

Evidence has also accumulated regarding the relative contribution of heredity to racial differences in IQ, including the Minnesota transracial adoption study (Scarr & Weinberg 1976; Weinberg et al 1992), studies of the effects of racial admixture on IQ using blood-group data (Loehlin et al 1973; Scarr et al 1977) and, most recently, genome analysis (Lasker et al 2019).

Readers interested in more recent research on this issue should consult Jensen and Rushton (2005) and Nisbett (2005); Nicholas Mackintosh’s summary in Chapter Thirteen of his textbook, IQ and Human Intelligence (2nd Ed) (pp324-359); or indeed my own review of Richard Lynn’s ‘Race Differences in Intelligence: An Evolutionary Perspective’, posted here.[49]

Criteria for Civilization and Moral Relativism

While his data on race differences in IQ is therefore now dated, Baker’s discussion of the track-record of different races in founding civilizations remains of interest today, if only because this is a topic studiously avoided by most contemporary authors, historians and anthropologists on account of its politically-incorrect nature – though Jared Diamond, in Guns, Germs and Steel (which I have reviewed here), represents an important recent exception to this trend.[50]

The first question, of course, is precisely how one is to define ‘civilization’, itself a highly contentious issue.[51]

Thus, Baker identifies twenty-one criteria for recognising civilizations (p507-8).[52]

In general, these can be divided into two types:

  1. Scientific/technological criteria;
  2. Moral criteria.[53]

However, the latter are inherently problematic. What constitutes moral superiority itself involves a moral judgement that is necessarily subjective.

In other words, whereas technological and scientific superiority can be demonstrated objectively, moral superiority is a mere matter of opinion.

Thus, the ancient Romans, transported to our times, would surely accept the superiority of our technology – and, if they did not, we would, as a consequence of the superiority of our technology, outcompete them both economically and militarily and hence prove it ourselves.

However, they would view our social, moral and political values as decadent and we would have no way of proving them wrong.

Take, for example, Baker’s first requirement for civilization, namely that:

“In the ordinary circumstances of life in public places they [i.e. members of the society under consideration] cover the external genitalia and greater part of the trunk with clothes” (p507).

This criterion is not only curiously puritanical, but also blatantly biased against tropical cultures. Whereas in temperate and arctic zones clothing is essential for survival, in the tropics the decision to wear clothing represents little more than an arbitrary fashion choice.

Meanwhile, the requirement that the people in question “do not practice severe mutilation or deformation of the body”, another moral criterion, could arguably exclude contemporary westerners from the ranks of the ‘civilized’, given the increasing prevalence of tattooing, flesh-tunnel ear plugs and other forms of extreme bodily modification (not to mention gender reassignment surgery and other non-consensual forms of genital mutilation) – or perhaps it is merely those among us who succumb to such fads who are not truly civilized.

The requirement that a civilization’s religious beliefs not be “purely or grossly superstitious” (p507) is also problematic. As a confirmed atheist, I suspect that all religions are, by their very definition, superstitious. If some forms of Buddhism and Confucianism are exceptions, then perhaps they are simply not religions at all in the western sense.

At any rate, Christian beliefs regarding miracles, resurrection, the afterlife, the Holy Spirit and so on surely rival those of any other religion when it comes to “gross superstition”.

As for his complaint that the religion of the Mayans “did not enter into the fields of ethics” (p526), a complaint he also raises in respect of indigenous black African religions (p384), contemporary moral philosophers generally see this as a good thing, believing that religion is best kept out of moral debates.[54]

In conclusion, any person seeking to rank cultures on moral criteria will, almost inevitably, rank his own society as morally superior to all others – simply because he is judging these societies by the moral standards of his own society that he has internalized and adopted as his own.

Thus, Baker himself views Western civilization as superior to such pre-Columbian Mesoamerican civilizations as the Aztecs due to the latter’s practice of mass ritual human sacrifice and cannibalism (p524-5).

However, in doing so, he is judging the cultures in question by distinctly Western moral standards. The Aztecs, in contrast, may have viewed human sacrifice as a moral imperative and may therefore have viewed European cultures as morally deficient precisely because they did not butcher enough of their people in order to propitiate the gods.

Likewise, whereas Baker views cannibalism as incompatible with civilization (p507), I personally view cannibalism as, of itself, a victimless crime. A dead person, being dead, is incapable of suffering by virtue of being eaten. Indeed, in this secular age of environmental consciousness, one might even praise cannibalism as a highly ‘sustainable’ form of recycling.

For this reason, in my own discussion of the different cultures and civilizations founded by members of different races, I will confine my discussion exclusively to scientific and technological criteria for civilization.

Sub-Saharan African Cultures

Baker’s discussion of different groups’ capacity for civilization actually begins before his final section on “Criteria for Superiority and Inferiority” in his four chapters on the race whom Baker terms Negrids – namely, black Africans from south of the Sahara, excluding Khoisan and Pygmies (p325-417).

Whereas his previous chapters discussing specific selected human populations focussed primarily, or sometimes exclusively, on their morphological peculiarities, in these four chapters on African blacks his focus shifts from morphology to culture.

Thus, Baker writes:

“The physical characters of the Negrids are mentioned only briefly. Members of this race are studied in Chapters 18-21 mainly from the point of view of the social anthropologist interested in their progress towards civilization at a time when they were still scarcely influenced over a large part of their territory, by direct contact with members of more advanced ethnic taxa” (p184).

Unlike some racialist authors,[55] Baker acknowledges the widespread adoption of advanced technologies throughout much of sub-Saharan Africa prior to modern times. However, he attributes the adoption of these technologies to contact with, and borrowings from, outside non-Negroid civilizations (e.g. Arabs, Egyptians, Moors, Berbers, Europeans).

Therefore, in order to distinguish the indigenous, homegrown capacity of black Africans to develop advanced civilization, Baker relies on the reports of seven nineteenth century explorers of what he terms “the secluded area” of Africa, by which term Baker seems to mean the bulk of inland Southern, Eastern and Central Africa, excluding the Horn of Africa, the coast of West Africa and the Gulf of Guinea (p334-5).[56]

In these parts of Africa, at the time these early European explorers visited the continent, the influence of outside civilizations was, Baker reports, “non-existent or very slight” (p335). The cultural practices observed by these explorers therefore, for Baker, provide a measure of black Africans’ indigenous capacity for social, cultural and technological advancement.

On this perhaps dubious basis, Baker thus concludes that there is no evidence black Africans in this area ever independently domesticated any species of wild plant or animal, or invented the wheel.

Also largely absent throughout ‘the secluded area’, according to Baker, were buildings of stone, or of more than one storey, and any form of writing.

In respect of these last two indices of civilization, however, Baker admits a couple of partial, arguable exceptions, which he discusses in the next chapter (Chapter 21). These include the ruins of Great Zimbabwe (p401-9) and a script invented in the nineteenth century (p409-11).[58]

Domesticated Plants and Animals in Africa

Let’s review these claims in turn. First, it certainly seems to be true that few if any species of either animals or plants were domesticated in what Baker calls “the secluded area” of sub-Saharan Africa.[59]

However, with respect to plants, there may be a reason for this. Many important, early domesticates were annuals. These are plants that complete their life-cycle within a single year, taking advantage of predictable seasonal variations in the weather.

As explained by Jared Diamond, annual plants are ideal for human consumption, and for domestication, because:

“Within their mere one year of life, annual plants inevitably remain small herbs. Many of them instead put their energy into producing big seeds, which remain dormant during the dry season and are then ready to sprout when the rains come. Annual plants therefore waste little energy on making inedible wood or fibrous stems, like the body of trees and bushes. But many of the big seeds… are edible by humans. They constitute 6 of the modern world’s 12 major crops” (Guns, Germs and Steel: p136).

Yet sub-Saharan Africa, being located closer to the equator, experiences less seasonal variation in climate. As a result, relatively fewer plants are annuals.

However, it is far less easy to explain why sub-Saharan Africans failed to domesticate any wild species of animal, with the possible exception of guineafowl.[60]

After all, Africa is popular as a tourist destination today in part precisely because it has a relative abundance of large wild mammals of the sort seemingly well suited for domestication.[61]

Jared Diamond argues that the African zebra, a close relative of other wild equids that were domesticated, was undomesticable because of its aggression and what Diamond terms its “nasty disposition” (Guns, Germs and Steel: p171-2).[62]

However, this is unconvincing when one considers that Eurasians succeeded in domesticating such formidably powerful and aggressive wild species as wolves and aurochs.[63]

Thus, even domesticated bulls remain physically formidable and aggressive animals. Indeed, they were favoured adversaries in blood sports such as bullfighting and bull-baiting for precisely this reason.

However, the wild aurochs, from which modern cattle derive, was undoubtedly even more formidable, being not only larger, more heavily muscled and bigger-horned, but surely also even more aggressive than modern bulls. After all, one of the key functions of domestication is to produce more docile animals that are more amenable to control by human agriculturalists.[64]

Compared to the domestication of aurochs, the domestication of the zebra would seem almost straightforward. Indeed, the successful domestication of aurochs in ancient times might even cause us to reserve judgement regarding the domesticability of such formidable African mammals as hippos and African buffalo, the possibility of whose domestication Diamond dismisses a priori as preposterous.

Certainly, the domestication of the aurochs surely stands as one of the great achievements of ancient Man.

Reinventing the Wheel?

Baker also seems to be correct in his claim that black Africans never invented the wheel.

However, it must be borne in mind that the same is also probably true of white Europeans, who, rather than independently inventing the wheel for themselves, had the easier option of simply copying its design from other civilizations and peoples, namely those of the Middle East, probably Mesopotamia, where the wheel seems first to have been developed.

Indeed, most cultures with access to the wheel never actually invented it themselves, for the simple reason that it is far easier to copy a third party’s invention through simple reverse engineering than to invent an already existing technology afresh.

This then explains why the wheel has actually been independently invented, at most, only a few times in history.

The real question, then, is not why the wheel was never invented in sub-Saharan Africa, but rather why it failed to spread throughout that continent in the same way it did throughout Eurasia.

Thus, if the wheel was known, as Baker readily acknowledges it was, in those parts of sub-Saharan Africa that were in contact with outside civilizations (notably in the Horn of Africa), then this raises the question as to why it failed to spread elsewhere in Africa prior to the arrival of Europeans. This indeed is acknowledged to remain a major enigma within the field of African history and archaeology (Law 2011; Chavez et al 2012).

After all, there are no obvious insurmountable geographical barriers preventing the spread of technologies across Africa other than the Sahara itself, and, as Baker himself acknowledges, black Africans in the ‘penetrated’ area had proven amply capable of imitating technological advances introduced from outside.

Why then did the wheel not spread across Africa in the same way it did across Eurasia? Is it possible that African people’s alleged cognitive deficiencies were responsible for the failure of this technology to spread and be copied, since the ability to copy technologies through reverse engineering itself requires some degree of intellectual ability, albeit surely less than that required for original innovation?

One might argue instead that the African terrain was unsuitable for wheeled transport. However, one of the markers of a civilization is surely its very ability to alter the terrain by large, cooperative public works engineering projects, such as the building of roads.

Thus, most of Eurasia is now suitable for wheeled transport in large part only because we, or more specifically our ancestors, have made it so.

Another explanation sometimes offered for the failure of sub-Saharan Africans to develop wheeled transportation is that they lacked a suitable draft animal, horses in sub-Saharan Africa being afflicted by trypanosomiasis (the animal form of sleeping sickness), spread by the tsetse fly.

However, as we have seen above, Baker argues that a race’s track record in successfully domesticating wild animals is itself indicative of the intellectual ability and character of that race. For Baker, then, the failure of sub-Saharan Africans to domesticate any suitable species of potential draft animal (e.g. the zebra: see above) is itself indicative of, and a factor in, their inability to develop advanced civilization.

At any rate, even in the absence of a suitable draft animal, wheels are still useful.

On the one hand, they can be used for non-transport-related purposes (e.g. the spinning wheel, the potter’s wheel, even water wheels). Indeed, in Eurasia the invention of the potter’s wheel is actually thought to have preceded the use of wheels for the purposes of transportation.

On the other hand, even without a suitable draft animal, wheels remain very useful for transportation purposes (e.g. wheelbarrows, pulled rickshaws).

In other words, humans can themselves be employed as a draft animal, whether by choice or by force, and, if there is one arguable marker for civilization for which Africa did not lack, and which did not await introduction by Europeans, Moors and Arabs, it was, of course, the institution of slavery.

African Writing Systems?

What then of the alleged failure of sub-Saharan Africans to develop a system of writing? Baker refers to only a single writing system indigenous to sub-Saharan Africa, namely the Vai syllabary, invented in what is today Liberia in the nineteenth century in imitation of foreign scripts. Was this indeed the only writing system indigenous to sub-Saharan Africa?

Of course, writing has long been known in North Africa, not least in ancient Egypt, whose famous hieroglyphs not only form the ultimate basis for our own Latin alphabet, but are also claimed by some Egyptologists to represent the earliest form of writing developed anywhere in the world, though most archaeologists believe the Egyptians were beaten to the punch, once again, by Mesopotamia, with its cuneiform script.

However, this is obviously irrelevant to the question of black African civilization, since the populations of North Africa, including the ancient Egyptians, were largely Caucasoid.[65]

Thus, the Sahara Desert, as a relatively impassable obstacle to human movement throughout most of human history and prehistory (a “geographic filter”, according to Sarich and Miele) that hence impeded gene flow, has long represented, and to some extent still represents, the boundary between the Caucasoid and Negroid races (Race: The Reality of Human Differences: p210).

What then of writing systems indigenous to sub-Saharan Africa? The wikipedia entry on writing systems of Africa lists several indigenous African writing systems of sub-Saharan Africa.

However, save for those of recent origin, almost all of these writing systems seem, from the descriptions on their respective wikipedia pages, to have been restricted to areas outside of ‘the secluded area’ of Africa as defined by Baker (p334-5).

Thus, excluding the writing systems of North Africa (i.e. Meroitic, Tifinagh and ancient Egyptian hieroglyphs), Ge’ez seems to have been restricted to the area around the Horn of Africa; Nsibidi to the area around the Gulf of Guinea in modern Nigeria; and Adinkra to the coast of West Africa, while the other scripts mentioned in the entry are, like the Vai syllabary, of recent origin.

The only ancient writing system mentioned on this wikipedia page that was found in what Baker calls ‘the secluded area’ of Africa is Lusona. This seems to have been developed deep in the interior of sub-Saharan Africa, in parts of what is today eastern Angola, north-western Zambia and adjacent areas of the Democratic Republic of the Congo. Thus, it is almost certainly of entirely indigenous origin.

However, Lusona is described by its wikipedia article as only an ideographic tradition that “function[s] as mnemonic devices to help remember proverbs, fables, games, riddles and animals, and to transmit knowledge”.

It therefore appears to fall far short of a fully developed script in the modern sense.

Indeed, the same seems to be true, albeit to a lesser extent, of the other older indigenous writing systems of sub-Saharan Africa listed on the wikipedia page, namely Nsibidi and Adinkra, each of which seems to represent only a form of proto-writing.

Only Ge’ez seems to have been a fully-developed script, and this was used only in the Horn of Africa, which not only lies outside ‘the secluded area’ as defined by Baker, but whose population is, again according to Baker, predominantly Caucasoid (p225).

Also, Ge’ez seems to have developed from an earlier Middle Eastern script. It is therefore not of entirely indigenous African origin.

It therefore does indeed seem to be true that sub-Saharan Africans never produced a fully-developed script in those parts of Africa beyond the influence of foreign empires.

However, it must here be emphasized that the same is again probably also true of indigenous Europeans.

Thus, as with the wheel, Europeans themselves probably never independently invented a writing system, the Latin alphabet being derived from Greek script, which was itself developed from the Phoenician alphabet, which, like the wheel, first originated in the Middle East, and was itself adapted from Egyptian hieroglyphs.[66]

Indeed, most writing systems were developed, if not directly from, then at least in imitation of, pre-existing scripts. Like the wheel, writing has only been independently reinvented afresh a few times in history.[67]

The question, then, as with the wheel, is not so much why much of sub-Saharan Africa failed to invent a written script, but rather why those written scripts that were in use in certain parts of the continent south of the Sahara nevertheless failed to spread, or be imitated, over the remainder of that continent.

African Culture: Concluding Thoughts

In conclusion, it certainly seems clear that much of sub-Saharan Africa was indeed backward in those aspects of technology, social structure and culture which Baker identifies as the key components of civilization. This much is true and demands an explanation.

However, blanket statements regarding the failure of sub-Saharan Africans to develop a writing system or two-storey buildings seem, at best, a misleading simplification.

Indeed, Baker’s very notion of what he calls ‘the secluded area’ of Africa is vague and ill-defined, and he never provides a clear definition, or, better still, a map precisely delineating what he means by the term (p334-5).

Indeed, the very notion of a ‘secluded area’ is arguably misconceived, since even relatively remote and isolated areas of the continent that had no direct contact with non-Negroid peoples will presumably have experienced some indirect influence from outside sub-Saharan Africa, if only through contact with those regions south of the Sahara that had themselves been influenced by foreign peoples and civilizations.

After all, as we have seen, Europeans also failed to independently develop either the wheel or writing for themselves, instead simply copying these innovations from the neighbouring civilizations of the Middle East.

While, today, politically-correct leftists selectively condemn certain cultural borrowings as cultural appropriation, in reality, copying and improving upon the inventions, discoveries and technological advances of others, including those of different civilizations and cultures (standing on the shoulders of giants), has long been central to both technological and scientific progress.   

Why then were black Africans south of the Sahara, who were indeed exposed to technologies such as the wheel and writing in certain parts of their territory, nevertheless unable to spread these technologies into the remainder of the continent in the same way as Europeans and Asians did?

Perhaps one factor impeding the movement of technologies such as the wheel and writing across sub-Saharan Africa in pre-modern times is the relative lack of navigable waterways (e.g. rivers) in the region.

As emphasized by Tim Marshall in his book Prisoners of Geography, rivers in sub-Saharan Africa tended to be non-navigable, mainly because of the prevalence of large waterfalls that made transport by river a dangerous venture.

Since, in ancient and premodern times, transport by river was, at least in Eurasia, generally easier, safer and quicker than transport by land, Africa’s generally non-navigable river system may, ironically, have impeded the spread throughout Africa even of technologies that were themselves of use primarily for transportation, such as the wheel.

Pre-Columbian Native American Cultures

Baker’s discussion of the status of the pre-Columbian civilizations, or putative civilizations, of the Americas is especially interesting. Of these, the Mayans definitely stand out, in Baker’s telling, as the most impressive in terms of their scientific and technological achievements.

Baker ultimately concludes, however, that even the Maya do not qualify as a true civilization, largely on moral grounds – namely, their practice of mass sacrifices and cannibalism.

Yet, as we have seen, this is to judge the Mayans by distinctly western moral standards.

No doubt if western cultures were to be judged by the moral values of the Mayans, we too would be judged just as harshly. Perhaps they would condemn us precisely for not massacring enough of our citizens in order to propitiate the gods.

However, even seeking to rank the Mayans based solely on their technological and scientific achievements, they still represent something of a paradox.

On the one hand, their achievements in mathematics and astronomy seem impressive.

Indeed, Baker claims that it was the Mayans who first invented the concept of zero, not the Hindus or Muslims, who are more often credited with the innovation – or rather, to put the matter more precisely, it was the Mayans who first “invent[ed] a ‘local value’ (or ‘place notational’) system of numeration that involved zero: that is to say, a system in which the value of each numerical symbol depended on its position in a series of such symbols, and the zero, if required, took its place in this series” (p552).

Thus, Baker writes:

“The Maya had invented the idea [of zero] and applied it to their vigesimal system [i.e. using a base of twenty] before the Indian mathematicians had thought of it and used it in denary [i.e. decimal] notation” (p522).[68]
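To make concrete what such a place-value system involves, here is a worked example of my own (using arbitrary digits, not actual Mayan numerals): in a purely vigesimal notation, a three-digit numeral with digits 2, 0 and 15 denotes

$$2 \times 20^2 + 0 \times 20^1 + 15 \times 20^0 = 800 + 0 + 15 = 815,$$

with the zero serving purely as a placeholder marking the empty twenties column; without a symbol for zero, this numeral would be indistinguishable from the two-digit numeral denoting 2 × 20 + 15 = 55. (Strictly, in the Maya calendrical notation, the third position counted multiples of 18 × 20 = 360 rather than 20², but the principle is the same.)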

Thus, Baker concludes:

“The mathematics, astronomy, and calendar of the Middle Americans suggest unqualified acceptance into the ranks of the civilized” (p525).

However, on the other hand, according to Baker’s account:

“They had no weights… no metal-bladed hoes or spades and no wheels (unless a few toys were actually provided with wheels and really formed part of the Mayan culture)” (p524).

Yet, as Baker alludes to in his rather disparaging reference to “a few toys”, it now appears that these toys were indeed part of the Maya culture.

Thus, far from failing to invent the wheel, Native Americans are one of the few peoples in the world with an unambiguous claim to have indeed invented the wheel entirely independently, since the possibility of wheels being introduced through contact with Eurasian civilizations is exceedingly remote.

Thus, the key question is not why Native American civilizations failed to invent the wheel, for they did indeed invent it, but rather why they failed to make full use of this remarkably useful invention, employing it seemingly only for frivolous items resembling toys (whose real purpose is unknown), rather than for transport, or indeed for the production of ceramics, textiles or energy.

Terrain may have been a factor, since the geography of much of the Mayan territory is particularly uninviting, both to wheeled transport and to general civilizational progress.

Indeed, Baker himself even approves the view that, far from “civilisation develop[ing] wherever the environment was genial”, in fact “it might be nearer the mark to claim the opposite”, since “civilisations, like individuals, respond to challenge”, and he specifically cites the Maya, along with other so-called hydraulic empires that harnessed irrigation and the control of water for cooperation and control, as an example of this, remarking that “their culture reached its climax in that particular part of their extensive territory in which the environment was least favourable” (p528).

However, as mentioned above, one of the markers of a true civilization is arguably its very ability to alter its terrain by large-scale engineering projects such as the building of roads. Thus, if the geography of much of Mesoamerica was unsuitable for wheeled transport, perhaps this was only because the inhabitants failed to sufficiently transform it so as to render it so.

As in respect of sub-Saharan Africa, another factor sometimes cited is the absence of a suitable draft animal.

The Inca, but not the Aztecs and Maya, did have the llama. However, llamas are not strong enough to carry humans, or to pull large carts.

Of course, for Baker, as we have seen above, a race’s track record in domesticating non-human animals, including for use as draft animals, is itself indicative of that race’s ability and capacity to build and maintain advanced civilization.

However, as pointed out by Jared Diamond, in the Americas most large wild mammals of the sort possibly suited for domestication as draft animals were wiped out by the first humans to arrive on the continent, having evolved in complete isolation from humans and hence being entirely evolutionarily unprepared for the sudden influx of humans with their formidable hunting skills.[69]

Thus, Jared Diamond in Guns, Germs and Steel (which I have reviewed here) argues:

“Ancient Native Mexicans invented wheeled vehicles with axles for use as toys, but not for transport. This seems incredible to us until we reflect that ancient Mexicans lacked domestic animals to hitch to their wheeled vehicles, which therefore offered no advantage over human porters” (Guns, Germs and Steel: p248).

However, it is simply not true that, in the absence of a draft animal, wheeled vehicles “offered no advantage over human porters”, as claimed by Diamond. On the contrary, as discussed above, humans can themselves be employed as draft animals (e.g. the wheelbarrow and pulled rickshaw), and, as Diamond himself observes in a later chapter:

“Human-powered wheelbarrows… enabled one or more people, still using just human muscle power, to transport much greater weights than they could have otherwise” (Guns, Germs and Steel: p359).

Moreover, as again discussed above, the wheel also has other uses besides transport, one of which, the potter’s wheel, actually seems to have been adopted before the use of wheels for transportation purposes in Europe. Yet there is no evidence for the use of the potter’s wheel in the Americas prior to the arrival of Europeans. 

As for the Mayan script, this was also, according to Baker, quite limited. Thus, Baker reports:

“There was no way of writing verbs, and abstract ideas (apart from number) could not be inscribed. It would not appear that the technique even of the Maya lent itself to a narrative form, except in a very limited sense. Most of the Middle Americans conveyed non-calendrical information only by speech or by the display of a series of paintings” (p524).

Indeed, he reports that “nearly all their inscriptions were concerned with numbers and the calendar” (p524).

“The Middle Americans had nothing that could properly be called a narrative script” (p523-4).

Baker vs Diamond: The Rematch

However, departing from Baker’s conclusions, I regard the achievements of the Mesoamerican civilizations as, overall, quite impressive.

This is especially so when one takes into account, not only their complete isolation from the Old World civilizations of Eurasia, but also the other factors identified by Jared Diamond in his rightly-acclaimed Guns, Germs and Steel (reviewed here).

Thus, whereas the Eurasian cultural zone is oriented largely on an east-to-west axis, spreading from China and Japan in the East to western Europe and North Africa in the West, the Americas form a long, narrow landmass oriented instead from north to south, quite narrow in places, especially at the Isthmus of Panama, where North America meets South America, and which, at its narrowest point, is less than fifty miles across.

As Diamond emphasizes, because climate varies with latitude (i.e. distance from the equator), this means that different parts of the Americas have very different climates, making the movement and transfer of crops, domesticated animals and people much more difficult.

This, together with the difficulty of the terrain, might explain why even the Incas and Aztecs, though contemporaneous, seem to have been largely if not wholly unaware of one another’s existence, and certainly had no direct contact.

As a result, Native American cultures developed, not only in complete isolation from Old World civilizations, but largely in isolation even from one another.

Moreover, the Americas had few large domesticable mammals, almost certainly because the first settlers of the continent hunted them to extinction on arrival, the mammals, having evolved in complete isolation from humans, being wholly unprepared for, and unadapted to, the sudden arrival of humans with their formidable hunting skills.

In these conditions, the achievements of the Mesoamerican civilizations, especially the Mayans, seem to me quite impressive, all things considered – certainly far more impressive than the achievements of, say, sub-Saharan Africans or Australian Aboriginals.

This is especially so in comparison to sub-Saharan Africa when one takes into consideration the fact that the latter region was neither completely isolated from Eurasian civilizations nor as narrowly oriented on a north-south axis as are the Americas.

Thus, as has been emphasized by astrophysicist Michael Hart in his book, Understanding Human History, Diamond’s theory is a rather more successful explanation for the technological backwardness and underdevelopment of the pre-Columbian Americas than it is for the even greater technological backwardness and underdevelopment of sub-Saharan Africa.

Thus, if black Africans and Australian Aboriginals can indeed be determined to possess lesser innate intellectual capacity as compared to, say, Europeans or East Asians, it is nevertheless premature, I feel, to say the same of the indigenous peoples of the Americas.

Artistic Achievement

In addition to ranking cultures on scientific, technological and moral criteria, Baker also assesses the quality of their artwork (p378-81; p411-17; p545-549). However, judgements of artistic quality, like moral judgements, are necessarily subjective.

Indeed, Baker’s own manifest biases are here readily apparent. Thus, on the one hand, he disparages black African art as almost invariably non-naturalistic (p381), yet, at the same time, extols the decorative art of the ancient Celts, which is mostly non-figurative and abstract (p261-2).

However, interestingly, with regard to styles of music, Baker does, to his credit, recognise the possibility of cultural bias. Thus, he suggests that European explorers were generally dismissive of indigenous African music only because, looking for European-style melody and harmony, they failed to recognise the rhythmical qualities of African music, which are, Baker claims, perhaps unequalled in the music of any other race of mankind (p379).[70]

“A Reminder of What Was Possible”?

The fact that ‘Race’ remains a rewarding read some forty years after its first publication is an indictment of the hold of political correctness over both science and the publishing industry.

In the intervening years, despite all the advances of molecular genetics, the scientific understanding of race seems to have progressed but little, impeded by political considerations.

Meanwhile, the study of morphological differences between races seems to have almost entirely ceased, and a worthy successor to Baker’s ‘Race’, incorporating the latest genetic data, has, to my knowledge, yet to be published.

At the conclusion of the first section of his book, dealing with what Baker calls “The Historical Background”, Baker, bemoaning the impact of censorship and what would today be called political correctness and cancel culture on both science and the publishing industry, recommends the chapter on race from a textbook published in 1928 (namely, Contemporary Sociological Theories by Pitirim Sorokin) as “well worth reading”, even then, over forty years later, if only “as a reminder of what was still possible before the curtain went down” (p61).

Today, some forty years after Baker penned these very words and as the boundaries of acceptable opinion have narrowed yet further, I recommend Baker’s ‘Race’ in much the same spirit – as both an historical document and “a reminder of what was possible”.

__________________________

Endnotes

[1] For example, anthropologist-geneticist Vincent Sarich and science writer Frank Miele, in their book Race: The Reality of Human Differences (which I have reviewed here and here), provide a good example from the history of race science of the convergent evolution of similar traits among different human lineages being mistaken for evidence of homology, and hence of shared ancestry, when they write:

“There are two groups of people with the combination of dark skin and frizzy hair – sub-Saharan Africans and Melanesians. The latter have often been called ‘Oceanic Negroes,’ implying a special relationship with Africans. The blood group data, however, showed that they are about as different from Africans as they could be” (Race: The Reality of Human Differences: p134).

Genetic studies often allow us to distinguish homology from analogy, because the same or similar traits in different populations often evolve through different genetic mutations. For example, Europeans and East Asians evolved lighter complexions after leaving Africa in part through mutations in different genes (Norton et al 2007). Similarly, lactase persistence has evolved through mutations in different genes in Europeans than among some sub-Saharan Africans (Tishkoff et al 2009). Of course, at least in theory, the same mutation in the same gene could occur in different populations, providing an example of convergent evolution and homoplasy even at the genetic level. However, this is unlikely, and the analysis of a large number of genetic loci, especially in non-coding DNA, where mutations are unlikely to be selected for or against and hence are lost or retained at random in different populations, is unlikely to lead to errors in determining the relatedness of populations.
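By way of illustration only – a toy simulation of my own, not anything drawn from Baker or from Sarich and Miele – the following sketch shows why a distance computed over many independent neutral loci recovers relatedness reliably, even though any single locus might mislead:

```python
import random

# Toy sketch (illustrative assumptions only): two populations that split
# recently (A, B) drift apart less, at most loci, than either does from a
# population (C) that split earlier. Aggregating over many loci therefore
# reflects true ancestry even if a few individual loci show convergent change.

random.seed(42)
N_LOCI = 1000

def drift(freqs, sd):
    """Perturb each allele frequency independently, clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + random.gauss(0, sd))) for p in freqs]

ancestral = [random.random() for _ in range(N_LOCI)]
root_ab = drift(ancestral, sd=0.08)   # lineage ancestral to A and B
pop_a = drift(root_ab, sd=0.03)       # recent sister populations
pop_b = drift(root_ab, sd=0.03)
pop_c = drift(ancestral, sd=0.11)     # earlier, independent split

def distance(x, y):
    """Mean squared allele-frequency difference across loci."""
    return sum((p - q) ** 2 for p, q in zip(x, y)) / len(x)

print(f"d(A, B) = {distance(pop_a, pop_b):.4f}")  # small: recent split
print(f"d(A, C) = {distance(pop_a, pop_c):.4f}")  # larger: deep split
print(f"d(B, C) = {distance(pop_b, pop_c):.4f}")  # larger: deep split
```

Because drift at each locus is independent, the handful of loci that happen to move in the same direction in unrelated populations are swamped by the hundreds that do not, which is the intuition behind using many (preferably non-coding) loci.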

[2] In his defence, the Ainu are not one of the groups upon whom Baker focuses in his discussion, and are only mentioned briefly in passing (p158; p173; p424) and at the very end of the book, in his “Table of Races and Subraces”, where he attempts to list, and classify by race, all the groups mentioned in the book, howsoever briefly (p624-5).

[3] For example, in relation to the controversial issue of race differences in brain size, Beals et al report:

“By 1940, data collection on ethnic groups had virtually ceased (in part because of its association with racial prejudice). For modern populations, comparative data derive from museum specimens, private collections and the by-products of historical archeology” (Beals et al 1984).

In short, political correctness has had a devastating impact on research in this area.

One result is that much of the data on these topics is quite old. Thus, racial hereditarians, Baker included, are sometimes criticized for relying on studies published in the nineteenth and early-twentieth centuries. In principle, however, there is nothing wrong with citing data from this period, unless critics can show that the methodology adopted has subsequently been shown to be flawed or the research fraudulent. Indeed, if this is the only data available, it is a necessity.

However, it must be acknowledged that the findings of such studies with respect to morphology may no longer apply to modern populations, as a result of recent population movements and improvements in health and nutrition, among other factors.

Of course, we no longer need to rely on morphological criteria in order to determine the relatedness between populations, as Baker and other early- to mid-twentieth century anthropologists did, since genetic data is now available and provides a much more reliable, and less problematic, means of doing so. However, it should hardly need stating that the various differences between racial groups in morphology and bodily structure remain an interesting, and certainly a legitimate, subject for scientific study in their own right.

[4] This is a style of formatting I have not encountered elsewhere. It makes it difficult to bring oneself to skip over the material rendered in smaller typeface since it is right there in the main body of the text, and indeed Baker himself claims that this material is “more technical and more detailed than the rest (but not necessarily less interesting)” (pix).

[5] Yet another source of potential terminological confusion results from the fact that, as will be apparent from many passages quoted in this review, Baker uses the word ethnic to refer to differences that would be better termed racial – i.e. when referring to biologically-inherited physical and morphological differences between populations. Thus, for example, he uses the term “ethnic taxon” as “a comprehensive term that can be used without distinction for any of the taxa that are minor to species: that is to say, races, subraces and local forms” (p4). Similarly, he uses the phrase “the ethnic problem” to refer to the “whole subject of equality and inequality among the ethnic taxa of man” (p6). However, as Baker acknowledges, “English words derived from the Greek ἔθνος (ethnic, ethnology, ethnography, and others) are used by some authors in reference to groups of mankind distinguished by cultural or national features, rather than descent from common ancestors” (p4). In defending his adoption of the term, he notes that “this usage is not universal” (p4). This usage has, I suspect, become even more prevalent in the years since the publication of Baker’s book. However, in my experience, the term ‘ethnic’ is sometimes also used as a politically correct euphemism for the word race, both colloquially and in academia.

[6] In both cases, the source of potential confusion is the same, since both terms, though referring to a race, are derived from geographic terms (Europe and the Caucasus region, respectively), yet the indigenous homelands of the races in question are far from identical to the geographic region referred to by the term. The term Asian, when used as an ethnic or racial descriptor, is similarly misleading. For example, in British-English, Asian, as an ethnic term, usually refers to South Asians, since South Asians form a larger and more visible minority ethnic group in the UK than do East Asians. However, in the USA, the term Asian is usually restricted to East Asians and Southeast Asians – i.e. those formerly termed Mongoloid. The British-English usage is more geographically correct, but racially misleading, since populations from the Indian subcontinent, like those from Central Asia and the Middle East (also part of the Asian continent) are actually genetically closer to southern Europeans and North Africans than to East Asians and were generally classed as Caucasian by nineteenth and early-twentieth century anthropologists, and are similarly classed by Baker himself. This is one reason that the term Mongoloid, despite pejorative connotations, remains useful.

[7] Moreover, the term Mongoloid is especially confusing given that it has also been employed to refer to people suffering from a developmental disability and chromosomal abnormality (Down Syndrome). While both usages are dated, and the racial meaning is actually the earlier one from which the later medical usage mistakenly derives, it is the latter usage which seems, in my experience, to retain greater currency, the word ‘Mongoloid’ being sometimes employed as a rather politically-incorrect insult, implying a mental handicap. Therefore, while I find annoying the euphemism treadmill whereby once quite acceptable terms (e.g. ‘Negro’, ‘coloured people’) are suddenly and quite arbitrarily deemed offensive, the term ‘Mongoloid’ is, unlike these other, etymologically quite innocent, terms, understandably offensive to people of East Asian descent, given this dual meaning.

[8] For example, another ethnonym, Asian, is also etymologically problematic. Thus, the word Asia, the source of the ethnonym Asian, derives from the Greek Ἀσία, which originally referred only to Anatolia, at the far western edge of what would now be called Asia, the inhabitants of which region are not now, nor have likely ever been, Asian in the current American sense. Indeed, the very term Asia is a Eurocentric concept, grouping together many diverse peoples, fauna, flora and geographic zones, and whose border with Europe is quite arbitrary. Another, even more etymologically suspect, ethnonym is, of course, the term Indian (and its derivatives ‘Amerindian’, ‘Red Indian’ and ‘American Indian’) when applied to the indigenous peoples of the Americas.

[9] The main substantive differences between the rival taxonomies of different racial theorists reflect the perennial divide between lumpers and splitters. There is also the question of precisely where the line is to be drawn between one race and another in clinal variation between groups, and whether a hybrid or clinal population sometimes constitutes a separate race in and of itself.

[10] For example, in Nicholas Wade’s A Troublesome Inheritance, this history of the misuse of the race concept comes in Chapter Two, titled ‘Perversions of Science’; in Philippe Rushton’s Race, Evolution and Behavior: A Life History Perspective (which I have reviewed here, here and here), this historical account is postponed until Chapter Five, titled ‘Race and Racism in History’; in Jon Entine’s Taboo: Why Black Athletes Dominate Sports and Why We’re Afraid to Talk About it, it is delayed until Chapter Nine, titled ‘The Origins of Race Science’; whereas, in Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here), these opening chapters discussing the history of racial science expand to fill almost half the entire book.

[11] Indeed, somewhat disconcertingly, even Hitler’s Mein Kampf is taken seriously by Baker, the latter acknowledging that “the early part of [Hitler’s] chapter dealing with the ethnic problem is quite well-written and not uninteresting” (p59) – or perhaps this is only to damn with faint praise.

[12] Thus, at the time Stoddard authored The Rising Tide of Color Against White World-Supremacy in 1920, with a large proportion of the world under the control of European colonial empires, a contemporary observer might be forgiven for assuming that what Stoddard called White World-Supremacy was a stable, long-term, if not permanent, arrangement. However, Stoddard accurately predicted the demographic transformation of the West, what some have termed The Great Replacement or A Third Demographic Transition, almost a century before this process began to become a reality.

[13] The exact connotations of this passage may depend on the translation. Thus, other translators render the passage that Manheim translates as “The mightiest counterpart to the Aryan is represented by the Jew” instead as “The Jew offers the most striking contrast to the Aryan”, which alternative translation has rather different, and less flattering, connotations, given that Hitler famously extols the Aryan as the master race. The rest of the passage quoted remains, when taken in isolation, broadly flattering, however.

[14] To clarify, both Boas and Montagu are briefly mentioned in later chapters. For example, Boas’s now largely discredited work on cranial plasticity is cited, discussed and accepted at face-value by Baker at the end of his chapter on ‘Physical Differences Between the Ethnic Taxa of Man: Introductory Remarks’ (p201-2). However, this is outside of Baker’s chapters on “The Historical Background”, and therefore Boas’s role in (allegedly) shaping the contemporary consensus of race denial is entirely unexplored by Baker. For discussion on this topic, see Carl Degler’s In Search of Human Nature; see also Chapter Two of Kevin Macdonald’s The Culture of Critique (which I have reviewed here) and Chapter Three of Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here).

[15] Thus, there was no new scientific discovery that presaged or justified the abandonment of biological race as an important causal factor in the social and behavioural sciences. Later scientific developments, notably in genetics, were certainly later co-opted in support of this view. However, there is no coincidence in time between these two developments. Therefore, whatever the true origins of the theory of racial egalitarianism, whether one attributes it to horror at the misuse of race science by the Nazi regime, or the activism of certain influential social scientists such as Boas and Montagu, one thing is certain – namely, the abandonment, or at least increasing de-emphasis, of the race category in the social and behavioural sciences was originally motivated by political rather than scientific considerations. See Carl Degler’s In Search of Human Nature; see also Chapter 2 of Kevin Macdonald’s Culture of Critique (which I have reviewed here) and Chapter Three of Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here).

[16] That OUP gave up the copyright is, of course, to be welcomed, since it means, rather than gathering dust on the shelves of university libraries, while the few remaining copies still in circulation from the first printing rise in value, it has enabled certain dissident publishing houses to release new editions of this now classic work.

[17] Baker suggests that, at the time he wrote, behavioural differences between pygmy chimpanzees and other chimpanzees had yet to be demonstrated (p113-4). Today, however, pygmy chimpanzees are known to differ behaviourally from other chimps, being, among other differences, less prone to intra-specific aggression and more highly sexed. However, they are now usually referred to as bonobos rather than pygmy chimpanzees, and are also recognized as a separate species from other chimpanzees, rather than a mere subspecies.

[18] This is, at least, how Baker describes this species complex and how it was traditionally understood. Researching the matter on the internet, however, suggests whether this species complex represents a true ring species is a matter of some dispute (e.g. Liebers et al 2006).

[19] In cases of matings between sheep and goats that result in offspring, the resulting offspring are usually, if not always, infertile. Moreover, according to the wikipedia page on the topic, the question of whether sheep and goats can ever successfully interbreed is more complex than Baker suggests.

[20] I have found no evidence to support the assertion in some of the older nineteenth-century literature that women of ‘lower races’ have difficulty birthing offspring fathered by European men, owing to the greater brain- and head-size of European infants. Summarizing this view, contemporary Russian racialist Vladimir Avdeyev, in his impressively encyclopaedic, if extremely racist and occasionally slightly bonkers book, Raciology: The Science of the Hereditary Traits of Peoples, claims:

“The form of the skull of a child is directly connected with the characteristics of the structure of the mother’s pelvis—they should correspond to each other in the goal of eliminating death in childbirth. The mixing of the races unavoidably leads to this, because the structure of the pelvis of a mother of a different race does not correspond to the shape of the head of [the] mixed infant; that leads to complications during childbirth” (Raciology: p157).

Thus, Avdeyev claims, owing to race differences in brain size:

“Women on lower races [sic] endure births very easily, sometimes even without any pain, and only in highly rare cases do they die from childbirth. But this can never be said of women of lower races [sic] who birth children of white fathers” (Raciology: p157).

Thus, he quotes an early-twentieth century Russian race theorist as claiming:

“American Indian women… often die in childbirth from pregnancies with a child of mixed blood from a white father, whereas pure-blooded children within them are easily born. Many Indian women know well the dangers [associated with] a pregnancy from a white man, and therefore, they prefer a timely elimination of the consequence of cross-breeding by means of fetal expulsion, in avoidance of it” (Raciology: p157-8).

This, interestingly, accords with the claim of infamous late-twentieth century race theorist J Philippe Rushton, in the ‘Preface to the Third Edition’ of his book Race, Evolution and Behavior (which I have reviewed here, here and here), that, as compared to whites and Asians, blacks have narrower hips, giving them “a more efficient stride”, which provides an advantage in many athletic events, and that:

“The reason Whites and East Asians have wider hips than Blacks, and so make poorer runners, is because they give birth to larger brained babies” (Race, Evolution and Behavior: p11-12).

Thus, Rushton explains elsewhere:

“Increasing brain size [over the course of hominid evolution] was associated with a broadening of the pelvis. The broader pelvis provides a wider birth canal, which in turn allows for delivery of larger-brained offspring” (Odyssey: My Life as a Controversial Evolutionary Psychologist: p284-5).

However, contrary to the claim of Avdeyev, I find no support in contemporary delivery-room data for the claim that women from so-called ‘lower races’ experience greater birth complications, and mortality rates, when birthing offspring fathered by European males due to the larger brain- and head-size of the latter.
On the contrary, it is differences in overall body-size, not brain-size, that seem to be the key factor, with East Asian women having greater difficulties birthing offspring fathered by European males because of the smaller frames of East Asian women, even though East Asians have brains as large as or larger than those of Europeans (Nystrom et al 2008).
Neither is it true that, where interracial mating has not occurred, on account of the small brain-size of their babies:

“Women on lower races [sic] endure births very easily, sometimes even without any pain, and only in highly rare cases do they die from childbirth” (Raciology: p157).

On the contrary, data from the USA actually seems to indicate a somewhat higher rate of caesarean delivery among African-American women as compared to white American women (Braveman et al 1995; Edmonds et al 2013; Getahun et al 2009; Valdes 2020).

[21] Any selection would presumably be against the European-derived component of the African-American genome, since African-Americans are of predominantly black African ancestry. It is therefore possible that selection against the (possibly) deleterious European component of their genome was offset by other advantages possibly accruing to African-Americans with increased European ancestry (e.g. the increased intelligence supposedly associated with increased levels of European ancestry, or the social benefits formerly associated with lighter skin tone or a more Caucasoid physiognomy).
Examining the effects of interracial hybridization on traits other than fertility yields mixed results. Thus, one study reported what the authors interpreted as a hybrid vigour effect on the g-factor of general intelligence among the offspring of white-Asian unions in Hawaii, as compared to the offspring of same-race couples matched for educational and occupational levels (Nagoshi & Johnson 1986). Similarly, Lewis (2010) attributed the higher attractiveness ratings accorded to the faces of mixed-race people to heterosis. Meanwhile, another study found that height was positively correlated with the distance between the birthplaces of one’s parents, itself presumably an (inverse) correlate of their relatedness (Koziel et al 2011).
On the other hand, behavioural geneticist Glayde Whitney suggests that hybrid incompatibility may explain the worse health outcomes, and shorter average life-spans, of African Americans as compared to whites in the contemporary USA, owing to the former’s mixed African and European ancestry (Whitney 1999). One specific negative health outcome for some African-Americans resulting from a history of racial admixture is also suggested by Helgadottir et al (2006). On the other hand, the disproportionate success of African-Americans in professional athletics hardly seems indicative of impaired health.
It is notable that, whereas recent studies tend to emphasize the (supposed) positive genetic effects resulting from interracial unions, the older literature tends to focus on the (supposed) negative effects of interracial hybridization (see Frost 2020). No doubt this reflects the differing Zeitgeister of the two ages (Provine 1976; Khan 2011c).
At any rate, even assuming that it can be shown that mixed-race people either enjoy improved health outcomes as compared to monoracial people as a consequence of hybrid vigour, or impaired health outcomes due to outbreeding depression, this is not generally regarded as directly relevant to the question of whether the different human races are to be regarded as separate species. As Darwin wrote:

“The inferior vitality of mulattoes is spoken of in a trustworthy work as a well-known phenomenon; but this is a different consideration from their lessened fertility; and can hardly be advanced as a proof of the specific distinctness of the parent races… The common mule, so notorious for long life and vigour, and yet so sterile, shews how little necessary connection there is in hybrids between lessened fertility and vitality” (The Descent of Man).

[22] To clarify, some other domestic species have also been described as having self-domesticated. In particular, a currently popular theory of dog domestication holds that, rather than humans adopting and domesticating wolves, wolves effectively domesticated themselves by scavenging around human campfires, with the tamer, less aggressive and less fearful wolves enjoying greater success in this endeavour, and hence coming to predominate.
However, although, in a sense, a form of self-domestication, this process would still have involved wolves habituating themselves to, and becoming tolerated by, and tolerant to, a different species to themselves, namely humans. In contrast, theories of human self-domestication involve humans interacting only with members of the same species, namely other humans. 

[23] Interestingly, while languages and cultures vary in the number of colours that they recognise and have words for, both the ordering of the colours recognised, and the approximate boundaries between different colours, seem to be cross-culturally universal. Thus, some languages have only two colour terms, which are always equivalent to ‘light’ and ‘dark’. Then, if a third colour term is used, it is always equivalent to ‘red’. Next comes either ‘green’ or ‘yellow’. Experimental attempts to teach colour terms that do not match these familiar colour categories show that individuals learn such terms much less readily than they learn the familiar colour terms recognised in other languages, even if their own language lacks these latter terms. This, of course, suggests that our colour perception is both innately programmed into the mind and cross-culturally universal (see Berlin & Kay, Basic Color Terms: Their Universality and Evolution).

[24] Indeed, as I discuss later, with respect to what Baker calls subraces, we may already have long previously passed this point, at least in Europe and North America. While morphological differences certainly continue to exist, at the aggregate, statistical level, between populations from different regions of Europe, there is such overlap, such a great degree of variation even within families, and the differences are so fluid, gradual and continuous, that I suspect such terms as the Nordic race, Alpine race, Mediterranid race and Dinaric race have likely outlived whatever usefulness they may once have had and are best retired. The differences are now best viewed as continuous and clinal.

[25] While Ethiopians and other populations from the Horn of Africa are indeed a hybrid or clinal population, representing an intermediate position between Caucasians and other black Africans, Baker perhaps goes too far in claiming:

“Aethiopids (‘Eastern Hamites’ or ‘Erythriotes’) of Ethiopia and Somaliland are an essentially Europid subrace with some Negrid admixture” (p225).

Thus, summarizing the findings of one study from the late 1990s, Jon Entine reports:

“Ethiopians [represent] a genetic mixture of about 60 percent African and 40 percent Caucasian” (Taboo: Why Black Athletes Dominate Sports And Why We’re Afraid To Talk About It: p115).

The study upon which Entine based this conclusion looked only at mitochondrial DNA and Y chromosome data. More recent studies have incorporated autosomal DNA as well. However, while eschewing terms such as ‘Caucasian’, such studies broadly confirm that there exist substantial genetic affinities between populations from the Horn of Africa and the Middle East (e.g. Ali et al 2020; Khan 2011a; Khan 2011b; Hodgson 2014).

[26] Thus, Lewontin famously showed that, when looking at individual genetic loci, most variation is within a single population, rather than between populations or between races (Lewontin 1972). However, when looking at phenotypic traits that are caused by polygenes, it is easy to see that there are many such traits in which the variation within the group does not dwarf that between groups – for example, differences in skin colour as between Negroes and Nordics, or differences in stature between Pygmies and even neighbouring tribes of Bantu. This is a point emphasized by Sarich and Miele in Race: The Reality of Human Differences (which I have reviewed here).
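The statistical logic here is easy to illustrate. In the minimal simulation below (the allele frequencies and locus count are purely hypothetical, chosen only for illustration), a per-locus difference between groups that is trivial relative to within-group variation accumulates, across many loci contributing additively to a trait, into a between-group gap of several standard deviations: the mean difference grows in proportion to the number of loci, while the within-group spread grows only with its square root.

```python
import numpy as np

# A minimal sketch, assuming made-up allele frequencies, of why small
# per-locus differences between groups can add up to a large difference
# in a polygenic trait, even though at any single locus the variation
# within groups dwarfs that between them.

rng = np.random.default_rng(0)
n_loci = 500          # number of additive loci contributing to the trait
n_people = 10_000     # individuals sampled per group
p_a, p_b = 0.50, 0.55 # hypothetical allele frequencies in groups A and B

# Each individual carries two alleles per locus; the trait is simply the
# total count of '1' alleles across all loci (equal, additive effects).
trait_a = rng.binomial(2, p_a, size=(n_people, n_loci)).sum(axis=1)
trait_b = rng.binomial(2, p_b, size=(n_people, n_loci)).sum(axis=1)

# At a single locus the standardized gap is only ~0.14 SD; across 500
# loci the mean gap grows with n_loci while the within-group SD grows
# only with sqrt(n_loci), so the standardized gap is ~sqrt(500) larger.
gap_sd = (trait_b.mean() - trait_a.mean()) / trait_a.std()
print(f"between-group gap: {gap_sd:.1f} within-group SDs")  # roughly 3 SDs
```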

[27] In addition to discussing morphological differences between races, Baker also discusses differences in scent (p170-7). This is a particularly emotive issue, given the negative connotations associated with smelling bad. However, given the biochemical differences between races, and the fact that even individuals of the same race, even the same family, are distinguishable by scent, it is inevitable that persons of different races will indeed differ in scent, and, given the apparent universality of ethnocentrism and in-group preference, unsurprising that people would generally prefer the scent of their own group. There is substantial anecdotal evidence that this is indeed the case.
Baker reports that, in general, East Asians have less strong body odour, whereas both Caucasoids and blacks have stronger body odour. This is partly explained by the relative prevalence of dry and wet earwax, which is associated with body odour, varies by population, and is one of the few easily observable phenotypic traits in humans that is determined by simple Mendelian inheritance (see McDonald, Myths of Human Genetics).
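For readers unfamiliar with the term, ‘simple Mendelian inheritance’ here means a single locus with a dominant and a recessive allele, dry earwax being the recessive phenotype. A minimal sketch (the allele labels ‘A’ and ‘a’ are mine, purely for illustration) reproduces the classic 3:1 ratio expected among the offspring of two carriers of the dry-earwax allele:

```python
from itertools import product

# Dry earwax is the recessive phenotype: two copies of the recessive
# allele 'a' give dry earwax; one or more dominant 'A' alleles give wet.
# (Assumption for illustration: a single-locus Mendelian trait, as the
# text describes; the allele labels are arbitrary.)
def phenotype(genotype: str) -> str:
    return "dry" if genotype == "aa" else "wet"

# Cross two heterozygous (Aa) parents and tally offspring phenotypes.
offspring = ["".join(sorted(pair)) for pair in product("Aa", "Aa")]
counts = {p: sum(phenotype(g) == p for g in offspring) for p in ("wet", "dry")}
print(counts)  # {'wet': 3, 'dry': 1} – the classic 3:1 Mendelian ratio
```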
Intriguingly, Nicholas Wade speculates that dry earwax, which is associated with less strong body odour, may have evolved through sexual selection in colder climates, where, due to the cold, more time is spent indoors in enclosed spaces, body odour is hence more readily detectable, and producing less scent may therefore have conferred a reproductive advantage (A Troublesome Inheritance: p91). This may explain some of the variation in the prevalence of dry and wet earwax respectively, with dry earwax predominating only in East Asia, but also being found, albeit to a lesser degree, among Northern Europeans.
On the other hand, although populations inhabiting colder climates may spend more time indoors, populations inhabiting warmer tropical climates might be expected to sweat more due to the greater heat, and hence build up greater body odour, which might be expected to lead to greater sexual selection against body odour among tropical populations.

[28] A few exceptions include where Baker discusses the small but apparently statistically significant differences between the skulls of Celts and Anglo-Saxons (p257), and where he mentions statistically significant differences between ancient Egyptian skulls and those of Negroes (p518).

[29] Interestingly, in this quotation, Reich neglects to mention either North Africa or South Asia. The omission of the former is perhaps an oversight, since, while to some extent genetically distinct, and also having some sub-Saharan African admixture, the peoples of North Africa are genetically and racially continuous with those of Europe and especially the Middle East.
His omission of South Asia, on the other hand, may perhaps be deliberate, since, although Baker seemingly classes even South Indian Dravidians as unambiguously Europid/Caucasoid, the situation here is more complex, and Reich himself refers to “a sharp gradient of change in central Asia before another region of homogeneity is reached in East Asia” (Who We Are and How We Got Here: p93).
Similarly, Nicholas Wade reports that several Central Asian ethnicities, such as Pathans, Hazara and Uigurs, are “of mixed European and East Asian ancestry” (A Troublesome Inheritance: p98).
Moreover, Wade reports that, in one more fine-grained and detailed analysis that sampled more genetic markers, two additional clusters emerge, one for the people of Central and South Asia, and another for those of the Middle East (Ibid.: p99-100).

[30] Baker does, however, acknowledge that:

“Some Jewish communities scattered over the world are Jews simply in the sense that they adhere to a particular religion (in various forms); they are not definable on an ethnic basis” (p246).

Here, Baker has in mind various communities that are neither Ashkenazi nor Sephardic (or Mizrahi), such as the Beta Israel of Ethiopia, the Lemba of Southern Africa and the Kaifeng Jews resident in China. Although Baker speaks of “communities”, the same is obviously true of recent converts to Judaism.

[31] Thus, of the infamous Khazar hypothesis, now almost wholly discredited by genetic data, but still popular among some anti-Zionists, because it denies the historical connection between (most) contemporary Jews and the land of Israel, and among Christian anti-Semites, because it denies that the Ashkenazim are indeed the ‘chosen people’ of the Old Testament, Baker writes:

“It is clear they [the Khazars] were not related, except by religion, to any modern group of Jews” (p34).

[32] Baker thus puts the intellectual achievements of the Ashkenazim in the broader context of other groups within this same subrace, including the Assyrians, Hittites and indeed Armenians themselves. Thus, he concludes:

“The contribution of the Armenid subrace to civilization will bear comparison with that of any other” (p246-7).

Some recent genetic studies have indeed suggested affinities between Ashkenazim and Armenian populations (Nebel et al 2001; Elhaik 2013).

[33] In Baker’s defence vis a vis any suggestion of anti-Semitism, the illustration in question is actually taken from the work of a Jewish anthropologist, Joseph Jacobs (Jacobs 1886). Jacobs’ findings on this topic are summarized in this entry in the 1906 Jewish Encyclopedia, entitled ‘Nose’, authored by Jacobs and Maurice Fishberg, another Jewish anthropologist, which reports that the ‘hook nose’ stereotypically associated with Jewish people is actually found in only a minority of European Jews (Jacobs & Fishberg 1906).
However, such noses do seem to be more common among Jews than among at least some of the host populations among whom they reside. The Wikipedia article on Jewish noses cites this same entry from the Jewish Encyclopedia as suggesting that the prevalence of this shape of nose is actually no greater among Jews than among populations from the Mediterranean region (hence the supposed similar shape of so-called Roman noses). However, the Jewish Encyclopedia entry itself does not actually seem to say any such thing. Instead, it reports only that:

“[As compared with] non-Jews in Russia and Galicia… aquiline and hook-noses are somewhat more frequently met with among the Jews” (Jacobs & Fishberg 1906).

The entry also reports that, measured in terms of their nasal index, “Jewish noses… are mostly leptorhine, or narrow-nosed” (Jacobs & Fishberg 1906). Similarly, Joseph Jacobs reports in ‘On the Racial Characteristics of Modern Jews’:

“Weisbach’s nineteen Jews vied with the Patagonians in possessing the longest nose (71 mm.) of all the nineteen races examined by him … while they had at the same time the narrowest noses (34 mm.)” (Jacobs 1886).
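To make sense of the ‘nasal index’ referred to above: on the usual anthropometric convention, it is the breadth of the nose expressed as a percentage of its height, with values below about 70 (for measurements on the living) counting as leptorrhine, or narrow-nosed. Taking Weisbach’s figures at face value, and assuming his 71 mm nose ‘length’ corresponds to nasal height (my assumption, not the source’s), the arithmetic works out as:

$$\text{nasal index} = \frac{\text{nasal breadth}}{\text{nasal height}} \times 100 = \frac{34}{71} \times 100 \approx 48,$$

which is indeed markedly leptorrhine.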

This data, suggesting that Jewish noses are indeed long but also very narrow, contradicts Baker’s claim that the characteristic Ashkenazi nose is “large in all dimensions [emphasis added]” (p239). However, such a nose shape is consistent with Jews having evolved in an arid desert environment, such as the Negev or other nearby deserts, or in the Judean mountains, where the earliest distinctively Jewish settlements are thought to have developed. Thus, anthropologist Stephen Molnar writes:

“Among desert and mountain peoples the narrow nose is the predominant form” (Human Variation: Races, Types and Ethnic Groups: p196).

As Baker himself observes, the nose width characteristic of a population correlates with both the temperature and humidity of the environment in which they evolved (p310-311). This is known as Thomson’s nose rule and is thought to reflect the need to warm and moisturize air before it enters the lungs in cold and dry conditions respectively.
However, interestingly, Baker reports that the correlations are much weaker among the indigenous populations of the American continent (p311). Presumably this is because humans only relatively recently populated that continent, and therefore have not yet had sufficient time to become wholly adapted to the different environments in which they find themselves.
A further factor affecting nose width is jaw size. This might explain why Australian Aboriginals have extremely wide noses despite much of the Australian landmass being dry and arid, since Aboriginals also have very large jaws (Human Variation: Races, Types and Ethnic Groups: p196).
However, it is fallacious to believe that most Australian Aborigines lived in the arid Outback prior to the arrival of Europeans and their resulting displacement. In fact, prior to the arrival of Europeans, Aboriginals were probably concentrated in the same more fertile areas where most white European settlers are today themselves concentrated, since the areas most conducive to agriculture and settlement today also tended to provide more game and vegetation for foraging groups. Aboriginals are associated with the Outback today only because this is the only part of Australia in which they have not been displaced by white settlers, precisely because it is so arid and inhospitable.

[34] Hans Eysenck refers in his autobiography to a study supposedly conducted by one of his PhD students that he claims demonstrated statistically that people, both Jewish and Gentile, actually perform at no better than chance when attempting to distinguish Jews from non-Jews, even after extended interaction with one another (Rebel with a Cause: p35). However, since he does not cite a source or reference for this study, it was presumably unpublished, and must be interpreted with caution.
Eysenck himself, incidentally, was of closeted half-Jewish ancestry, practising what anti-Semite Kevin MacDonald calls ‘crypsis’, which may be taken to suggest that he was not entirely disinterested with regard to the question of the extent to which Jews can be recognized by sight alone.
The only other study I have found addressing the easily researchable, if politically incorrect, question of whether people can distinguish Jews from non-Jews on the basis of phenotypic differences is Andrzejewski et al (2009).

[35] This is one of the few occasions in the book where I recall Baker actually mentioning whether the morphological differences between racial groupings that he describes are statistically significant.

[36] Interestingly, Stephen Oppenheimer, in his book Origins of the British, posits a link between the so-called Celtic regions of the British Isles and populations from one particular area of the Mediterranean, namely the Iberian peninsula, especially the Basques, who, speaking a non-Indo-European language with no known relationship to any other language in the world, are probably the descendants of the original pre-Indo-European inhabitants of the Iberian peninsula (see Oppenheimer 2006; see also Blood of the Isles).
This seemingly corroborates the otherwise implausible mythological account of the peopling of Ireland provided in Lebor Gabála Érenn, which claims that the last major migration to, and invasion of, Ireland, from which movement of people the modern Irish primarily descend, arrived from Spain in the form of the Milesians. This mythological account may derive from the similarity between the Greek and Latin words for the two regions, namely Iberia and Hibernia respectively, and between the words Gael and Galicia, and the belief of some ancient Roman writers, notably Orosius and Tacitus, that Ireland lay midway between Britain and Spain (Carey 2001).
However, while some early population genetic studies were indeed interpreted to suggest a connection between populations from Iberia and the British Isles, this interpretation seems to have been largely discredited by more recent research.

[37] Actually, the position with regard to hair and eye colour is rather more complicated. On the one hand, hair colour does appear to be darkest in the ostensibly ‘Celtic’ regions of the British Isles. Thus, Carleton Coon in his 1939 book, The Races of Europe, reports that, with regard to hair colour:

“England emerges as the lightest haired of the four major divisions of the British Isles, and Wales as the darkest” (The Races of Europe: p385).

Likewise, Coon reports that, in Scotland:

“Jet black hair is commoner in the western highlands than elsewhere, and is statistically correlated with the greatest survival of Gaelic speech” (The Races of Europe: p387).

However, patterns of eye colour diverge from and complicate this picture. Thus, Coon reports:

“Whereas the British are on the whole lighter-haired than the Irish, they are at the same time darker-eyed” (The Races of Europe: p388).

Indeed, contrary to the notion of the Irish as a people with substantial Mediterranean racial affinities, Coon claims:

“There is probably no population of equal size in the world which is lighter eyed, and bluer eyed, than the Irish” (The Races of Europe: p381).

On the other hand, the Welsh, in addition to being darker-haired than the English, are also darker-eyed, with a particularly high prevalence of dark eyes being found in certain more isolated regions of Wales (The Races of Europe: p389).
Interestingly, as far back as the time of the Roman Empire, the Silures, a Brittonic tribe occupying most of South-East Wales and known for their fierce resistance to the Roman conquest, were described by Roman writers Tacitus and Jordanes (the Romans themselves being, of course, a Mediterranean people) as “swarthy” in appearance and as possessing black curly hair.
The same is true of the, also until recently Celtic-speaking, Cornish people, who are, Coon reports, “the darkest eyed of the English” (The Races of Europe: p389). Dark hair is also more common in Cornwall (The Races of Europe: p386). Cornwall is, Coon therefore reports, “the darkest county in England” (The Races of Europe: p396). (However, with the historically unprecedented mass migration of non-whites into the UK in the latter half of the twentieth century and beyond, this is, of course, no doubt no longer true.)
Yet another complicating factor is the prevalence of red hair, which is also associated with the ‘Celtic’ regions of the British Isles, but is hardly a Mediterranean character, and which, like dark hair, reaches its highest prevalence in Wales (The Races of Europe: p385). Baker, for his part, does not dwell on this point, but does acknowledge that “there is rather a high proportion of people with red hair in Wales”, something for which, he claims, “no satisfactory explanation… has been provided” (p265).
However, Baker is skeptical regarding the supposed association of the ancient Celts with ginger or auburn hair. He traces this belief to a single casual remark of Tacitus. However, he suggests that the Latin word used, ‘rutilae’, is actually better translated as ‘red (inclining to golden yellow)’, and was, he observes, also used to refer to the Golden Fleece and to gold coinage (p257).

[38] The genetic continuity of the British people is, for example, a major theme of Stephen Oppenheimer’s The Origins of the British (see also Oppenheimer 2006). It is also a major conclusion of Bryan Sykes’s Blood of the Isles:

“We are an ancient people, and though the [British] Isles has been the target of invasion and opposed settlement from abroad ever since Julius Caesar first stepped onto the shingle shores of Kent, these have barely scratched the topsoil of our deep rooted ancestry” (Blood of the Isles: p338).

However, population genetics is an extremely fast-moving science, and recent research has revised this conclusion, suggesting a replacement of around 90% of the population of the British Isles, albeit in very ancient times (around 2000 BCE), associated with the spread of the Bell Beaker culture and Steppe-related ancestry, presumably deriving from the Indo-European expansion (Olalde et al 2018). Also, recent population genetic studies suggest that the Anglo-Saxons actually made a greater genetic contribution to the ancestry of the English, especially those from Eastern England, than formerly thought (e.g. Martiniano et al 2016; Schiffels et al 2016).

[39] However, in The Origins of the British, Stephen Oppenheimer proposes an alternative route of entry and point of initial disembarkation, suggesting that the people whom we today habitually refer to as ‘Celts’ arrived, not from Central Europe as traditionally thought, but rather up the Atlantic seaboard from the west coasts of France and Iberia. This is consistent with some archaeological evidence (e.g. the distribution of passage graves) suggesting longstanding trade and cultural links up the Atlantic seaboard from the Mediterranean region, through the Basque country, into Brittany, Cornwall, Wales and Ireland. This would also provide an explanation for what Baker claims is a Mediterranid component in the ancestry of the Welsh and Irish, as supposedly evidenced in the distribution of blood groups and the prevalence of dark hair and eye colours as recorded by Beddoe.

[40] Interestingly, in addition to gracilization having occurred least, if at all, in Fuegians and Aboriginals, Wade also reports that:

“Gracilization of the skull is most pronounced in sub-Saharan Africans and East Asians, with Europeans retaining considerable robustness” (A Troublesome Inheritance: p167).

This is an exception to what Steve Sailer calls ‘Rushton’s Rule of Three’ (see here) and, given that Wade associates gracilization with domestication and pacification (as well as with neoteny), suggests that, at least by this criterion, Europeans evince less evidence of pacification and domestication than do black Africans. This is perhaps a surprising finding given that domestication and pacification among humans are usually associated with the rise of civilization, yet, according to Baker himself, civilization was largely absent from sub-Saharan Africa prior to the arrival of Europeans (see discussion above).

[41] Actually, the meaning of the two terms is subtly different. ‘Paedomorphy’ refers to the retention of juvenile or infantile traits into adulthood. ‘Neoteny’ refers to one particular process whereby this end-result is achieved, namely the slowing of some aspects of physiological development. However, ‘paedomorphy’ can also result from another process, namely ‘progenesis’, where, instead, some aspects of development are actually sped up, such that the developing organism reaches sexual maturity earlier, before reaching full maturity in other respects. In humans, most examples of paedomorphy result from the former process, namely ‘neoteny’.

[42] Leading mid-twentieth century physical anthropologist Carleton Coon, writing a few years before Baker, denies that this trait is universal among Bushmen, writing:

According to early accounts, all unmixed Bushman males have penises which protrude forward as in infants, but this is not always true” (The Living Races of Men: p112).

Politically correct modern scholarship tends to dismiss the claim entirely as a nineteenth-century racialist myth, rooted in stereotypes of native Africans as animalistic, highly-sexed and hence in a state of permanent sexual arousal, as might be suggested by a semi-erect penis (e.g. Gordon 1998). On the other hand, the photographic evidence provided by Coon and other authors shows that the trait is at least found among some Bushmen.
Coon, interestingly, alludes to another supposed curiosity of San Bushman genitalia, claiming that:

“Another oddity of Bushmen is monorchy, or the descent of only one testicle, but this also is not universal among Bushman males” (The Living Races of Men: p113).

An obvious problem with these claims is that, as with the supposed elongated labia of San women (discussed above), verification, or falsification, requires intimate examination, to which subjects might object.
At any rate, the alleged paedomorphic penises of San males contrast with those of neighbouring Negroids, at least according to popular stereotype. For his part, Baker accepts the stereotype that black males have large penes. However, he cites no quantitative data, remarking only:

“That Negrids have large penes is sometimes questioned, but those who doubt it are likely to change their minds if they will look at photographs 8, 9, 20, 23, 29, and 37 in Bernatzig’s excellently illustrated book ‘Zwischen Weissem Nil und Belgisch-Kongo’. They represent naked male Nilotids and appear convincing” (p331).

But six photos, presumably representing just six males, hardly represent a convincing sample size. (I found several of the numbered pictures online by searching for the book’s title, and each showed only a single male.) Interestingly, Baker is rightly skeptical regarding claims of differences in genitalia between European subraces, given the intimate nature of the measurements required, writing:

“It is difficult to obtain measurements of these parts of the body and statements about subracial differences in them must not be accepted without confirmation” (p219).

[43] Interestingly, in their book Big Brain: The Origins and Future of Human Intelligence, neuroscientists Gary Lynch and Richard Granger devote considerable discussion to a supposedly extinct species of hominid, termed Boskop Man, or alternatively the Boskopoid race, who, they claim, possessed, as compared to other hominid species (ourselves included), extremely large brains, paedomorphic traits and some physical resemblance to living San Bushmen. However, anthropologist-blogger John Hawks has critiqued this claim in a blog post where he argues that the Boskops are no longer recognized as a distinct species (or subspecies) of hominid and also that the cranial capacity of those remains formerly identified as Boskop, though certainly large, has nevertheless been exaggerated. In this, Hawks cites Singer (1958), who argues that those skulls identified as ‘Boskops’ should instead be classified as Khoisan, from whom they were formerly distinguished solely on the basis of their brain size. However, as Baker suggests, living San Bushmen have very small brains as compared to other extant human races, at least according to data cited by Richard Lynn in his book, Race Differences in Intelligence (reviewed here).

[44] Indeed, the claim that East Asians are especially paedomorphic or neotenized as compared to other races is not restricted to researchers in the racialist or hereditarian tradition. On the contrary, anthropologist Ashley Montagu, though an early pioneer in race denial, nevertheless conceded at least one racial difference, namely:

“The Mongoloid skull generally, whether Chinese or Japanese, has been rather more neotenized than the Caucasoid or European” (Growing Young: p18).

Similarly, no lesser leftist champion of racial egalitarianism than infamous scientific fraud and charlatan Stephen Jay Gould conceded:

“It is hard to deny that Mongoloids… are the most paedomorphic of human groups” (Ontogeny and Phylogeny: p134).

Interestingly, Gould made this concession in the context of arguing against the notion that the greater paedomorphosis of Caucasoids as compared to Negroids was indicative of the intellectual superiority of the former. Yet, since there is now widespread agreement among hereditarians that East Asians (but curiously not South-East Asians) score rather higher in IQ tests than do Caucasoids, his observations are actually supportive of both the link between paedomorphosis and encephalization and the hereditarian hypothesis with respect to race differences in intelligence.
Perhaps recognizing this, in a later book Gould, while still acknowledging that “Orientals, not whites, are clearly the most neotenous of human races”, rather contradicted himself just a couple of sentences later by also asserting:

“The whole enterprise of ranking groups by degree of neoteny is fundamentally unjustified” (Mismeasure of Man: p121).

[45] Thus, anthropologist Carleton Coon, in Racial Adaptations: A Study of the Origins, Nature, and Significance of Racial Variations in Humans, does not even consider sexual selection as an explanation for the evolution of Khoisan steatopygia, despite its obviously dimorphic presentation. Instead, he proposes:

“[Bushmen’s] famous steatopygia (fat deposits that contain mostly fibrous tissue) may be a hedge against scarce nutrients and drought during pregnancy and lactation” (Racial Adaptations: p105).

[46] Others, however, notably Desmond Morris in The Naked Ape (which I have reviewed here), have implicated sexual selection in the evolution of the human female’s permanent breasts. The two hypotheses are not, however, mutually exclusive. Indeed, they may be complementary. Thus, Nancy Etcoff in Survival of the Prettiest (which I have reviewed here) proposes that breasts may be perceived as attractive by men precisely because they “honestly advertise the presence of the fat reserves needed to sustain a pregnancy” (Survival of the Prettiest: p187). By analogy, the same could, of course, also be true of fatty buttocks.

[47] Thus, Baker demands rhetorically:

“Who could conceivably fail to distinguish between a Sanid and a Europid, or between an Eskimid [Eskimo] and a Negritid [Negrito], or between a Bambutid [African Pygmy] or an Australid [Australian Aboriginal]?”

[48] Baker does discuss the performance of East Asians on IQ tests, but his conclusions are ambivalent (p490-492). He concludes, for example, that “the IQs of Mongolid [i.e. East Asian] children in North America are generally found to be about the same as those of Europids” (p490). Yet recent studies have revealed a slight advantage for East Asians in general intelligence. Baker also mentions the relatively higher scores of East Asians on tests of spatio-visual ability, as compared to verbal ability. However, he attributes this to their lack of proficiency in the language of their host culture, as he relied mostly on American studies of first- and second-generation immigrants, or the descendants of immigrants, who were often raised in non-English-speaking homes, and hence only learnt English as a second language (p490). However, recent studies suggest that East Asians score relatively lower on verbal ability, as compared to their scores on spatio-visual ability, even when tested in a language in which they are wholly proficient (see Race Differences in Intelligence: reviewed here).

[49] Rushton and Jensen (2005) favour the hereditarian hypothesis vis a vis race differences in intelligence, and their presentation of the evidence is biased somewhat in this direction. Nisbett’s rejoinder therefore provides a good balance, being very much biased in the opposite direction. Mackintosh’s chapter is perhaps more balanced, but he still clearly favours an environmental explanation with regard to population differences in intelligence, if not with regard to individual differences. My own post on the topic is, of course, naturally enough, the most thorough and balanced treatment of this topic.

[50] Indeed, in proposing tenable environmental-geographical explanations for the rise and fall of civilizations in different parts of the world, Jared Diamond’s Guns, Germs and Steel represents a substantial challenge to Baker’s conclusions in this chapter, and the two books are well worth reading together. Another recent work addressing the question of why civilizations rise and fall among different races and peoples, but reaching less politically-correct conclusions, is Michael Hart’s Understanding Human History, which seems to have been conceived as a rejoinder to Diamond, drawing heavily upon, but also criticizing, the former’s work.

[51] Interestingly, Baker quotes Toynbee as suggesting that:

“An ‘identifying mark’ (but not a definition) [of] civilization might be equated with ‘a state of society in which there is a minority of the population, however small, that is free from the task, not merely of producing food, but of engaging in any other form of economic activities – e.g. industry or trade’” (p508).

Yet a Marxist would view this, not as a marker of civilization, but rather of exploitation. Those free from engaging in economic activity are, from a Marxist perspective, clearly extracting surplus value, and hence exploiting the labour of others. Toynbee presumably had in mind the idle rich or leisure class, as well perhaps as those whom the latter patronize, e.g. artists, though the latter, if paid for their work, are surely engaging in a form of economic activity, as indeed are the patrons who subsidize them. (Indeed, even the idle rich or leisure class engage in economic activity, if only as consumers.) However, this criterion, at least as described by Baker, is equally capable of applying to the opposite end of the social spectrum – i.e. the welfare-dependent underclass. Did Toynbee really intend to suggest that the existence of the long-term unemployed is a distinctive marker of civilization? If so, is Baker really agreeing with him?

[52] The full list of criteria for civilization provided by Baker is as follows:

  1. “In the ordinary circumstances of life in public places they cover the external genitalia and greater part of the trunk with clothes” (p507);
  2. “They keep the body clean and take care to dispose of its waste elements” (p507);
  3. “They do not practice severe mutilation or deformation of the body” (p507);
  4. “They have knowledge of building in brick or stone, if the necessary materials are available in their territory” (p507);
  5. “Many of them live in towns or cities, which are linked by roads” (p507);
  6. “They cultivate food plants” (p507);
  7. “They domesticate animals and use some of the larger ones for transport… if suitable species are available” (p507);
  8. “They have knowledge of the use of metals, if these are available” (p507);
  9. “They use wheels” (p507);
  10. “They exchange property by the use of money” (p507);
  11. “They order their society by a system of laws, which are enforced in such a way that they ordinarily go about their various concerns in times of peace without danger of attack or arbitrary arrest” (p507);
  12. “They permit accused people to defend themselves and call witnesses” (p507);
  13. “They do not use torture to extract information or punishment” (p507);
  14. “They do not practice cannibalism” (p507);
  15. “The religious systems include ethical elements and are not purely or grossly superstitious” (p507);
  16. “They use a script… to communicate ideas” (p507);
  17. “There is some facility in the abstract use of numbers, without consideration of actual objects” (p507);
  18. “A calendar is in use” (p508);
  19. “[There are] arrangements for the instruction of the young in intellectual matters” (p508);
  20. “There is some appreciation of the fine arts” (p508);
  21. “Knowledge and understanding are valued as ends in themselves” (p508).

[53] Actually, some of the criteria include both technological and moral elements. For example, the second requirement, namely that the culture in question “keep the body clean and take care to dispose of its waste elements”, at first seems a purely moral requirement. However, the disposal of sewage is not only essential for the maintenance of healthy populations living at high levels of population density, but also often involves impressive feats of engineering (p507).
Similarly, the requirement that some people live in “towns or cities” seems rather arbitrary. However, to sustain populations at the high population density required in towns and cities usually requires substantial technological, not to mention social and economic, development. Likewise, the building and maintenance of roads linking these settlements, also mentioned by Baker as part of the same criterion, is a technological achievement, often requiring, like the building of facilities for sewage disposal, substantial coordination of labour.

[54] Indeed, even the former Bishop of Edinburgh apparently agrees (see his book, Godless Morality: Keeping Religion out of Ethics). The classic thought-experiment used by moral philosophers to demonstrate that morality does not derive from God’s commandments is to ask devout believers whether, if, instead of commanding ‘Thou shalt not kill’, God had instead commanded ‘Thou shalt kill’, they would then consider killing a moral obligation. Most people, including devout believers, apparently concede that they would not. In fact, however, the hypothetical thought-experiment is not as hypothetical as many moral philosophers, and many Christians, seem to believe, as various passages in the Bible do indeed command mass killing and genocide (e.g. Deuteronomy 20:16-17; 1 Samuel 15:3; Deuteronomy 20:13-14), and indeed rape too (Numbers 31:18).

[55] For example, in IQ and Racial Differences (1973), former president of the American Psychological Association and confirmed racialist Henry E Garrett claims:

“Until the arrival of Europeans there was no literate civilization in the continent’s black belt. The Negro had no written language, no numerals, no calendar, no system of measurement. He never developed a plow or wheel. He never domesticated any animal. With the rarest exceptions, he built nothing more elaborate than mud huts and thatched stockades” (IQ and Racial Differences: p2).

[56] These explorers included David Livingstone, the famous missionary, and Francis Galton, the infamous eugenicist, celebrated statistician and all-round Victorian polymath, in addition to Henry Francis Fynn, Paul Du Chaillu, John Hanning Speke, Samuel Baker (the author John R Baker’s own grand-uncle) and Georg August Schweinfurth (p343).

[57] This, of course, depends on precisely how we define the words ‘machine’ and ‘mechanical’. Thus, many authorities, especially military historians, class the simple bow as the first true ‘machine’. However, the only indigenous peoples known to have lacked even the bow and arrow at the time of their first contact with Europeans were the Australian Aboriginals of Australia and Tasmania.

[58] With regard to the ruins of Great Zimbabwe, Baker emphasizes that “the buildings in question are in no sense houses; the great majority of them are simply walls” (p402). Nor, according to Baker, do they appear to have been part of a two-storey building, though he concedes that some of the structures may originally have been roofed, and other authors suggest huts were sometimes built atop them (p402).
Unlike some other racialist authors, who have attributed their construction to the possibly part-Jewish Lemba people, Baker attributes their construction and design to indigenous Africans (p405). However, he suggests that their anomalous nature reflects their having been constructed in (crude) imitation of buildings erected outside the “secluded area” of Africa by non-Negro peoples with whom the builders were in a trading relationship (p407-8).
This would explain why the structures, though impressive by the standards of other constructions within the “secluded area” of Africa from the same time-period, where buildings of brick or stone were rare and tended to be on a much smaller scale (so impressive, indeed, that, in the years since Baker’s book was published, they have even, bizarrely, had an entire surrounding country named after them), are, by European or Middle Eastern standards of the same time-period, quite shoddy. Baker also emphasizes:

“The splendour and ostentation were made possible by what was poured into the country from foreign lands. One must acknowledge the administrative capacity of the rulers, but may question the utility of the ends to which much of it was put” (p409).

With regard to the technological achievements of black Africans more generally, Baker also acknowledges the adoption of iron smelting throughout most parts of Africa where the ore was available by the tenth century (p352; see also p373). However, while he attributes its origin to outside influence, recent research apparently suggests a much earlier, and indigenous, origin in some parts of sub-Saharan Africa. He also credits indigenous black Africans with great skill in forging iron into weapons and other tools (p353).

[59] Several plants seem to have been first domesticated in the Sahel region and the Horn of Africa, both of which are part of sub-Saharan Africa. However, these areas lie outside of what Baker calls the “secluded area”, as I understand it. Also, populations from the Horn of Africa are, according to Baker, predominantly Caucasoid (p225).

[60] The sole domestic animal that was perhaps first domesticated by black Africans is the guineafowl. Guineafowl are found wild throughout sub-Saharan Africa, but not elsewhere. It has therefore been argued, plausibly enough, that the bird was first domesticated in sub-Saharan Africa. However, Baker reports that the nineteenth-century explorers whose work he relies on “nowhere mention its being kept as a domestic animal by Negrids” (p375). Instead, he proposes that it was probably first domesticated in Ethiopia, which lies outside the “secluded area” as defined by Baker, and whose population is, according to Baker, predominantly Caucasoid (p225). However, he admits that there is no “early record of tame guinea-fowl in Ethiopia” (p375).

[61] The relative absence of large wild mammals outside of sub-Saharan Africa may partly be because such mammals have been driven to extinction, or had their numbers depleted, in recent times (e.g. wolves have been driven to extinction in Britain and Ireland, bison to the verge of extinction in North America). However, it is likely that Africa had a comparatively large number of large wild mammalian species even in ancient times.
This is because outside of Africa (notably in the Americas), many wild mammals were wiped out by the sudden arrival of humans with their formidable hunting skills to whom indigenous fauna were wholly unadapted. However, Africa is where humans first evolved. Therefore, prey species will have gradually evolved fear and avoidance of humans at the same time as humans themselves first evolved to become formidable hunters.
Thus, Africa, unlike other continents, never experienced a sudden influx of human hunters to whom its prey species were wholly unadapted, and it therefore retains many of its large wild game animals into modern times.

[62] Of course, rather conveniently for Diamond’s theory, the wild ancestors of many modern domesticated animals, including horses and aurochs, are now extinct, so we have no way of directly assessing their temperament. However, we have every reason to believe that aurochs, at least, posed a far more formidable obstacle to domestication than does the zebra.

[63] Actually, a currently popular theory of the domestication of wolves/dogs holds that humans did not so much domesticate wolves/dogs as wolves/dogs domesticated themselves.

[64] Aurochs, and contemporary domestic cattle, also evince another trait that, according to Diamond, precludes domestication – namely, it is not usually possible to keep two adult males of this species in the same field or enclosure. Yet, according to Diamond, the “social antelope species for which Africa is famous” could not be domesticated because:

“The males of [African antelope] herds space themselves into territories and fight fiercely with one another when breeding. Hence, those antelope cannot be maintained in crowded enclosures in captivity” (Guns, Germs and Steel: p174).

Evidently, the ancient Eurasians who successfully domesticated the aurochs never got around to reading Diamond’s critically acclaimed bestseller. If they had, they could have learnt in advance to abandon the project as hopeless and hence saved themselves the time and effort.

[65] With regard to the racial affinities of the ancient Egyptians, a source of some controversy in recent years, Baker concludes that, contrary to the since-popularized Afrocentrist Black Athena hypothesis, the ancient Egyptians were predominantly, but not wholly, Caucasoid, and that “the Negrid contribution to Egyptian stock was a small one” (p518). Indeed, there is presumably little doubt on this question, since, according to Baker, there is an abundance of well-preserved skulls from Egypt, not least due to the practice of mummifying corpses and thus:

“More study has been devoted to the craniology of ancient Egypt than to that of any other country in the world” (p517).

From such data, Baker reports:

“Morant showed that all the sets of ancient Egyptian skulls that he analysed statistically were distinguishable by each of six criteria from Negrid skulls” (p518).

For what it’s worth, this conclusion is also corroborated by their self-depiction in artwork:

“In their monuments the dynastic Egyptians represented themselves as having a long face, pointed chin with scanty beard, a straight or somewhat aquiline nose, black irises, and a reddish-brown complexion” (p518).

Similarly, in Race: the Reality of Human Differences (reviewed here, here and here), Sarich and Miele, claiming that Egyptian monuments are “not mere ‘portraits’ but an attempt at ‘classification’”, report that the Egyptians painted themselves as “red”, Asiatics or Semites as “yellow”, “Southerners or Negroes” as “black”, and “Libyans, Westerners or Northerners” as “white, with blue eyes and fair beards” (Race: the Reality of Human Differences: p33).
Thus, if not actually black, neither were the ancient Egyptians exactly white, as implausibly claimed by contemporary Nordicist Arthur Kemp in his books, Children of Ra: Artistic, Historical, and Genetic Evidence for Ancient White Egypt and March of the Titans: The Complete History of the White Race.
In the latter work, Kemp contends that the ancient Egyptians were originally white, being part-Mediterranean (the Mediterranean race itself being now largely extinct, in Kemp’s eccentric view), but governed, he implausibly claims, by a Nordic elite. Over time, however, he contends, they interbred with imported black African slaves and Semitic populations from the Middle East, and, as the population was gradually transformed, Egyptian civilization degenerated.
This is, of course, a version of de Gobineau’s infamous theory that great empires inevitably decline because, through their imperial conquests, they subjugate, and hence ultimately interbreed with, the inferior peoples whom they have conquered (as well as with inward migrants attracted by higher living standards), which interbreeding supposedly dilutes the very racial qualities that permitted their original imperial glories.
Interestingly, consistent with Kemp’s theory, there is indeed some evidence of an increase in the proportion of sub-Saharan African ancestry in Egypt since ancient times (Schuenemann et al 2017).
However, this same study demonstrating an increase in the proportion of sub-Saharan African ancestry in Egypt also showed that, contrary to Kemp’s theory, Egyptian populations always had close affinities to Middle Eastern populations (including Semites), and, in fact, owing to the increase in sub-Saharan African ancestry, and despite the Muslim conquest, actually had closer affinities to Near Eastern populations in ancient times than they do today (Schuenemann et al 2017).
Importantly, this study was based on DNA extracted from mummies, and, since mummification was a costly procedure that was usually available only to the wealthy, it therefore indicates that even the Egyptian elite were far from Nordic even in ancient times, as implausibly claimed by Kemp.
To his credit, Kemp does indeed amass some remarkable photographic evidence of Egyptian tomb paintings and monuments depicting figures that, according to Kemp, are intended to represent Egyptians themselves, with blue eyes and light hair and complexions.
Admitting that Egyptian men were often depicted with reddish skin, he dismisses this as an artistic convention:

“It was a common artistic style in many ancient Mediterranean cultures to portray men with red skins and women with white skins. This was done, presumably to reflect the fact that the men would have been outside working in the fields” (Children of Ra: p33).

Actually, according to anthropologist Peter Frost, this artistic convention reflects real and innate differences, as well as differing sexually selected ideals of male and female beauty (see Dark Men, Fair Women).
Most interestingly, Kemp also includes photographs of some Egyptian mummies, including Ramses II, apparently with light-coloured hair. 
At first, I suspected this might reflect loss of pigmentation owing to the process of decay occurring after death, or perhaps to some chemical process involved in mummification.
Robert Brier, an expert on mummification, confirms that Ramses’s “strikingly blond” hair was indeed a consequence of its having been “dyed as a final step in the mummification process so that he would be young forever” (The Encyclopedia of Mummies: p153). However, he also reports in the next sentence that:

“Microscopic inspection of the roots of the hair revealed that Ramses was originally a redhead” (The Encyclopedia of Mummies: p153).

Brier also confirms, again as claimed by Kemp, that one especially ancient predynastic mummy, displayed in the British Museum, was indeed nicknamed ‘Ginger’ on account of its hair colour (The Encyclopedia of Mummies: p64). However, whether this was the natural hair colour of the person when alive is not clear.
At any rate, even if both Ginger and Ramses the Great were indeed natural redheads, in this respect they appear to have been very much the exception rather than the rule. Thus, Baker himself reports that:

“It would appear that their head-hair was curly, wavy, or almost straight, and very dark brown or black” (p518).

This conclusion is again based on the evidence of their mummies, and, since mummification was a costly procedure largely restricted to the wealthy, it again contradicts Kemp’s notion of a ‘Nordic elite’ ruling ancient Egypt. On this and other evidence, Baker therefore concludes:

“There is general agreement… that the Europid element in the Egyptians from predynastic times onwards has been primarily Mediterranid, though it is allowed that Orientalid immigrants from Arabia made a contribution to the stock” (p518).

In short, the ancient Egyptians, including Pharaohs and other elites, though certainly not black, were not exactly white either, and certainly far from Nordic. Despite the increase in sub-Saharan African ancestry, and the probable further influx of Middle Eastern DNA owing to the Muslim conquest, they probably resembled modern Egyptians, especially the indigenous, Christian Copts.

[66] The same is true of the earlier runic alphabets of the Germanic peoples, the Paleohispanic scripts of the Iberian peninsula, and presumably also of the undeciphered Linear A script that was in use at the outer edge of the European continent during the Bronze Age.

[67] Writing appears to have been developed first in Mesopotamia, then shortly afterwards in Egypt (though some Egyptologists claim priority on behalf of Egypt). However, the relative geographic proximity of these two civilizations, their degree of contact with one another, and the coincidence in time make it likely that, although the two writing systems are entirely different from one another, the idea of writing was nevertheless conceived in imitation of Sumerian cuneiform. A written script then seems to have been independently developed in China. Writing was also developed, almost certainly entirely independently, in Mesoamerica. Other possible candidates for the independent development of writing include the Indus Valley civilization and Easter Island, though, since neither script has been deciphered, it is not clear that they represent true writing systems, and the Easter Island script has also yet to be reliably dated.

[68] Actually, it is now suggested that both the Mayans and Indians may have been beaten to this innovation by the Babylonians, although, unlike the later Indians and Muslims, neither the Mayans nor the Babylonians went on to take full advantage of it by developing mathematics in the way it made possible. For this, it is Indian civilization that deserves credit. The Maya and the Babylonians, of course, each invented the concept entirely independently of the other, but the Indians, the Islamic civilization and other Eurasian civilizations probably inherited the concept ultimately from Babylonia.

[69] Interestingly, this excuse is not available in Africa. There, large mammals survived, probably because, since Africa was where anatomically modern humans first evolved, prey species evolved in concert with humans, gradually coming to fear and avoid them, at the same time as humans themselves gradually evolved into formidable predators. In contrast, the native species of the Americas, owing to the late and, in evolutionary terms, sudden peopling of the continent, would have been totally unprepared to protect themselves from human hunters. This may be why, to this day, Africa has more large animals than any other continent.

[70] Baker also uses the complexity of a people’s language in order to assess their intelligence. Today, there seems to be an implicit assumption among many linguists that all languages are equal in their complexity. Thus, American linguists rightly emphasize the subtlety and complexity of, for example, African-American vernacular, which is certainly not merely an impoverished or corrupted version of standard English, but rather has grammatical rules all of its own, which often convey information that is lost on white Americans not conversant in this dialect.
However, there is no a priori reason to assume that all languages are equal in their capacity to express complex and abstract ideas. The size of vocabularies, for example, differs in different languages, as does the number of different tenses that are recognised. For example, the Walpiri language of some Australian Aboriginals is said to have only a few number terms, namely words for just ‘one’, ‘two’ and ‘many’, while the Pirahã language of indigenous South Americans is said to get by with no number terms at all. Thus, when Baker contends that certain languages, notably the Arunta language of indigenous Australians, as studied by Alf Sommerfelt, and also the Akan language of Africa, are inherently impoverished in their capacity to express abstract thought, he may well be right.

________________________

References

Ali et al (2020) Genome-wide analyses disclose the distinctive HLA architecture and the pharmacogenetic landscape of the Somali population. Science Reports 10:5652.
Andrzejewski, Hall & Salib (2009) Anti-Semitism and Identification of Jewish Group Membership from Photographs. Journal of Nonverbal Behavior 33(1): 47-58.
Beals et al (1984) Brain size, cranial morphology, climate and time machines. Current Anthropology 25(3): 301–330
Bhatia et al (2014) Genome-wide Scan of 29,141 African Americans Finds No Evidence of Directional Selection since Admixture. American Journal of Human Genetics 95(4): 437–444.
Braveman et al (1995) Racial/ethnic differences in the likelihood of cesarean delivery, California. American Journal of Public Health 85(5): 625–630.
Carey (2001) Did the Irish come from Spain? History Ireland 9(3).
Chavez (2002) Reinventing the Wheel: The Economic Benefits of Wheeled Transportation in Early Colonial British West Africa. Africa’s Development in Historical Perspective. New York: Cambridge University Press.
Diamond (1994) Race without Color. Discover Magazine, November 1st.
Edmonds et al (2013) Racial and ethnic differences in primary, unscheduled cesarean deliveries among low-risk primiparous women at an academic medical center: a retrospective cohort study. BMC Pregnancy Childbirth 13, 168.
Elhaik (2013). The missing link of Jewish European ancestry: contrasting the Rhineland and the Khazarian hypotheses. Genome Biology and Evolution 5 (1): 61–74.
Frost (2020) The costs of outbreeding: what do we know? Evoandproud.blogspot.com, January 14th.
Getahun et al (2009) Racial and ethnic disparities in the trends in primary cesarean delivery based on indications. American Journal of Obstetrics and Gynecology 201(4):422.e1-7.
Gordon (1998) The rise of the Bushman penis: Germans, genitalia and genocide, African Studies 57(1):27-54.
Helgason et al (2008) An association between the kinship and fertility of human couples. Science 319(5864):813-6.
Helgadottir et al (2006) A variant of the gene encoding leukotriene A4 hydrolase confers ethnicity-specific risk of myocardial infarction. Nature Genetics 38(1):68-74.
Hodgson et al (2014) Early Back-to-Africa Migration into the Horn of Africa. PLoS Genetics 10(6): e1004393.
Jacobs & Fishberg (1906) ‘Nose’, entry in The Jewish Encyclopedia.
Jacobs (1886) On the Racial Characteristics of Modern Jews. Journal of the Anthropological Institute 15: 23-62.
Kay, K (2002) Morocco’s miracle mule. BBC News, 2 October.
Khan (2011a) The genetic affinities of Ethiopians. Discover Magazine, January 10.
Khan (2011b) A genomic sketch of the Horn of Africa. Discover Magazine, June 10.
Khan (2011c) Marry Far and Breed Tall Sons. Discover Magazine, July 7th.
Koziel et al (2011) Isolation by distance between spouses and its effect on children’s growth in height 146(1):14-9.
Labouriau & Amorim (2008) Comment on ‘An association between the kinship and fertility of human couples’. Science 322(5908): 1634.
Lasker et al (2019) Global ancestry and cognitive ability. Psych 1(1): 431-459.
Law (1980) Wheeled transport in pre-colonial West Africa. Africa 50(3): 249-262.
Lewis (2010) Why are mixed-race people perceived as more attractive? Perception 39(1):136-8.
Lewis et al (2015) Lumbar curvature: a previously undiscovered standard of attractiveness. Evolution and Human Behavior 36(5): 345-350.
Lewontin, (1972) The Apportionment of Human Diversity. In: Dobzhansky et al (eds) Evolutionary Biology. Springer, New York, NY.
Liebers et al (2004) The herring gull complex is not a ring species. Proceedings of the Royal Society B: Biological Sciences 271(1542): 893–901.
Loehlin et al (1973) Blood group genes and negro-white ability differences Behavior Genetics 3(3):263-270.
Martiniano et al (2016) Genomic signals of migration and continuity in Britain before the Anglo-Saxons. Nature Communications 7: 10326.
Nagoshi & Johnson (1986) The ubiquity of g. Personality and Individual Differences 7(2): 201-207.
Nebel et al (2001). The Y chromosome pool of Jews as part of the genetic landscape of the Middle East. American Journal of Human Genetics. 69 (5): 1095–112.
Nisbett (2005). Heredity, environment, and race differences in IQ: A commentary on Rushton and Jensen (2005). Psychology, Public Policy, and Law 11:302-310.
Norton et al (2007) Genetic evidence for the convergent evolution of light skin in Europeans and East Asians. Molecular Biology & Evolution 24(3): 710-722.
Nystrom et al (2008) Perinatal outcomes among Asian–white interracial couples. American Journal of Obstetrics and Gynecology 199(4), p382.e1-382.e6.
Olalde et al (2018) The Beaker phenomenon and the genomic transformation of northwest Europe. Nature 555: 190–196
Oppenheimer (2006) Myths of British Ancestry. Prospect Magazine, October 21.
Provine (1973) Geneticists and the biology of race crossing. Science 182(4114): 790-796.
Relethford (2009) Race and global patterns of phenotypic variation. American Journal of Physical Anthropology 139(1):16-22.
Rong et al (1985) Fertile mule in China and her unusual foal. Journal of the Royal Society of Medicine. 78 (10): 821–25.
Rushton & Jensen (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law, 11:235-294.
Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population Human Genetics 39(1):69-86.
Scarr & Weinberg (1976) IQ test performance of black children adopted by White families. American Psychologist 31: 726-739.
Schiffels et al (2016) Iron Age and Anglo-Saxon genomes from East England reveal British migration history. Nature Communications 7: 10408.
Schuenemann et al (2017) Ancient Egyptian mummy genomes suggest an increase of Sub-Saharan African ancestry in post-Roman periods. Nature Communications 8:15694.
Singer (1958) The Boskop ‘race’ problem. Man 58: 173-178.
Stanford University Medical Center (2008) Asian-white couples face distinct pregnancy risks. EurekAlert.org, 1 October.
Tishkoff et al (2007) Convergent adaptation of human lactase persistence in Africa and Europe. Nature Genetics 39(1): 31-40.
Valdes (2020) Examining Cesarean Delivery Rates by Race: a Population-Based Analysis Using the Robson Ten-Group Classification System. Journal of Racial and Ethnic Health Disparities.
Weinberg et al (1992). The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence, 16, 117-135.
Whitney (1999) The Biological Reality of Race. American Renaissance, October 1999.

Peter Singer’s ‘A Darwinian Left’

Peter Singer, A Darwinian Left: Politics, Evolution and Cooperation (London: Weidenfeld & Nicolson, 1999)

Social Darwinism is dead. 

The idea that charity, welfare and medical treatment ought to be withheld from the poor, the destitute and the seriously ill, so that they perish in accordance with the process of natural selection and hence facilitate further evolutionary progress, survives only as a straw man sometimes attributed to conservatives by leftists in order to discredit them, and as a form of guilt by association sometimes invoked by creationists in order to discredit the theory of evolution.[1]

However, despite the attachment of many American conservatives to creationism, there remains a perception that evolutionary psychology is somehow right-wing.

Thus, if humans are fundamentally selfish, as Richard Dawkins is taken, not entirely accurately, to have argued, then this surely confirms the underlying assumptions of classical economics. 

Of course, as Dawkins also emphasizes, we have evolved through kin selection to be altruistic towards our close biological relatives. However, this arguably only reinforces conservatives’ faith in the family, and their concerns regarding the effects of family breakdown and substitute parents.

Finally, research on sex differences surely suggests that at least some traditional gender roles – e.g. women’s role in caring for young children, and men’s role in fighting wars – do indeed have a biological basis, and also that patriarchy and the gender pay gap may be an inevitable result of innate psychological differences between the sexes.

Political scientist Larry Arnhart thus champions what he calls a new ‘Darwinian Conservatism’, which harnesses the findings of evolutionary psychology in support of family values and the free market. 

Against this, however, moral philosopher and famed animal liberation activist Peter Singer, in ‘A Darwinian Left’, seeks to reclaim Darwin, and evolutionary psychology, for the Left. His attempt is not entirely successful. 

The Naturalistic Fallacy 

At least since David Hume, it has been an article of faith among most philosophers that one cannot derive values from facts. To do otherwise is to commit what some philosophers refer to as the naturalistic fallacy.

Edward O Wilson, in Sociobiology: The New Synthesis, was widely accused of committing the naturalistic fallacy by attempting to derive moral values from facts. However, those evolutionary psychologists who followed in his stead have generally taken a very different line.

Indeed, recognition that the naturalistic fallacy is indeed a fallacy has proven very useful to evolutionary psychologists, since it has enabled them to investigate the possible evolutionary functions of such morally questionable (or indeed downright morally reprehensible) behaviours as infidelity, rape, warfare and child abuse, while at the same time denying that they are somehow thereby providing a justification for the behaviours in question.[2]

Singer, like most evolutionary psychologists, also reiterates the sacrosanct inviolability of the fact-value dichotomy.

Thus, in attempting to construct his ‘Darwinian Left’, Singer does not attempt to use Darwinism in order to provide a justification or ultimate rationale for leftist egalitarianism. Rather, he simply takes it for granted that equality is a good thing and worth striving for, and indeed implicitly assumes that his readers will agree. 

His aim, then, is not to argue that socialism is demanded by a Darwinian worldview, but rather simply that it is compatible with such a worldview and not contradicted by it. 

Thus, he takes leftist ideals as his starting-point, and attempts to argue only that accepting the Darwinian worldview should not cause one to abandon these ideals as either undesirable or unachievable. 

But if we accept that the naturalistic fallacy is indeed a fallacy, then this only raises a further question: if it is indeed true that moral values cannot be derived from scientific facts, whence can moral values be derived?

Can they only be derived from other moral values? If so, how are our ultimate moral values, from which all other moral values are derived, themselves derived? 

Singer does not address this. However, precisely by failing to address it, he seems to implicitly assume that our ultimate moral values must simply be taken on faith. 

However, Singer also emphasizes that rejecting the naturalistic fallacy does not mean that the facts of human nature are irrelevant to politics. 

On the contrary, while Darwinism may not prescribe any particular political goals as desirable, it may nevertheless help us determine how to achieve those political goals that we have already decided upon. Thus, Singer writes: 

“An understanding of human nature in the light of evolutionary theory can help us to identify the means by which we may achieve some of our social and political goals… as well as assessing the possible costs and benefits of doing so” (p15).

Thus, in a memorable metaphor, Singer observes: 

“Wood carvers presented with a piece of timber and a request to make wooden bowls from it do not simply begin carving according to a design drawn up before they have seen the wood. Instead they will examine the material with which they are to work and modify their design in order to suit its grain… Those seeking to reshape human society must understand the tendencies inherent within human beings, and modify their abstract ideals in order to suit them” (p40).

Abandoning Utopia? 

In addition to suggesting how our ultimate political objectives might best be achieved, an evolutionary perspective also suggests that some political goals might simply be unattainable, at least in the absence of a wholesale eugenic reengineering of human nature itself. 

In watering down the utopian aspirations of previous generations of leftists, Singer seems to implicitly concede as much. 

Contrary to the crudest misunderstanding of selfish gene theory, humans are not entirely selfish. However, we have evolved to put our own interests, and those of our kin, above those of other humans. 

For this reason, communism is unattainable because:

  1. People strive to promote themselves and their kin above others; 
  2. Only a coercive state apparatus can prevent them from so doing; 
  3. The individuals in control of this coercive apparatus themselves seek to promote their own interests and those of their kin, and corruptly use this apparatus to do so. 

Thus, Singer laments: 

“What egalitarian revolution has not been betrayed by its leaders?” (p39).

Or, alternatively, as HL Mencken put it:

“[The] one undoubted effect [of political revolutions] is simply to throw out one gang of thieves and put in another.” 

In addition, human selfishness suggests that, if complete egalitarianism were ever successfully achieved and enforced, it would likely be economically inefficient – because it would remove the incentive of self-advancement that lies behind the production of goods and services, not to mention of works of art and scientific advances.

Thus, as Adam Smith famously observed: 

“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.”

And, again, the only other means of ensuring goods and services are produced besides economic self-interest is state coercion, which, given human nature, will always be exercised both corruptly and inefficiently. 

What’s Left? 

Singer’s pamphlet has been the subject of much controversy, with most of the criticism coming, not from conservatives, whom one might imagine to be Singer’s natural adversaries, but rather from other self-described leftists. 

These leftist critics have included both writers opposed to evolutionary psychology (e.g. David Stack in The First Darwinian Left) and writers claiming to be broadly receptive to the new paradigm but who are clearly uncomfortable with some of its implications (e.g. Marek Kohn in As We Know It: Coming to Terms with an Evolved Mind).

In apparently rejecting the utopian transformation of society envisioned by Marx and other radical socialists, Singer has been accused by other leftists of conceding rather too much to the critics of leftism. In so doing, they claim, Singer has in effect abandoned leftism in all but name and become an apologist for, and sell-out to, capitalism.

Whether Singer can indeed be said to have abandoned the Left depends, of course, on precisely how we define ‘the Left’, a rather more problematic matter than it is usually regarded as being.[3]

For his part, Singer certainly defines the Left in unusually broad terms.

For Singer, leftism need not necessarily entail taking the means of production into common ownership, nor even the redistribution of wealth. Rather, at its core, being a leftist is simply about being: 

“On the side of the weak, not the powerful; of the oppressed, not the oppressor; of the ridden, not the rider” (p8).

However, this definition is obviously problematic. After all, few conservatives would admit to being on the side of the oppressor. 

On the contrary, conservatives and libertarians usually reject the dichotomous subdivision of society into ‘oppressed’ and ‘oppressor’ groups. They argue that the real world is more complex than this simplistic division of the world into black and white, good and evil, suggests.

Moreover, they argue that mutually beneficial exchange and cooperation, rather than exploitation, is the essence of capitalism. 

They also usually claim that their policies benefit society as a whole, including both the poor and rich, rather than favouring one class over another.[4]

Indeed, conservatives claim that socialist reforms often actually inadvertently hurt precisely those whom they attempt to help. Thus, for example, welfare benefits are said to encourage welfare dependency, while introducing, or raising the level of, a minimum wage is said to lead to increases in unemployment. 

Singer declares that a Darwinian left would “promote structures that foster cooperation rather than competition” (p61).

Yet many conservatives would share Singer’s aspiration to create a more altruistic culture. 

Indeed, this aspiration seems more compatible with the libertarian notion of voluntary charitable donations replacing taxation than with the coercively-extracted taxes invariably favoured by the Left. After all, being forced to pay taxes is an example of coercion rather than true altruism. 

Nepotism and Equality of Opportunity 

Yet selfish gene theory suggests humans are not entirely self-interested. Rather, kin selection makes us care also about our biological relatives.

But this is no boon for egalitarians. 

Rather, the fact that our selfishness is tempered by a healthy dose of nepotism likely makes equality of opportunity as unattainable as equality of outcome – because individuals will inevitably seek to aid the social, educational and economic advancement of their kin, and those individuals better placed to do so will enjoy greater success in so doing. 

For example, parents with greater resources will be able to send their offspring to exclusive fee-paying schools or obtain private tuition for them; parents with better connections may be able to help their offspring obtain better jobs; while parents with greater intellectual ability may be better able to help their offspring with their homework. 

However, since many conservatives and libertarians are as committed to equality of opportunity as socialists are to equality of outcome, this conclusion may be as unwelcome on the right as on the left. 

Indeed, the theory of kin selection has even been invoked to suggest that ethnocentrism is innate and ethnic conflict is inevitable in multi-ethnic societies, a conclusion unwelcome across the mainstream political spectrum in the West today, where political parties of all persuasions are seemingly equally committed to building multi-ethnic societies. 

Unfortunately, Singer does not address any of these issues. 

Animal Liberation After Darwin 

Singer is most famous for his advocacy on behalf of what he calls animal liberation.

In ‘A Darwinian Left’, he argues that the Darwinian worldview reinforces the case for animal liberation by confirming the evolutionary continuity between humans and other animals.

This suggests that there are unlikely to be fundamental differences in kind as between humans and other animals (e.g. in the capacity to feel pain) sufficient to justify the differences in treatment currently accorded humans and animals. 

It sharply contrasts with the account of creation in the Bible and with the traditional Christian notion of humans as superior to other animals and as occupying an intermediate position between beasts and angels.

Thus, Singer concludes: 

“By knocking out the idea that we are a separate creation from the animals, Darwinian thinking provided the basis for a revolution in our attitudes to non-human animals” (p17).

This makes our consumption of animals as food, our killing of them for sport, our enslavement of them as draft animals, or even pets, and our imprisonment of them in zoos and laboratories all ethically suspect, since these are not things that are generally permitted in respect of humans. 

Yet Singer fails to recognise that human-animal continuity cuts two ways. 

Thus, anti-vivisectionists argue that animal testing is not only immoral, but also ineffective, because drugs and other treatments often have very different effects on humans than they do on the animals used in drug testing. 

Our evolutionary continuity with non-human species makes this argument less plausible. 

Moreover, if humans are subject to the same principles of natural selection as other species, this suggests, not the elevation of animals to the status of humans, but rather the relegation of humans to just another species of animal.

In short, we do not occupy a position midway between beasts and angels; we are beasts through and through, and any attempt to believe otherwise is mere delusion.

This is, of course, the theme of John Gray’s powerful polemic Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here). 

Finally, acceptance of the existence of human nature surely entails recognition of carnivory as a part of that nature. 

Of course, we must remember not to commit the naturalistic or appeal to nature fallacy.  

Thus, just because meat-eating may be natural for humans, in the sense that meat was a part of our ancestors’ diet in the EEA, this does not necessarily mean that it is morally right or even morally justifiable to eat meat.

However, the fact that meat is indeed a natural part of the human diet does suggest that, in health terms, vegetarianism is likely to be nutritionally sub-optimal. 

Thus, the naturalistic fallacy or appeal to nature fallacy is not always entirely fallacious, at least when it comes to human health: what is natural for humans is indeed what we are biologically adapted to and what our bodies are therefore best equipped to deal with.[5]

Moreover, Singer is an opponent of the view that there is a valid moral distinction between acts and omissions, describing one of his core tenets in the Introduction to his book Writings on an Ethical Life as the belief that “we are responsible not only for what we do but also for what we could have prevented” (Writings on an Ethical Life: pxv). We must therefore ask ourselves: if he believes it is wrong for us to eat animals, does he also believe we should take positive measures to prevent lions from eating gazelles?

Economics 

Bemoaning the emphasis of neoliberals on purely economic outcomes, Singer protests:

“From an evolutionary perspective, we cannot identify wealth with self-interest… Properly understood self-interest is broader than economic self-interest” (p42).

Singer is right. The ultimate currency of natural selection is not wealth, but rather reproductive success – and, in evolutionarily novel environments, wealth may not even correlate with reproductive success (Vining 1986). 

Thus, as discussed by Laura Betzig in Despotism and Differential Reproduction, a key difference between Marxism and sociobiology is the relative emphasis on production versus reproduction

Whereas Marxists see societal conflict and exploitation as reflecting competition over control of the means of production, for Darwinians, all societal conflict ultimately concerns control over, not the means of production, but rather what we might term the ‘means of reproduction’ – in other words, women, their wombs and vaginas.

Thus, sociologist-turned-sociobiologist Pierre van den Berghe observed: 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Production is ultimately, in Darwinian terms, merely a means by which to gain the resources necessary to permit successful reproduction. The latter is the ultimate purpose of life.

Thus, for all his ostensible radicalism, Karl Marx, in his emphasis on economics (‘production’) at the expense of sex (‘reproduction’), was just another Victorian sexual prude.

Competition or Cooperation: A False Dichotomy? 

In Chapter Four, entitled “Competition or Cooperation?”, Singer argues that modern western societies, and many modern economists and evolutionary theorists, put too great an emphasis on competition at the expense of cooperation.

Singer accepts that both competition and cooperation are natural and innate facets of human nature, and that all societies involve a balance of both. However, he argues that different societies differ in their relative emphasis on competition or cooperation, and that it is therefore possible to create a society that places a greater emphasis on the latter at the expense of the former. 

Thus, Singer declares that a Darwinian left would: 

“Promote structures that foster cooperation rather than competition” (p61) 

However, Singer is short on practical suggestions as to how a culture of altruism is to be fostered.[6]

Changing the values of a culture is not easy. This is especially so for a liberal democratic (as opposed to a despotic, totalitarian) government, let alone for a solitary Australian moral philosopher – and Singer’s condemnation of “the nightmares of Stalinist Russia” suggests that he would not countenance the sort of totalitarian interference with human freedom to which the Left has so often resorted in the past, with little ultimate success, and continues to resort to in the present (even in the West).

But, more fundamentally, Singer is wrong to see competition and conflict as necessarily incompatible with altruism and cooperation.

On the contrary, perhaps the most remarkable acts of cooperation, altruism and self-sacrifice are those often witnessed in wartime (e.g. kamikaze pilots, suicide bombers and soldiers who throw themselves on grenades). Yet war represents perhaps the most extreme form of competition and conflict known to man.

In short, soldiers risk and sacrifice their lives, not only to save the lives of others, but also to take the lives of others.

Likewise, trade is a form of cooperation, but is as fundamental to capitalism as is competition. Indeed, I suspect most economists would argue that exchange is even more fundamental to capitalism than is competition.

Thus, far from disparaging cooperation, neoliberal economists see voluntary exchange as central to prosperity. 

Ironically, then, popular science writer Matt Ridley, like Singer, also focuses on humans’ innate capacity for cooperation to justify political conclusions in his book, The Origins of Virtue.

But, for Ridley, our capacity for cooperation provides a rationale, not for socialism, but rather for free markets – because humans, as natural traders, produce efficient systems of exchange which government intervention almost always only distorts. 

However, whereas economic trade is motivated by self-interested calculation, Singer seems to envisage a form of reciprocity mediated by emotions such as compassion, gratitude and guilt.
 
However, sociobiologist Robert Trivers argues in his paper that introduced the concept of reciprocal altruism to evolutionary biology that these emotions themselves evolved through the rational calculation of natural selection (Trivers 1971). 

Therefore, while open to manipulation, especially in evolutionarily novel environments, they are necessarily limited in scope. 

Group Differences 

Singer’s envisaged ‘Darwinian Left’ would, he declares, unlike the contemporary left, abandon: 

“[The assumption] that all inequalities are due to discrimination, prejudice, oppression or social conditioning. Some will be, but this cannot be assumed in every case” (p61). 

Instead, Singer admits that at least some disparities in achievement may reflect innate differences between individuals and groups in abilities, temperament and preferences. 

This is probably Singer’s most controversial suggestion, at least for modern leftists, since it contravenes the contemporary dogma of political correctness.

Singer is, however, undoubtedly right.  

Moreover, his recognition that some differences in achievement as between groups reflect, not discrimination, oppression or even the lingering effect of past discrimination or oppression, but rather innate differences between groups in psychological traits, including intelligence, is by no means incompatible with socialism, or leftism, as socialism and leftism were originally conceived. 

Thus, it is worth pointing out that, while contemporary so-called cultural Marxists may decry the notion of innate differences in ability and temperament as between different races, sexes, individuals and social classes as anathema, the same was not true of Marx himself.

On the contrary, in famously advocating ‘from each according to his ability, to each according to his need’, Marx implicitly recognized that people differed in ability – differences which, given the equalization of social conditions envisaged under communism, he presumably conceived of as innate in origin.[7]

As Hans Eysenck observes:

“Stalin banned mental testing in 1935 on the grounds that it was ‘bourgeois’—at the same time as Hitler banned it as ‘Jewish’. But Stalin’s anti-genetic stance, and his support for the environmentalist charlatan Lysenko, did not derive from any Marxist or Leninist doctrine… One need only recall The Communist Manifesto: ‘From each according to his ability, to each according to his need’. This clearly expresses the belief that different people will have different abilities, even in the communist heaven where all cultural, educational and other inequalities have been eradicated” (Intelligence: The Battle for the Mind: p85).

Here Eysenck echoes the earlier observations of the brilliant, pioneering early twentieth-century biologist, and unrepentant Marxist, JBS Haldane, who reputedly wrote in the pages of The Daily Worker in the 1940s that:

“The dogma of human equality is no part of Communism… The formula of Communism ‘from each according to his ability, to each according to his needs’ would be nonsense if abilities are equal.”

Thus, Steven Pinker, in The Blank Slate, points to the theoretical possibility of what he calls a “Hereditarian Left”, arguing for a Rawlsian redistribution of resources to the, if you like, innately ‘cognitively disadvantaged’.[8] 

With regard to group differences, Singer avoids discussing the incendiary topic of race differences in intelligence, a question evidently too contentious for him to touch.

Instead, he illustrates the possibility that not “all inequalities are due to discrimination, prejudice, oppression or social conditioning” with the marginally less incendiary case of sex differences.  

Here, it is sex differences, not in intelligence, but rather in temperament, preferences and personality, that are probably more important, and likely explain occupational segregation and the so-called gender pay gap.

Thus, Singer writes: 

“If achieving high status increases access to women, then we can expect men to have a stronger drive for status than women” (p18).

This alone, he implies, may explain both the universality of male rule and the so-called gender pay gap.

However, Singer neglects to mention another biological factor that is also probably important in explaining the gender pay gap – namely, women’s attachment to infant offspring. This factor, also innate and biological in origin, also likely impedes career advancement among women. 

Thus, it bears emphasizing that never-married women with no children actually earn more, on average, than do unmarried men without children of the same age in both Britain and America.[9]

For a more detailed treatment of the biological factors underlying the gender pay gap, see Biology at Work: Rethinking Sexual Equality by professor of law, Kingsley Browne, which I have reviewed here.[10] See also my review of Warren Farrell’s Why Men Earn More, which can be found here, here and here.

Dysgenic Fertility Patterns? 

It is sometimes claimed by opponents of welfare benefits that the welfare system only encourages the unemployed to have more children so as to receive more benefits, and thereby promotes dysgenic fertility patterns. In response, Singer retorts:

“Even if there were a genetic component to something as nebulous as unemployment, to say that these genes are ‘deleterious’ would involve value judgements that go way beyond what the science alone can tell us” (p15).

Singer is, of course, right that an extra-scientific value judgement is required in order to label certain character traits, and the genes that contribute to them, as deleterious or undesirable. 

Indeed, if single mothers on welfare do indeed raise more surviving children than do those who are not reliant on state benefits, then this indicates that they have higher reproductive success, and hence, in the strict biological sense, greater fitness than their more financially independent, but less fecund, reproductive competitors. 

Therefore, far from being ‘deleterious’ in the biological sense, genes contributing to such behaviour are actually under positive selection, at least under current environmental conditions.  

However, even if such genes are not ‘deleterious’ in the strict biological sense, this does not necessarily mean that they are desirable in the moral sense, or in the sense of contributing to successful civilizations and societal advancement. To suggest otherwise would, of course, involve a version of the very appeal to nature fallacy or naturalistic fallacy that Singer is elsewhere emphatic in rejecting. 

Thus, although regarding certain character traits, and the genes that contribute to them, as undesirable does indeed involve an extra-scientific “value judgement”, this is not to say that the “value judgement” in question is necessarily mistaken or unwarranted. On the contrary, it means only that such a value judgement is, by its nature, a matter of morality, not of science. 

Thus, although science may be silent on the issue, virtually everyone would agree that some traits (e.g. generosity, health, happiness, conscientiousness) are more desirable than others (e.g. selfishness, laziness, depression, illness). Likewise, it is self-evident that the long-term unemployed are a net burden on society, and that a successful society cannot be formed of people unable or unwilling to work. 

As we have seen, Singer also questions whether there can be “a genetic component to something as nebulous as unemployment”. 

However, in the strict biological sense, unemployment probably is indeed partly heritable. So, incidentally, are road traffic accidents and our political opinions – because each reflects personality traits that are themselves heritable (e.g. risk-takers and people with poor physical coordination and slow reactions probably have more traffic accidents; and perhaps more compassionate people are more likely to favour leftist politics).

Thus, while it may be unhelpful and misleading to talk of unemployment as itself heritable, nevertheless traits of the sort that likely contribute to unemployment (e.g. intelligence, conscientiousness, mental and physical illness) are indeed heritable.

Actually, however, the question of heritability, in the strict biological sense, is irrelevant. 

Thus, even if the reason that children from deprived backgrounds have worse life outcomes is entirely mediated by environmental factors (e.g. economic or cultural deprivation, or the bad parenting practices of low-SES parents), the case for restricting the reproductive rights of those people who are statistically prone to raise dysfunctional offspring remains intact. 

After all, children usually get both their genes and their parenting from the same set of parents – and this could be changed only by a massive, costly, and decidedly illiberal, policy of forcibly removing offspring from their parents.[11]

Therefore, so long as an association between parentage and social outcomes is established, the question of whether this association is biologically or environmentally mediated is simply beside the point, and the case for restricting the reproductive rights of certain groups remains intact.  

Of course, it is doubtful that welfare-dependent women do indeed financially benefit from giving birth to additional offspring. 

It is true that they may receive more money in state benefits if they have more dependent offspring to support and provide for. However, this may well be more than offset by the additional cost of supporting and providing for the dependent offspring in question, leaving the mother with less to spend on herself. 

However, even if the additional monies paid to mothers with dependent children are not sufficient as to provide a positive financial incentive to bearing additional children, they at least reduce the financial disincentives otherwise associated with rearing additional offspring.  

Therefore, given that, from an evolutionary perspective, women probably have an innate desire to bear additional offspring, it follows that a rational fitness-maximizer would respond to the changed incentives represented by the welfare system by increasing her reproductive rate.[12]

Towards A New Socialist Eugenics?

If we accept Singer’s contention that an understanding of human nature can help show us how to achieve, but not choose, our ultimate political objectives, then eugenics could be used to help us achieve the goal of producing better people and hence, ultimately, better societies.

Indeed, given that Singer seemingly concedes that human nature is presently incompatible with communist utopia, perhaps then the only way to revive the socialist dream of communism is to eugenically re-engineer human nature itself. 

Thus, it is perhaps no accident that, before World War Two, eugenics was a cause typically associated, not with conservatives, nor even, as today, with fascism and German National Socialism, but rather with the political left; the main opponents of eugenics, on the other hand, were typically Christian conservatives.

Thus, early twentieth century socialist-eugenicists like H.G. Wells, Sidney Webb, Margaret Sanger and George Bernard Shaw may then have tentatively grasped what eludes contemporary leftists, Singer very much included – namely that re-engineering society necessarily requires as a prerequisite re-engineering Man himself.[13]

_________________________

Endnotes

[1] Indeed, the view that the poor and ill ought to be left to perish so as to further the evolutionary process seems to have been a marginal one even in its ostensible late nineteenth century heyday (see Bannister, Social Darwinism: Science and Myth in Anglo-American Social Thought). The idea always seems, therefore, to have been largely, if not wholly, a straw man.

[2] In this, the evolutionary psychologists are surely right. Thus, no one accuses biomedical researchers of somehow ‘justifying disease’ when they investigate how infectious diseases, in an effort to maximize their own reproductive success, spread from host to host. Likewise, nobody suggests that dying of a treatable illness is desirable, even though this may have been the ‘natural’ outcome before such ‘unnatural’ interventions as vaccination and antibiotics were introduced.

[3] The conventional notion that we can usefully conceptualize the political spectrum on a one-dimensional left-right axis is obviously preposterous. For one thing, there is, at the very least, a quite separate liberal-authoritarian dimension. However, even restricting our definition of the left-right axis to purely economic matters, it remains multi-factorial. For example, Hayek, in The Road to Serfdom, classifies fascism as a left-wing ideology, because it involved big government and a planned economy. However, most leftists would reject this classification, since the planned economy in question was designed, not to reduce economic inequalities, but rather, in the case of Nazi Germany at least, to fund and sustain an expanded military force, a war economy, external military conquest and grandiose vanity public works and architectural projects. The term ‘right-wing’ is even more problematic, encompassing everyone from fascists to libertarians to religious fundamentalists. Yet a Christian fundamentalist who wants to outlaw pornography and abortion has little in common either with a libertarian who wants to decriminalize prostitution and child pornography or with a eugenicist who wants to make abortions, for certain classes of person, compulsory. Yet all three are classed together as ‘right-wing’ even though they share no more in common with one another than any of them does with a raving unreconstructed Marxist.

[4] Thus, the British Conservative Party traditionally styled themselves ‘one-nation’ conservatives, who looked to the interests of the nation as a whole, rather than to what they criticized as the divisive ‘sectionalism’ of the trade union and labour movements, which favoured certain economic classes, and workers in certain industries, over others, just as contemporary leftists privilege the interests of certain ethnic, religious and culturally-defined groups (e.g. blacks, Muslims, feminists) over others (i.e. white males).

[5] Of course, some ‘unnatural’ interventions have positive health benefits. Obvious examples are modern medical treatments such as penicillin, chemotherapy and vaccination. However, these are the exceptions. They have been carefully selected and developed by scientists to have this positive effect, have gone through rigorous testing to ensure that their effects are indeed beneficial, and are generally beneficial only to people with certain diagnosed conditions. In contrast, recreational drug use almost invariably has a negative effect on health.
It might also be noted that, although their use by humans may be ‘unnatural’, the role of antibiotics in fighting bacterial infection is not itself ‘unnatural’, since antibiotics such as penicillin themselves evolved as a natural means by which one microorganism, namely mould, a form of fungi, fights another form of microorganism, namely bacteria.

[6] It is certainly possible for more altruistic cultures to exist. For example, the famous (and hugely wasteful) potlatch feasts of some Native American cultures, which involved great acts of both altruism and wanton waste, exemplify an extreme form of competitive altruism, analogous to conspicuous consumption, and may be explicable as a form of status display in accordance with Zahavi’s handicap principle. However, recognizing that such cultures exist does not easily translate into working out how to create or foster such cultures, let alone transform existing cultures in this direction.

[7] Indeed, by modern politically-correct standards, Marx was a rampant racist, not to mention an anti-Semite.

[8] The term Rawlsian is a reference to political theorist John Rawls’ version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the original position, where they would be unaware of, not only their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’.

[9] As Warren Farrell documents in his excellent Why Men Earn More (which I have reviewed here, here and here), in the USA, women who have never married and have no children actually earn more than men who have never married and have no children, and have done so since at least the 1950s (Why Men Earn More: pxxi). More precisely, according to Farrell, never-married men without children earn, on average, only about 85% of what their childless never-married female counterparts earn (Ibid: pxxiii).
The situation is similar in the UK. Thus, economist JR Shackleton reports:

“Women in the middle age groups who remain single earn more than middle-aged single males” (Should We Mind the Gap? p30).

The reasons unmarried, childless women earn more than unmarried childless men are multifarious and include:

  1. Married women can afford to work less because they appropriate a portion of their husband’s income in addition to their own;
  2. Married men and men with children are thus obliged to earn even more so as to financially support, not only themselves, but also their wife, plus any offspring;
  3. Women prefer to marry richer men and hence poorer men are more likely to remain single;
  4. Childcare duties undertaken by women interfere with their earning capacity.

[10] Incidentally, Browne has also published a more succinct summary of the biological factors underlying the pay-gap, first published in the same ‘Darwinism Today’ series as Singer’s ‘A Darwinian Left’, namely Divided Labors: An Evolutionary View of Women at Work. However, much though I admire Browne’s work, this represents a rather superficial popularization of his research on the topic, and I would recommend instead Browne’s longer Biology at Work: Rethinking Sexual Equality (which I have reviewed here) for a more comprehensive treatment of the same, and related, topics. 

[11] A precedent for just such a programme, enacted in the name of socialism, albeit adopted consensually, was the communal child-rearing practised in Israeli Kibbutzim, since largely abandoned. Another suggestion along rather different lines comes from a rather different source, namely Adolf Hitler, who, believing that nature trumped nurture, proposed in Mein Kampf: 

“The State must also teach that it is the manifestation of a really noble nature and that it is a humanitarian act worthy of all admiration if an innocent sufferer from hereditary disease refrains from having a child of his own but bestows his love and affection on some unknown child whose state of health is a guarantee that it will become a robust member of a powerful community” (quoted in: Parfrey 1987: p162). 

[12] Actually, it is not entirely clear that women do have a natural desire to bear offspring. Other species probably do not have any such natural desire. After all, since they almost certainly are not aware of the connection between sex and childbirth, such a desire would serve no adaptive purpose and hence would never evolve. All an organism requires is a desire for sex, combined perhaps with a tendency to care for offspring after they are born. (Indeed, in principle, a female does not even require a desire for sex, only a willingness to submit to the desire of a male for sex.) As Tooby and Cosmides emphasize: 

“Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.” 

There is no requirement for a desire for offspring as such. Nevertheless, anecdotal evidence of so-called broodiness, and the fact that most women do indeed desire children, despite the costs associated with raising children, suggests that, in human females, there is indeed some innate desire for offspring. Curiously, however, the topic of broodiness is not one that has attracted much attention among evolutionists.

[13] However, there is a problem with any such case for a ‘Brave New Socialist Eugenics’. Before the eugenic programme is complete, the individuals controlling eugenic programmes (be they governments or corporations) would still possess a more traditional human nature, and may therefore have less than altruistic motivations themselves. This seems to suggest, then, that, as philosopher John Gray concludes in Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here):  

“[If] human nature [is] scientifically remodelled… it will be done haphazardly, as an upshot of the struggles in the murky world where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

References  

Parfrey (1987) Eugenics: The Orphaned Science. In Parfrey (Ed.) Apocalypse Culture (New York: Amok Press). 

Trivers (1971) The evolution of reciprocal altruism. Quarterly Review of Biology 46(1): 35-57.

Vining (1986) Social versus reproductive success: The central theoretical problem of human sociobiology. Behavioral and Brain Sciences 9(1): 167-187.

The Decline of the Klan and of White (and Protestant) Identity in America

Wyn Craig Wade, The Fiery Cross: The Ku Klux Klan in America (New York: Simon and Schuster, 1987)

Given the infamy of the organization, it is surprising that there are so few books that cover the entire history of the Ku Klux Klan. 

Most seem to deal with only a single period (usually, but not always, either the Reconstruction-era Klan or the Second Klan that reached its apotheosis during the twenties), a single locality, or indeed only a single time and place.

On reflection, however, this is not really surprising. 

For, though we habitually refer to the Ku Klux Klan, or the Klan, or the KKK (emphasis on ‘the’), as if it were a single organization that has been in continuous existence since its first formation in Pulaski, Tennessee during the Reconstruction era, there have in fact been many different groups calling themselves ‘the Ku Klux Klan’, or some slight variant upon this name (e.g. ‘Knights of the Ku Klux Klan’, ‘United Klans of America’), that have emerged and disappeared over the century and a half since the name was first coined in the aftermath of the American Civil War.

Most of these groups had small memberships, recruited and were active in only a single locality and soon disappeared altogether. Yet even those incarnations of the Klan name that had at least some claim to a national, or at least a pan-Southern, membership invariably lacked effective centralized control over local klaverns.

Thus, Wade observes: 

“After the Klan had spread outwards from Tennessee, there wasn’t the slightest chance of central control over it – a problem that would characterize the Klan throughout its long career” (p58). 

It is perhaps for this reason that most historians authoring books about the Klan have focussed on Klan activity in only a single time-frame and/or geographic locality.

Indeed, it is notable that, besides Wyn Wade’s ‘The Fiery Cross’, the only other work of which I am aware that even purports to cover the entirety of the Klan’s history (apart from the recently published White Robes and Burning Crosses, which I have yet to read) is David Chalmers’ Hooded Americanism: The History of the Ku Klux Klan.

Yet even this latter work (‘Hooded Americanism’), though it purports in its blurb to be “The only work that treats Ku Kluxism for the entire period of it’s [sic] existence”, actually devotes only a single, short, cursory chapter to the Reconstruction-era Klan, when the group was first founded, arguably at its strongest, and certainly at its most violent.

Moreover, ‘Hooded Americanism’ is composed of separate chapters recounting the history of the Klan in different states in each time period, such that the book lacks an overall narrative structure and is difficult to read. 

In contrast, for those with an interest in the topic, Wade’s ‘The Fiery Cross’ is both readable and informative, and somehow manages to weave the story of the various Klan groups in different parts of the country into a single overall narrative. 

A College Fraternity Turned Terrorist? 

If, today, the stereotypical Klansman is an illiterate redneck, it might come as some surprise to many to learn that the group’s name actually bears an impressively classical etymology. It derives from the ancient Greek kuklos, meaning ‘circle’. To this was added ‘Klan’, both for alliterative purposes, and in reference to the ostensible Scottish ancestry of the group’s founders.[1]

This classical etymology was no oddity. Rather, it reflected the social standing and educational background of the six young Confederate veterans who founded the group on their return to their hometown at the end of the Civil War, and who, far from being illiterate rednecks, were, Wade reports, “well educated for their day” (p32). 

Thus, he reports, of the six founder members, two would go on to become lawyers, another would become editor of a local newspaper, and yet another a state legislator (p32). 

Neither, seemingly, was the group formed with any terroristic, nor even any discernible political, aspirations in mind at all. Instead, one of these six founder members, the, in retrospect, ironically-named James Crow, claimed their intention was initially: 

“Purely social and for our amusement” (p34). 

Since, as a good white Southerner and Confederate veteran, Crow likely approved of the white supremacist politics with which the Klan later became associated, he had no obvious incentive to downplay a political motive. Certainly, Wade takes him at his word. 

Thus, if the various Klan titles – Grand Goblin, Imperial Wizard etc. – sound more like what one might expect in, say, a college fraternity than in a serious political or terrorist group, then this perhaps reflects the fact that the organization was indeed conceived with just such adolescent tomfoolery in mind. 

Indeed, although it is not mentioned by Wade, it has been suggested that a then-defunct nineteenth-century fraternity, Kuklos Adelphon, may even have provided a partial model for the group, including its name. Thus, Wade writes: 

“It has been said that, if Pulaski had had an Elks Club, the Klan would never have been born” (p33). 

White Sheets and Black Victims 

However, from early on, the group’s practical jokes increasingly focussed on the newly-emancipated, and already much resented, black population of Giles County.

Yet, even here, intentions were initially jocular, if mean-spirited. Thus, the white sheets famously worn by Klansmen were, Wade informs us, originally conceived in imitation of the stereotypical appearance of ghosts, the wearers ostensibly posing as: 

“The ghosts of the Confederate dead, who had risen from their graves to wreak vengeance on [the blacks]” (p35). 

This accorded with the then prevalent stereotype of black people as being highly superstitious. 

However, it is likely that few if any black victims were ever actually taken in by this ruse. Rather, the very real fear that the Klan came to inspire in its predominantly black victims reflected instead the also very real acts of terror and cruelty with which the group became increasingly associated. 

The sheets also functioned, of course, as a crude disguise.  

However, it was only when the Klan name was revived in the early twentieth century, and through the imagination of its reviver, William Joseph Simmons, that this crude disguise was transformed into a mysterious ceremonial regalia, the sale of which was jealously guarded by, and an important source of revenue for, the Klan leadership. 

Indeed, in the Reconstruction-era Klan, the sheets, though a crude disguise, would not even qualify as a uniform, as there was no standardization whatsoever. Instead:  

“Sheets, pillowcases, handkerchiefs, blankets, sacks… paper masks, blackened faces, and undershirts and drawers were all employed” (p60).  

Thus, Wade reports the irony whereby one: 

“Black female victim of the Klan was able to recognise one of her assailants because he wore a dress she herself had sewed for his wife” (p60). 

Chivalry – or Reproductive Competition

Representing perhaps the original white knights, Klansmen claimed to be acting in order to protect the ostensible virtue and honour of white women. 

However, at least in Wade’s telling, the rapes of white women by black males, upon which white Southern propaganda so pruriently dwelt (as prominently featured, for example, in the movie, Birth of a Nation, and the book upon which the movie was based, The Clansman: A Historical Romance of the Ku Klux Klan), were actually very rare. 

Indeed, he even quotes a former Confederate General, and alleged Klansman, seemingly admitting as much when, upon being asked whether such assaults were common, he acknowledged: 

“Oh no sir, but one case of rape by a negro upon a white woman was enough to alarm the whole people of the state” (p20). 

Certainly, the Emmett Till case demonstrates that even quite innocuous acts could indeed invite grossly disproportionate responses in the Southern culture of honour, at least where the perceived malefactors were black. Thus, Wade claims: 

“Sometimes a black smile or the tipping of a hat were sufficient grounds for prosecution for rape. As one southern judge put it, ‘I see a chicken cock drop his wings and take after a hen; my experience and observation assure me that his purpose is sexual intercourse, no other evidence is needed’” (p20). 

Likewise, such infamous cases as the Scottsboro boys and Groveland four illustrate that false allegations were certainly not unknown in the American South. Indeed, false rape allegations, directed against men of all races, remain disturbingly common to this day. 

However, I remain skeptical of Wade’s claim that black-on-white rape was quite as rare as he makes out. 

After all, American blacks have had high rates of violent crime ever since records began, and, as contemporary racists are fond of pointing out, today, black-on-white rape is actually quite common, at least as compared to other victim-offender dyads. 

Thus, in Paved with Good Intentions: The Failure of Race Relations in Contemporary America, published in 1992, Jared Taylor reports: 

“In a 1974 study in Denver, 40 percent of all rapes were of whites by blacks, and not one case of white-on-black rape was found. In general, through the 1970s, black-on-white rape was at least ten times more common than white-on-black rape… In 1988 there were 9,406 cases of black-on-white rape and fewer than ten cases of white-on-black rape. Another researcher concludes that in 1989, blacks were three or four times more likely to commit rape than whites and that black men raped white women thirty times as often as white men raped black women” (Paved with Good Intentions: p93). 

Indeed, the authors of one recent textbook on criminology even claim that: 

“Some researchers have suggested, because of the frequency with which African Americans select white victims (about 55 percent of the time), it [rape] could be considered an interracial crime” (Criminology: A Global Perspective: p544).[2] 

At any rate, Southern chivalry was rather selectively accorded, and certainly did not extend to black women.[3]

Indeed, Wade claims that Klansmen, employing a blatant double-standard and rank hypocrisy, themselves regularly raped black women during their raids: 

“The desire for group intercourse was sometimes sufficient reason for a den to go out on a raid…. Sometimes during a political raid, Klansmen would rape the female members of the household as a matter of course” (p76). 

As someone versed in sociobiological theory who has studied evolutionary psychology, I am tempted to see these double-standards in sociobiological terms, as a form of reproductive competition designed to maximize the reproductive success of the white males involved, and indeed of the white race in general.

Thus, for white men, it was open season on black women, but white women were strictly off-limits to black men: 

“In Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation. White men soon learned that women placed on a pedestal acted like statues in bed, and they came to prefer the female slave whom they found open and uninhibited… The more white males turned to female slaves, the more they exalted their own women, who increasingly became a mere ornament and symbol of the Southern way of life” (p20).

Indeed, this pattern of double-standards, whereby men of a given ethnicity are only too happy to miscegenate with women of another ethnicity but are jealously protective of their own women, is cross-culturally recurrent, and eminently explicable in terms of reproductive competition.

Klan Success? 

The Klan came to stand for the reestablishment of white supremacy and the denial of voting rights to blacks. 

In the short-term, at least, these aims were to be achieved, with the establishment of segregation and effective disenfranchisement of blacks throughout much of the South. Wade, however, denies the Klan any part in this victory: 

“The Ku-Klux Klan… didn’t weaken Radical Reconstruction nearly as much as they nurtured it. So long as an organized secret conspiracy swore oaths and used cloak and dagger methods in the South, Congress was willing to legislate against it… Not until the Klan was beaten and the former confederacy turned to more open methods of preserving the Southern way of life did Reconstruction and its Northern support decline” (p109-110). 

Thus, it was, Wade reports, not the Klan, but rather other groups, today largely forgotten, such as Louisiana’s White League and South Carolina’s Red Shirts, that were responsible for successfully scaring blacks away from the polls and ensuring the return of white supremacy in the South. Moreover, he reports that they were able to do so only because the federal laws enacted to tackle the Klan had ceased to be enforced, precisely because the Klan itself had ceased to represent a serious threat. 

On this telling, then, the First Klan was, politically, a failure. In this respect, it was to set the model for later Klans, which would fight a losing rearguard action against Catholic immigration and racial integration. 

Resurrection 

If the First Klan was a failure, why then was it remembered, celebrated and ultimately revived, while other groups, such as the White League, Red Shirts and Knights of the White Camelia, which employed similar terrorist tactics in pursuit of the same political objectives, are today largely forgotten? 

Wade does not address this, but one suspects the outlandishness of the group’s name and ceremonial titles contributed, as did the fact that the Klan seems to have been the only such group active throughout the entirety of the former Confederacy. 

The reborn Klan, founded in the early twentieth century, was the brainchild of William Joseph Simmons, a self-styled professional ‘fraternalist’, alumnus of countless other fraternal organizations, Methodist preacher, strict prohibitionist and rumoured alcoholic. 

It is to him that credit must go for inventing most of the arcane ceremonial ritualism (aka ‘Klancraft’) and terminology (including the very word ‘Klancraft’) that came to be associated with the Klan in the twentieth century. 

‘Birth of a Nation’ and the Rebirth of the Klan 

Two further factors contributed to the growth and success of the reborn Klan. First was the spectacularly successful 1915 release of the movie, The Birth of a Nation. 

Both deplored for its message yet also grudgingly admired for its technical and artistic achievement, this film occupies a curious place in film history, roughly comparable to that of Leni Riefenstahl’s Nazi propaganda film, Triumph of the Will. (Sergei Eisenstein’s Communist and Stalinist propaganda films curiously, but predictably, receive a free pass.) 

With this movie, pioneering filmmaker DW Griffith is credited with inventing much of the grammar of modern moviemaking. If, today, it seems distinctly unimpressive, if not borderline unwatchable, this is not only because of the obvious technological limitations of the time period, but also precisely because it pioneered many of the moviemaking methods (e.g. cross-cutting) that modern cinema-goers, and television viewers, have long since learnt to take for granted. 

Yet, if its technical and artistic innovations have won the grudging respect of film historians, its message is, of course, wholly anathema to modern western sensibilities. 

Thus, portraying the antebellum American South with the same pair of rose-tinted spectacles as those donned by the author of Gone with the Wind, ‘Birth of a Nation’ went even further, portraying blacks during the Reconstruction period as rampant rapists salivating over the flesh of white women, and Klansmen as heroic white knights who saved white womanhood, and indeed the South itself, from the ravages both of Reconstruction and of Southern blacks. 

Yet, though it achieved unprecedented box-office success, even being credited as the first modern blockbuster, the movie was controversial even for its time. 

It even became the first movie to be screened in the White House, when, as an apparent favour to Thomas Dixon, the author of the novel upon which the movie was based, the film received an advance, pre-release screening for the benefit of the then-President, Woodrow Wilson, a college acquaintance of Dixon – though what the President thought of the film is a matter of some dispute.[4]

Indeed, such was the controversy that the nascent NAACP, itself launched just a few years earlier, even launched a campaign to have the film banned outright (p127-8). 

This, of course, gives the lie to the notion that the political left was, until recent times, wholly in favour of freedom of speech and artistic expression. 

Actually, even then, the Left’s commitment to freedom of expression was, it seems, highly selective, just as it is today. Thus, it was one thing for the left to defend the free speech rights of raving communists during the so-called Second Red Scare, when the threat of Soviet infiltration was very real, quite another to apply the same principle to unreconstructed racists and Klansmen.

The Murders of Mary Phagan and Leo Frank

Another factor in the successful resurrection of the Klan was a pair of murders that galvanized popular opinion in the South, and indeed the nation. 

First was the rape and murder of Mary Phagan, a thirteen-year-old factory girl in Atlanta, Georgia. Second was the lynching of Leo Frank, her boss and ostensible murderer, who was convicted of her rape and murder and sentenced to death, only to have this sentence commuted to life imprisonment, whereupon he was lynched by outraged locals. 

His lynching was carried out by a group styling themselves ‘The Knights of Mary Phagan’, many of whom would go on to become founder members of Simmons’s newly reformed Klan. 

It was actually this group, not the Klan itself, which would establish a famous Klan ritual, namely the ascent of Stone Mountain to burn a cross, a ritual Simmons would repeat to inaugurate his nascent Klan a few months later.[5]

Yet, in the history of alleged miscarriages of justice in the American South, the lynching of Leo Frank stands very much apart. 

For one thing, most victims of such alleged miscarriages of justice were, of course, black. Yet Leo Frank was white. 

Moreover, most of his apologists insist that the real perpetrator was, in fact, a black man. They are therefore in the unusual position of claiming racism caused white Southerners to falsely convict a white man when they should have pinned the blame on a black instead.

It is true, of course, that Frank was also Jewish. However, there was little history of anti-Semitism in the South. Indeed, I suspect there was more prejudice against him as a wealthy Northerner who had moved south for business purposes, and hence as, in Southern eyes, a ‘Yankee carpetbagger’.

Moreover, although his lynching was certainly unjustified, and his conviction possibly unsafe, it is still not altogether clear that Frank was indeed innocent of the murder of which he stood accused.[6]

Wade himself admits that there was some doubt as to his innocence at the time. However, he refers to a deathbed statement by an elderly witness some seventy years later in 1982 as finally definitively proving his innocence: 

“Not until 1982 would Frank’s complete innocence come to light as a result of a witness’s deathbed statement” (p143). 

However, a claim made, not in court, but rather to the press for a headline (albeit also in a signed affidavit under oath), by an elderly, dying man, regarding things he had supposedly witnessed some seventy years earlier when he was himself little more than a child, is obviously open to question.

Thus, the case is problematic on many levels. While it is unclear whether Frank was indeed guilty, it certainly seems that he was wrongly convicted, since his guilt was not proven beyond reasonable doubt.

Yet, despite this, the grant of clemency and reduction of his sentence to life imprisonment rather than execution somehow almost seems more wrong still, since it reflected more the fact that Frank was a wealthy, well-connected Jew, backed by the powerful Northern Jewish community, and a campaign in the influential (and Jewish-owned) New York Times, than it did the weakness of the case against him.

One suspects a poor Southern black, or indeed a poor Southern (or Northern) white, convicted of a similarly gruesome crime on similarly ambiguous evidence, would soon have found himself summarily hanged with little attendant fanfare, and the case would have remained little known outside of Georgia, and little remembered today.

Thus, while Frank’s lynching was undoubtedly even more wrongful than either his conviction or the subsequent commutation of his sentence, nevertheless one can well understand the anger and animosity that the commutation provoked among the local population.

At any rate, it is interesting to note that Frank’s lynching played an important role, not only in the founding of the Second Klan, but also in the genesis of another political pressure group whose influence on American social, cultural and political life has far outstripped that of the Klan and which, unlike the Second Klan, survives to this day – namely, the Anti-Defamation League of B’nai B’rith, or ADL. 

The parallels abound. Just as the Second Klan was a fraternal organization for white protestants, so B’nai B’rith, the organization which birthed the ADL, was a fraternal order for Jews, and Frank himself, surely not coincidentally, was president of the Atlanta chapter of this group. 

The organizational efforts of B’nai B’rith to protect Frank, a local chapter president, from punishment can therefore be viewed as analogous to the way in which the Klan itself sought to protect its own members from successful prosecution through its own corrupt links in law enforcement and government and on juries. 

Moreover, just as the Klan was formed to defend and promote the interests of white Christian Protestants, so the ADL was formed to protect the interests of Jews.

However, the ADL was to prove far more successful in this endeavour than the Klan had ever been, and very much survives, and prospers, to this day.[7]

Klan Enemies 

Jews were not, however, the primary objects of Klan enmity during the twenties – and neither, perhaps surprisingly, were blacks. 

This was, after all, the period that later historians have termed ‘the nadir of American race relations’, when, throughout the South, blacks were largely disenfranchised, and segregation firmly entrenched. 

Yet, from a white racialist perspective, the era is misnamed.[8] Far from a nadir, for white racialists the period represented something like a utopia, lost Eden or Golden Age.[9]

White supremacy was firmly entrenched and not, it seemed, under any serious threat. The so-called civil rights movement had barely begun, and certainly had yet to achieve any major successes.

Of course, then as now, race riots did periodically puncture the apparent peace – at Wilmington in 1898, Springfield in 1908, Tulsa in 1921, Rosewood in 1923, and throughout much of America in 1919. 

However, unlike contemporary American race riots, these typically took the form of whites attacking blacks rather than vice versa, and, even when the latter did occur, white solidarity was such that the whites invariably gave at least as good as they got.[10]

Thus, in early-twentieth century America, unlike during Reconstruction, there was no need for a Klan to suppress ‘uppity’ blacks. On the contrary, blacks were already adequately suppressed.  

Thus, if the Second Klan was to have an enemy worthy of its enmity, and a cause sufficient to justify its resurrection, and, more important, sufficient to persuade prospective inductees to hand over their membership dues, it would have to look elsewhere. 

To some extent the enemy selected varied on a regional basis, depending on the local concerns of the population. The Klan thus sought, like Hitler’s later NSDAP, to be ‘all things to all men’, and, for some time before it hit upon a winning strategy, the Klan flitted from one issue to another, never really finding its feet. 

However, to the extent the Second Klan, at the national level, was organized in opposition to a single threat or adversary, it was to be found neither in Jews nor blacks, but rather in Catholics. 

Anti-Catholicism 

To modern readers, the anti-Catholicism of the Second Klan seems almost bizarre. Modern Americans may be racist and homophobic in ever-decreasing numbers, but they at least understand racism and homophobia. However, anti-Catholicism of this type, especially in so relatively recent a time period, seems wholly incomprehensible.

Indeed, the anti-Catholicism of the Second Klan is now something of an embarrassment even to otherwise unreconstructed racists and indeed to contemporary Klansmen, and is something they very much disavow and try to play down. 

Thus, anti-Catholicism, at least of this kind, is now wholly obsolete in America, and indeed throughout the English-speaking world outside of Northern Ireland – and perhaps Ibrox Football stadium for ninety minutes on alternate Saturdays for the duration of the Scottish football season. 

It seems something more suited to cruel and barbaric times, such as England in the seventeenth century, or Northern Ireland in the 1970s… or, indeed, Northern Ireland today. But in twentieth century America? Surely not. 

How then can we make sense of this phenomenon? 

Partly, the Klan’s anti-Catholicism reflected the greater religiosity of the age. In particular, the rise of the Second Klan was, at least in Wade’s telling, intimately linked with the rise of Christian fundamentalism in opposition to reforming practices (the so-called Social Gospel) in the early twentieth century.

Indeed, under its first Imperial Wizard, William Joseph Simmons, a Methodist preacher, the new Klan was initially more of a religious organization than it was a political one, and Simmons himself was later to lament the Klan’s move into politics under his successor.[11]

There was, however, also a nativist dimension to the Klan’s rabid anti-Catholicism, since, although Catholics had been present among the first settlers of North America and numbered even among the founding fathers, Catholicism was still associated with recent immigrants to the USA, especially Italians, Irish and Poles, who had yet to fully assimilate into the American mainstream. 

Catholics were also seen as inherently disloyal, as the nature of their religious affiliation (supposedly) meant that they owed ultimate loyalty, not to America, but rather to the Pope in Rome.  

This idea seems to have been a cultural inheritance from the British Isles.[12] In England prior to recent times, Catholics had long been viewed as inherently disloyal, and as desirous of overthrowing the monarchy and restoring Britain to Catholicism, as, in an earlier age, many had indeed sought to do. 

This view is, of course, directly analogous to the claim of many contemporary Islamophobes and counter-Jihadists today that the ultimate consequence of Muslim immigration into Europe will be the imposition of Shariah law across Europe.

However, even in the twenties, during the Second Klan’s brief apotheosis, their anti-Catholicism already seemed, in Wade’s words, “strangely anachronistic”, to the point of being “almost astounding” (p179).

Thus, as anti-Catholicism waned as a serious organizing force in American social and political (or even religious) life, it soon became clear that the Klan had nailed its colours to a sinking ship. Accordingly, as anti-Catholic sentiment declined among the American population at large, the Klan attempted both to jettison and to disassociate itself from its earlier anti-Catholicism.[13]

First, anti-Catholicism was simply deemphasized by the Klan in favour of new, more pressing enemies like communism, trade unionism and the burgeoning civil rights movement. 

Then, eventually, during the Sixties, the United Klans of America, the then-dominant Klan faction, announced, during “an all-out crusade for new members”, that: 

“Catholics were now welcome to join the Klan – the Communist conspiracy more than made up for the Klan’s former anti-Catholic fears of Americans loyal to a foreign power” (p328). 

The decline of anti-Catholicism in the Klan, and in American society at large, provides, then, an optimistic case-study of the remarkable speed with which (some) intergroup prejudices can be overcome.[14]

It also points to an ironic side-effect of the gradual move towards greater tolerance, inclusivity and diversity in American society – namely, even groups ostensibly opposed to this process have nevertheless been affected by it. 

In short, even the Klan has become more tolerant, inclusive and diverse. 

Losing Land and Territory

For many nationalists, racial and ethnic conflict is ultimately a matter of competition for territory and land.

It is therefore of interest that the decline of the Klan, and of white protestant identity in the USA, was itself presaged by two land sales, one in the early-twenties, when Klan membership was at a peak, and a second just over a decade later, when the decline was already well underway.

First, in the early-twenties, the Klan’s boldly envisaged Klan University had gone bankrupt. The land was sold and a synagogue was constructed on the site. 

Then, under financial pressure in the 1930s as the Depression set in, the Klan was forced to sell even its main headquarters in Atlanta. 

If selling a Klan university only to see a synagogue constructed on the same site was an embarrassment, then the eventual purchaser of the Klan headquarters was to be an even greater Klan enemy – the Catholic Church. 

Thus, the erstwhile site of the Klan’s grandly-titled Imperial Palace became a Catholic cathedral. 

Perhaps surprisingly, and presumably in an effort at rapprochement and reconciliation, the new cathedral’s hierarchy reached out to the Klan by inviting the then-Imperial Wizard, Hiram Evans, who had outmanoeuvred Simmons for control of the then-lucrative cash-cow during the Klan’s twenties heyday, to its inaugural service. 

Perhaps even more surprisingly, Evans actually accepted the invitation. Afterwards, even more surprisingly still, he was quoted as observing: 

“It was the most ornate ceremony and one of most beautiful services I ever saw” (p265). 

More beautiful even, it seems, than a cross-burning.

Evans was, not coincidentally, forced to resign immediately afterwards. However, in deemphasizing anti-Catholicism, he correctly gauged the public mood, and the Klan itself was later, if belatedly, to follow his lead. 

The Turn to Terror 

The Klan is seemingly preadapted to terror. However benign the intentions of its successive founders, each Klan descended into violence. 

If the First Klan was formed as something akin to a college fraternity, the Second Klan seems to have been conceived primarily as a money-making venture, and hence was, in principle, no more inherently violent than the Freemasons or the Elks. 

Yet the turn to terror was perhaps, in retrospect, inevitable. After all, this new Klan had been modelled on what had been, or at least become, a terrorist group (namely, the First Klan), employed masks, and, with the lynching of Leo Frank, had associated itself with violence and vigilantism from the very outset. 

Interestingly, although precise data is not easy to come by, one gets the distinct impression that, during this era of Klan activity, most of the victims of its violence were, not blacks nor even Catholics, but rather the very white protestant Christians whom the Klan ostensibly existed to protect – or, more specifically, those among this community who had somehow offended against its values, or simply opposed the Klan. 

Of course, it was blacks who continued to represent the vast majority of victims of lynching, and such lynchings continued to occur, at least in the South, although all forms of lynching had long been in decline.

However, lynchings of blacks were rarely conducted under the auspices of the Klan. After all, these were a longstanding Southern tradition that long predated the Klan’s re-emergence, and the perpetrators of such acts rarely felt the need to wear masks to conceal their identities, let alone don the elaborate apparel, and pay the requisite membership dues, of the upstart Klan.[15]

But Klan violence per se did not always deter new members. On the contrary, some seem to have been attracted by it, and Klan recruiters (‘Kleagles’) at first defiantly maintained that newspaper exposés amounted to free publicity and only helped them in their recruitment drives.

However, if Klan violence attracted some new members, it probably deterred and repelled other potential recruits, especially among the law-abiding and respectable sections of society, and likely led to some existing members leaving the group. It is therefore far from clear that Klan violence was a net positive for the organization in terms of growth and recruitment.

Moreover, if newspaper revelations of Klan violence did indeed lead to an increase in membership, they undoubtedly also led to an increase in opposition to the Klan as well, and no doubt contributed to the declining perceived respectability of the group, especially among the law-abiding white middle-classes who had formerly represented the core membership.

Yet, Wade claims, more than violence, it was the perceived hypocrisy of Klan leaders which ultimately led to the group’s demise (p254).  

Thus, it purported to champion prohibition, temperance and Christian values, but had been founded by Simmons, a rumoured alcoholic, while its (hugely successful) marketing and recruitment campaign was headed by Edward Young Clarke and Mary Elizabeth Tyler of the Southern Publicity Association, who were openly engaged in an extra-marital affair with one another. 

However, the most damaging scandal to hit the Klan, which, as we have seen, purported to champion Prohibition and the protection of the sanctity of white womanhood, combined violence, drunkenness and hypocrisy, and occurred when DC ‘Steve’ Stephenson, a hugely successful Indiana Grand Dragon, was convicted of the rape, kidnap and murder of Madge Oberholtzer, herself a white protestant woman, during a drunken binge. 

In fact, by the time of the assault, Stephenson had already split from the national Klan to form his own rival, exclusively Northern, Klan group. However, his former prominence in the organization meant that, though they might disclaim him, the Klan could never wholly disassociate themselves from him.  

It seems to have been this scandal more than any other which finally discredited the Klan in the minds of most Americans. Thus, Wade concludes: 

“The Klan in the twenties began and ended with the death of an innocent young girl. The Mary Phagan-Leo Frank case had been the spark that ignited the Klan. And the Oberholtzer-Stephenson case had put out the fire” (p247). 

Decline 

Thenceforth, the Klan’s decline was as rapid and remarkable as its rise. Thus, Wade reports: 

“In 1924 the Ku Klux Klan had boasted more than four million members. By 1930, that number had withered to about forty-five thousand… No other American movement has ever risen so high and fallen so low in such a short period” (p253). 

Indeed, even its famous 1925 march on Washington “proved to be its most spectacular last gasp”, attracting, Wade reports, “only half of the sixty thousand expected” (p249). 

“The National gathering of thirty thousand was less than what [DC Stephenson] could have mustered in Indiana alone during the Klan’s heyday” (p250). 

Not only did numbers decline, so too did the membership profile. 

Thus, initially, the new group had attracted members from across the socioeconomic spectrum of white Protestant America, or at least among all those who could afford the membership dues. Indeed, analyses of surviving membership rolls suggest that the Klan in this era was, at first, a predominantly middle-class group representing what was then the heart of Middle America

However, probably as a consequence of the revelations of violence, the respectable classes increasingly deserted the group.

“Klan defections began with the prominent, the educated and the well-to-do, and proceeded down through the middle-class” (p252). 

Thus, the stereotype of the archetypal Klansman as an uneducated, semi-literate, tattooed, beer-swilling redneck gradually took hold. 

Indeed, from as early as 1926 or so, the Klan even sought to reclaim this image as a positive attribute, portraying themselves as, in their own words, “a movement of plain people” (p252). 

But this marketing strategy, in Wade’s telling, badly backfired, since even less well-off but ever-aspirant Americans hardly wanted to associate themselves with a group that self-identified as something akin to uneducated hicks (Ibid.). 

As well as narrowing in its socioeconomic profile, Klan membership also retreated geographically. 

Thus, in its brief heyday, the Second Klan, unlike its Reconstruction-era predecessor, had boasted a truly national membership. 

Indeed, the state with the largest membership was said to be Indiana, where DC ‘Steve’ Stephenson, in the few years before his dramatic downfall, was said to have built up a one-man political machine that briefly came to dominate politics throughout the Hoosier State. 

However, in the aftermath of the fall of Stephenson and his Indiana Klan, the Klan was to haemorrhage members not just in Indiana, but throughout the North. The result was that: 

“By 1930, the Klan’s little strength was concentrated in the South. Over the next half-century the Klan would gradually lose its Northern members, regressing more and more closely towards its Reconstruction ancestor until, by the 1960s, it would stand as a near-perfect replica” (p252). 

Thenceforth, the Klan was to remain, once again, a largely Southern phenomenon, with what little numerical strength it retained overwhelmingly concentrated in the states of the former Confederacy. 

The Klan and the Nazis – A Match Made in Hell? 

In between recounting the Klan’s decline, Wade also discusses its supposed courtship of, or by, the pro-Nazi German-American Bund

Actually, however, a careful reading of Wade’s account suggests that he exaggerates the extent of any such association. 

Thus, it is notable, if bizarre, that, in Wade’s own telling, the Bund’s leader, German-born Fritz Julius Kuhn, in seeking the “merging of the Bund with some native American organization who would shield it from charges of being a ‘foreign’ agency”, had first set his sights on that most native and American of “native American organizations” – namely, Native Americans (p269-70). 

When this quixotic venture inevitably ended in failure, if only due to “profound indifference on the Indians’ part”, only then did the rebuffed Kuhn turn his spurned attentions reluctantly to the Klan (p270). 

Yet the Klan seemed to have been almost as resistant to Kuhn’s advances as the Native Americans had been. Thus, Wade quotes Kuhn, hardly an unbiased source, as himself admitting, somewhat ambiguously:

“The Southern Klans did not want to be known in it… So the negotiations were between representatives of the Klans in New Jersey and Michigan, but it was understood that the Southerners were in” (p270). 

Yet, by this time, in Wade’s own telling, the Klan was extremely weak in Northern states such as New Jersey and Michigan, and what little numerical strength it retained was concentrated in the Southern states of the former Confederacy. 

This suggests that it was only the already marginalized northern Klan groups who, bereft of other support, were willing to entertain the notion of an alliance with the Bund. 

If the Southern Klan leadership was indeed aware of, and implicitly approved, the link, which itself seems uncertain, it was nevertheless clear that they wanted to keep any such association indirect and at an arm’s length, hence maintaining plausible deniability

This is perhaps the only way we can make sense of Kuhn’s otherwise apparently contradictory acknowledgement that, on the one hand, “the Southern Klans did not want to be known in it”, while, on the other, “it was understood that the Southerners were in” (p270). 

Thus, when negative publicity resulted from the joint Klan-Bund rally in New Jersey, the national (i.e. Southern) Klan leadership was quick to distance itself from and disavow any notion of an alliance, promptly relieving the New Jersey Grand Dragon of his office.

On reflection, this is little surprise.

For one thing, German-Americans, especially those willing to flagrantly flaunt their ‘dual loyalty’ by joining a group like the German-American Bund, were themselves exactly the type of hyphenated-Americans that the 100% Americans of the Klan professed to despise.

Indeed, though they may have been white and (mostly) protestant, German-Americans’ own integration into the American mainstream was, especially after the anti-German sentiment aroused during the First World War, still very much incomplete.

Moreover, anti-German prejudice was likely to have been particularly prevalent in the South, where, contrary to prevailing notions of white Southern racists seeking common cause with German Nazis, support for Britain and for American involvement in WWII on Britain’s side seems to have been strongest.

Today, of course, we naturally tend to think of Nazis and the Klan as natural allies, both being, after all, that most reviled subspecies of humanity – namely, white racists.

However, besides racialism, the Klan and the Nazis actually had surprisingly little in common. 

After all, the Klan was a protestant fundamentalist group opposed to Darwinism and the teaching of evolutionary theory in schools.

Hitler, in contrast, was a reputed social Darwinist, who was reported by his confidants as harbouring a profound antipathy to the Christian faith, albeit one he kept out of his public pronouncements for reasons of political expediency, and some of whose followers even championed a return to Germanic paganism.[16]

Indeed, even their shared racialism was directed primarily towards different outgroups.

In Germany, blacks, though indeed persecuted by the Nazis, were few in number, and hence not a major target of Nazi propaganda, animosity or persecution – and nor were Catholics as such among the groups targeted for persecution by the Nazis, Hitler himself having been raised as a Catholic in his native Austria, where Catholicism was the majority religion.[17]

Yet, if Catholics were not among the groups targeted for persecution by the Nazis, members of secret societies like the Klan very much were. 

Thus, among the less politically-fashionable targets for persecution by the Nazis were both the Freemasons and indeed the closest thing Germany itself ever had to a Ku Klux Klan. 

Indeed, in 1923, a Klan-like group, “the German Order of the Fiery Cross”, had been founded in Germany, in apparent imitation of the Klan, by an expatriate German on his return to the Fatherland from America (p266). 

Yet, ironically, it was Hitler himself who was ultimately to ban and suppress this German Klan imitator upon his coming to power (p267). 

Death and Taxes – The Only Certainties in Life 

The Second Klan was finally destroyed, however, not by declining membership, scandals, violent atrocities, bad publicity and inept brand-management, nor even by government prosecution, though all these factors did indeed undoubtedly play a part in weakening it.  

Rather, the final nail in the Klan’s coffin was driven in by the taxman. 

In 1944, the Bureau of Internal Revenue demanded payment in respect of unpaid taxes due on the profits earned from subscription dues during the Klan’s brief but lucrative 1920s membership boom (p275). 

The Klan, which had been haemorrhaging members since even before the Depression and which, unlike the economy as a whole, had yet to recover, was already in a dire financial situation. It could never hope to pay the monies demanded by the government, and was instead forced to declare bankruptcy (p275). 

Thenceforth, the Klan was no more. 

Ultimately, then, the government destroyed the Klan the same way it had Al Capone – through failure to pay their taxes. 

The Third Klan/s 

The so-called Third Klan was really not one Klan, but many different Klans, each not only independent of one another, but also often in fierce competition with one another for members and influence. 

They filled the vacuum left by the defunct Second Klan and competed to match its size, power and influence – though none were ever to succeed. 

Moreover, the different Klan groups varied greatly in their ethos and activity. Thus, Wade reports: 

“Some Klans were quietly ineffective, some were violent and some were borderline psychotic” (p302). 

With no one group maintaining a registered trademark over the Klan brand, inevitably the atrocities committed by one group ended up discrediting even other groups with no connection to them. The Klan ‘brand’ was thus irretrievably damaged, even among those who might otherwise be attracted to its ideology and ethos.[18] 

Indeed, the plethora of different groups was such that even Klansmen themselves were confused, one Dragon complaining: 

“The old countersigns and passwords won’t work because all Klansmen are strangers to each other” (p302). 

Increasingly, opposition to the burgeoning African-American civil rights movement, rather than to Catholicism or communism, now seems to have become the Klan’s chief preoccupation and the primary basis upon which Klaverns, and Kleagles, sought to attract recruits. 

However, respectable opposition to desegregation throughout the South was largely monopolized by the Citizens’ Councils.

Indeed, in Wade’s telling, “preventing a build-up of the Ku Klux Klan” was, quite as much as opposing desegregation, one of the principal objectives for which the Citizens’ Councils had been formed, since “violence was bad for business, and most of the council leaders were businessmen” (p299). 

If this is true, then perhaps the Citizens’ Councils were more successful in achieving their objectives than they are usually credited as having been. Segregation, of course, was gone and never came back – but, then again, neither, to any substantial degree, did the Klan. 

However, in practice, Wade reports, the main impact of the Citizens’ Councils on the Klan was: 

“Not so much eliminating the Klan as leaving it with nothing but the violence-prone dregs of Southern white society” (p302). 

Thus, the Klan’s image, and the characteristic socioeconomic status of its membership profile, declined still further. 

The electoral campaigns of the notorious (sometime-)segregationist and governor of Alabama George Wallace had a similar effect. Thus, Wade reports: 

“Wallace’s campaigns… swallowed a lot of disaffected Klansmen. In fact, Wallace’s campaigns offered them the first really viable alternative to the Klan” (p364). 

Political Cameos and Reinventions 

Here in Wade’s narrative, the myriad disparate Klan groups inevitably fade into the background, playing a largely reactive and often violent, but nevertheless largely ineffective and often outright counterproductive, role in opposing desegregation. 

Instead, the starring role is usurped by, in Wade’s own words: 

“Two men who were masters of the electronic media: an inspired black minister, Martin Luther King, and a pragmatic white politician, JFK, who would work in an uneasy but highly productive tandem” (p310). 

Actually, in my view, it would be more accurate to say that centre stage was taken by two figures who are today vastly overrated on account of their premature deaths at the hands of assassins, and their consequent elevation to martyr status. 

In fact, however, while Wade’s portrait of King is predictably hagiographic, his portrayal of Kennedy is actually refreshingly revisionist. 

Far from the liberal martyr of contemporary left-liberal imagining, Kennedy was, in Wade’s telling, only a “pragmatic white politician”, and moreover only a rather late convert to the African-American civil rights movement. 

Indeed, before he first took office, Wade reports, Kennedy had actually endorsed the Dunning School interpretation of the Reconstruction era, was critical of Eisenhower for having sent troops into Arkansas to enforce school desegregation, and only reluctantly, when his hand was forced, himself sent the National Guard into Alabama with the same objective (p317-22). 

Meanwhile, another political figure making a significant cameo appearance in Wade’s narrative, ostensibly on the opposite side of the debate over desegregation, is the notorious segregationist governor of Alabama, George Wallace. 

Yet Wade’s take on Wallace is, in many respects, as revisionist as his take on Kennedy. Thus, far from a raving racist and staunch segregationist, Wade argues: 

“In retrospect… no one used and manipulated the Klansmen more than Wallace. He gave them very few rewards for their efforts on his behalf: often his approval was enough. And in spite of his fiery cant and cries of ‘Never!’ that so thrilled Klansmen, Wallace was a former judge who well understood the law – especially how far he could bend it” (p322). 

Thus, Wade reports, while it is well-known that Wallace, in a famous photo op, blocked the entrance to the University of Alabama preventing black students from entering, what is rather less well-known is that: 

“When the marshals asked for the black students to be admitted in the afternoon, Wallace quietly stepped aside. Instead of being recognized, at best, as a practical politician or, at worst, a pompous coward, Wallace was instead hailed by Klansmen as a dauntless hero” (p322). 

Thus, if Kennedy was, in Wade’s telling, “a pragmatic white politician”, then Wallace emerges as an outright political chameleon and shameless opportunist. 

As further evidence for this interpretation, what Wade does not get around to mentioning is that, in his first run for the governorship of Alabama in 1958, Wallace had actually spoken out against the Klan and had even been endorsed by the NAACP, only after his defeat vowing, as he was memorably quoted as observing, ‘never to be outniggered again’, and hence reinventing himself as an (ostensible) arch-segregationist. 

Neither does Wade mention that, in his last run for governor in 1982, reinventing himself once again as a born-again Christian repentant for his past, Wallace actually managed to win over 90% of the black vote. 

Yet even Wallace’s capacity for political reinvention is outdone by that of one of his supporters and speech-writers, former Klan leader Asa ‘Ace’ Carter, a man so notorious for racism that even the outspoken segregationist Wallace was to deny ever employing him, but who was supposedly responsible for penning the words to Wallace’s infamous “segregation now, segregation tomorrow, segregation forever” speech. 

Expelled from a Citizens’ Council for extremism, Carter had then founded, and briefly reigned as tin-pot führer of, one of the most violent of Klan outfits – “the Original Ku Klux Klan of the Confederacy, which resembled a cell of Nazi storm troopers” (p303). 

This group was responsible for one of the worst Klan atrocities of the period, namely the literal castration of a black man, whom they: 

“Castrated… with razor blades; and then tortured… by pouring kerosene and turpentine over his wounds” (p303). 

This gruesome act was, according to a Klan informant, performed for no better reason than as a “test of one of the members’ mettle before being elected ‘captain of the lair’” (p303). 

The group was also, it seems, rather too violent even for its own good, and ultimately violently imploded when, in a dispute over financing and his alleged misappropriation of funds, Carter shot two fellow members, a shooting for which, it seems, he never stood trial (Ibid.).

Yet what Wade does not get around to mentioning is that Asa ‘Ace’ Carter was also, like Wallace, later to successfully reinvent himself, and achieve fame once again, this time as Forrest Carter, an ostensibly half-Native American author who penned such hugely successful novels as The Rebel Outlaw: Josey Wales (subsequently made into the successful motion picture, The Outlaw Josey Wales, directed by and starring Clint Eastwood) and The Education of Little Tree, an ostensible autobiography of growing up on an Indian reservation, and a book so sickeningly sentimental that it was even recommended and endorsed by Oprah Winfrey. 

“The David Duke Show” 

By the 1970s, open support for white supremacy and segregation was in decline, even among white Southerners. This, together with Klansmen’s involvement in such atrocities as the horrendous 16th Street Baptist Church bombing, might have made it seem that the Klan brand was irretrievably damaged and in terminal decline, never again to play a prominent role in American social or political life. 

Yet, perhaps surprisingly, the Klan brand did manage one last hurrah in the 1970s, this time through the singular talents of one David Duke. 

Duke was to turn the Klan’s very infamy to his own advantage. Thus, he exploited the provocative imagery of the Klan to attract media attention, but then, having attracted that attention, came across as much more eloquent, reasonable, intelligent and clean-cut than anyone ever expected a Klansman to be – which, in truth, isn’t difficult! 

The result was a media circus that one disgruntled Klansman aptly dismissed as “The David Duke Show” (p373). 

It was the same trick that George Lincoln Rockwell had used a generation before, though, whereas Rockwell used Nazi imagery (e.g. swastikas, Nazi salutes) to attract media attention, Duke instead used the imagery of the Klan (e.g. white sheets, burning crosses).

If Duke was successor to Rockwell, then Duke’s own contemporary equivalent, fulfilling a similar niche for the contemporary American media as the handsome, eloquent, go-to face of white nationalism, is surely Richard Spencer. Indeed, if rumours are to be believed, Spencer even has a similar penchant to Duke for seducing the wives and girlfriends of his colleagues and supporters. 

Such behaviour, along with his lack of organizational ability, was, according to Wade, the main reason that Duke alienated much of his erstwhile support, haemorrhaging members almost as fast as he attracted them. 

Many such defectors would go on to form prominent rival groups. They included Tom Metzger, a TV repairman, who split from Duke to form a more openly militant group calling itself White Aryan Resistance (known by the memorable acronym ‘WAR’), and who achieved some degree of media infamy by starring in multiple television documentaries and talk-shows, despite being bankrupted by a legal verdict in which he and his organization were somehow held financially liable for a murder in which they seem to have had literally no involvement.

However, for Wade, the most important defector was, not Metzger, but rather Bill Wilkinson, perhaps because, unlike Metzger, who, on splitting from Duke, abandoned the Klan name, Wilkinson was to set up a rival Klan group, successfully poaching members from Duke.

However, lacking Duke’s eloquence and good looks, Wilkinson had instead to devise another strategy in order to attract media attention and members. The strategy he hit upon was the opposite of Duke’s measured eloquence and moderation, namely “taking a public stance of unbridled violence” (p375).

This, together with the fact that he was nevertheless somehow able to evade prosecution, led to the allegation that he was a state agent and his Klan an FBI-sponsored honey trap, an allegation only reinforced by the recent revelation that he is now a multi-millionaire in the multiracial utopia of Belize. 

Besides openly advocating violence, Wilkinson also hit upon another means of attracting members. Thus, Wade reports, he “perfected a technique that other Klan leaders belittled as ‘ambulance chasing’” (p384): 

“Wilkinson… traversed the nation seeking racial ‘hot spots’… where he can come into a community, collect a large amount of initiation fees, sell a few robes, sell some guns… collect his money and be on his way to another ‘hot spot’” (p384). 

This is, of course, ironically the exact same tactic employed by contemporary black race-baiters like Al Sharpton and the Black Lives Matter movement. 

Owing partly to the violent activities of rival Klan groups such as Wilkinson’s, from whom he could never hope to wholly disassociate himself, Duke himself eventually came to see the Klan name, and its associated baggage, as a liability. 

One by one, he jettisoned these elements, styling himself ‘National Director’ rather than Imperial Wizard, wearing a suit rather than a white sheet and eventually giving up even the Klan name itself. Finally, in what was widely perceived as an act of betrayal, Duke was secretly recorded offering to sell his membership rolls to Wilkinson, his erstwhile rival and enemy (p389-90). 

In place of the Klan, Duke sought to set up what he hoped would be a more mainstream and respectable group, namely the National Association for the Advancement of White People, or NAAWP, one of several successive white advocacy organizations to adopt this derivative and rather unimaginative name.[19]

Yet, on abandoning the provocative Klan imagery that had first brought him to the attention of the media, Duke suddenly found media attention much harder to come by. Wade concludes:

“Duke had little chance at making a go of any Klan-like organization without the sheets and ‘illuminated crosses’. Without the mumbo-jumbo the lure of the Klan was considerably limited. Five years later the National Association for the Advancement of White People hadn’t got off the ground” (p390). 

Duke was eventually to re-achieve some degree of notoriety as a perennial candidate for elective office, initially with some success, even briefly holding a seat in the Louisiana state legislature and winning a majority of the white vote in his 1991 run for the Governorship of Louisiana.

However, despite abandoning the Klan, Duke was never to escape its shadow. Thus, even forty years after abandoning the Klan name, Duke was still to find his name forever prefixed with the title ‘former Klansman’ or ‘former Grand Wizard’, an image he was never able to jettison. 

Today, still railing against the Jews to anyone still bothering to listen, his former boyish good looks, augmented by cosmetic surgery, having long previously faded, Duke cuts a rather lonely figure, marginal even among the already marginal alt-right. In his most recent electoral campaign, an unsuccessful run for a Senate seat, he managed to pick up only a miserly three percent of the vote, a far cry from his heyday, when he had won a majority of the white vote throughout Louisiana. 

Un-American Americanism 

Where once Klansmen could unironically claim to stand for 100% Americanism, now, were not the very word ‘un-American’ so tainted by McCarthyism as to sound almost un-American in itself, the Klan could almost be described as a quintessentially un-American organization. 

Indeed, interestingly, Wade reports that there was pressure on the House Un-American Activities Committee to investigate the Klan from even before the committee was first formed. Thus, Wade laments: 

“The creation of the Dies Committee had been urged and supported by liberals and Nazi haters who wanted it used as a congressional forum against fascism. But in the hands of chairman Martin Dies of Texas, an arch-segregationist and his reactionary colleagues… the committee instead had become an anachronistic pack of witch hunters who harassed labor leaders… and discovered ‘communists’ in every imaginable shape and place” (p272).

Thus, Wade’s chief objection to the House Un-American Activities Committee seems to be, not that they became witch hunters, but that they chose to hunt, to his mind, the wrong coven of witches. Instead of going after the commies, they should have targeted the Nazis, fascists and Klansmen instead, who, in his misguided mind, evidently represented the real threat.

Yet what Wade does not mention is that perhaps the most prominent of the heroic “liberals and Nazi haters” who advocated for the formation of the HUAC in order to persecute fascists and Klansmen was congressman Samuel Dickstein, who, as joint-chairman of the ‘Special Committee on Un-American Activities’, the precursor to the HUAC, from 1934 to 1937, did indeed use the Committee to target fascists, albeit mostly imaginary ones. Dickstein is himself now known to have been a paid Soviet agent, hence proving that McCarthyist concerns regarding communist infiltration and subversion at the highest levels of American public life were no delusion.

Ultimately, however, Wade was to have his wish. Thus, the Klan did indeed fall victim to the same illiberal and sometimes illegal FBI cointelpro programme of harassment as more fashionable victims on the left (or ostensibly on the left), such as Martin Luther King, the Nation of Islam, and the Black Panther Party (p361-3).

Indeed, according to Wade, it was actually the Klan who were the first victims of this campaign of FBI harassment, with more fashionable victims of the left being targeted only later. Thus, Wade writes:

“After developing Cointelpro for the Klan, the FBI also used it against the Black Panthers, civil rights leaders, and antiwar demonstrators” (p363).[20]

Licence to Kill?

The Klan formerly enjoyed a reputation something like that of the Mafia, namely as a violent and dangerous group whom a person crossed at their peril, since, again like the Mafia, they had a proven track record of committing murder and getting away with it, largely through their corrupt links with local law enforcement in the South, and the unwillingness of all-white Southern juries to convict Klansmen accused of violent crimes.[21]

Today, however, this reputation has long since been lost.

Indeed, if a suspect in a racist murder today were outed as a Klansman, this would likely unfairly prejudice a jury of any ethnic composition, anywhere in the country, against him, arguably to the point of denying him any chance of a fair trial. 

Thus, when aging Klansmen such as Edgar Ray Killen, Thomas Blanton and Bobby Frank Cherry were belatedly put on trial and convicted in the 2000s for killings committed in the early 1960s, some forty years previously, one rather suspects that they received no fairer a trial than they had, or would have, received before the all-white juries of the 1960s American South. The only difference was that now the prejudice was against them rather than in their favour.

Thus, today, we have come full circle. Quite when the turning point was reached is a matter of conjecture.

Arguably, the last incident of Klansmen unfairly getting away with murder was the so-called Greensboro massacre in 1979, when Klansmen, neo-Nazis and other white nationalist activists shot up an anti-Klan rally organized by radical left Maoist labour agitators in North Carolina. 

Here, however, if the all-white jury was indeed prejudiced against the victims of this attack, it was not because they were blacks (all but one of the five people killed were actually white), but rather that they were ‘reds’ (i.e. communists).[22]

Today, then, the problem is not with all-white juries in the South refusing to convict Klansmen, but rather with majority-black juries in urban areas across America refusing to convict black defendants, especially on police evidence, no matter how strong the case against them, as occurred, for example, most famously in the OJ case (see also Paved with Good Intentions: p43-4; p71-3). 

Klans Today 

Wade’s ‘The Fiery Cross’ was first published in 1987. It is therefore not, strictly speaking, a history of the Klan for the entirety of its existence right up to the present day, since Klan groups have continued to exist since this date, and indeed continue to exist in modern America even today. 

However, Wade’s book nevertheless seems complete, because such groups have long since ceased to have any real significance in American political, social and cultural life, save as media bogeymen and contemporary folk devils.

In its brief 1920s heyday, the Second Klan could claim to play a key role in politics, even at the national level. 

Wade even claims, dubiously as it happens, that Warren G Harding was inducted into the organization in a special and secret White House ceremony while in office as President (p165).

Certainly, they helped defeat the candidacy of Al Smith, on account of his Catholicism, in 1924 and again in 1928 (p197-99). 

Over half a century later, during the 1980 presidential election campaign, the Klan again made a brief cameo, when one candidate sought to associate his opponent with the Klan and thereby discredit him.

Thus, Reagan, himself accused of deliberately employing a coded racist dog whistle by praising “states’ rights” during a speech in Mississippi, responded by accusing his opponent, inaccurately as it happens, of opening his campaign in the city that “gave birth to and is the parent body of the Ku Klux Klan”.

This led Grand Dragon Bill Wilkinson to declare defiantly: 

“We’re not an issue in this Presidential race because we’re insignificant” (p388).

Yet what Wilkinson failed to grasp, or at least refused to publicly acknowledge, was that the Klan’s role was now wholly negative. Neither candidate had any actual Klan links; each sought to link the Klan only with his opponent.

Whereas in the 1920s, candidates for elective office had actively and openly courted Klan support and endorsement, by the time of the 1980 Presidential election to have done so would have been electoral suicide.

The Klan’s role, then, was as bogeymen and folk devils – roughly analogous to that played by Willie Horton in the 1988 presidential campaign; the role NAMBLA plays in the debate over gay rights; or, indeed, the role communists played during the First and Second Red Scares.[23]

Indeed, although in modern America lynching has fallen into disfavour, one suspects that, if it were ever to re-emerge as a popular American pastime and application of participatory democracy to the judicial process, then among the first contemporary folk devils to be hoisted from a tree, alongside paedophiles and other classes of sex offender, would surely be Klansmen and other unreconstructed white racists.

Likewise, today, if a group of Klansmen is permitted to march in any major city in America, a police presence is required, not to protect innocent blacks, Jews and Catholics from rampaging Klansmen, but rather to protect the Klansmen themselves from angry assailants of all ethnicities, but mostly white.

Indeed, the latter, styling themselves Antifa (an abbreviation of ‘anti-fascist’), despite their positively fascist opposition to freedom of speech, expression and assembly, have even taken, like Klansmen of old, to wearing masks to disguise their identities.

Perhaps anti-masking laws, first enacted to defeat the First Klan, and later resurrected to tackle later Klan revivals, should be revived once again, but this time employed, without prejudice, against the contemporary terror, and totalitarianism, of the left.

Endnotes

[1] The only trace of possible illiteracy in the name is found in the misspelling of ‘clan’ as ‘klan’, presumably, again, for alliterative purposes, or perhaps reflecting a legitimate spelling in the nineteenth century when the group was founded.

[2] The popular alt-right meme that there are literally no white-on-black rapes is indeed untrue, and reflects the misreading of a table in a government report that actually involved only a small sample. In fact, the government does not currently release data on the prevalence of interracial rape. However, there is no doubt that black-on-white rape is much more common than white-on-black rape. Similarly, in the US prison system, where male-male rape is endemic, such assaults disproportionately involve non-white perpetrators and white victims, as discussed in a Human Rights Watch report.

[3] If Klan chivalry did not extend to black women, neither, evidently, did it extend even to severely handicapped black males. Thus, the most memorable and remarkable figure to emerge in this part of Wade’s narrative is not a Klansman, but rather a victim of Klan violence, namely black pastor and political leader, Elias Hill.
The latter had been born into slavery, yet, having lost the use of both his arms and legs through childhood illness, had been freed by his owner, who saw little profit to be had from a handicapped slave. Yet, in adulthood, Hill overcame his disability to become an unlikely yet “influential and powerful leader” among the freedmen of York County, South Carolina (p74). 
As a consequence, Hill found himself visited by hooded Klan nightriders, who dragged him from his home by his withered limbs, beat him with a horse whip and threatened to throw him in a nearby river unless he agreed to renounce the Republican Party (p75). 
After this ordeal, Hill abandoned any hope for black social, political or economic advancement in America. Instead, he, along with other black families, departed for Liberia on the West African coast, with the aid of the American Colonization Society, which aimed to resettle black Americans in Africa, Hill declaring to a congressional committee before he left:

“We do not believe it is possible from the past history and present aspect of affairs, for our people to live in this country peaceably and educate and elevate their children to any degree which they desire. They do not believe it is possible. Neither do I” (p75). 

In this assessment Hill and his fellow black emigrants may have been correct. However, they were not to find, or create, an egalitarian utopia in Liberia either. 
On the contrary, in a definitive proof that ethnic conflict, exploitation, prejudice and oppression know no colour, but rather are universal phenomena and no exclusive monopoly of the white race, the black American freedmen who colonized Liberia then proceeded to oppress, dispossess, exploit and enslave the native African blacks whom they encountered, just as white Americans had dispossessed Native Americans and enslaved black Africans in the Americas.

[4] The then-president Woodrow Wilson (who, in addition to being a politician, was also a noted historian of the Reconstruction period, whose five-volume book, A History of the American People, is actually quoted in several of the movie’s title cards) was later quoted as describing the movie, in some accounts the first moving picture that he had ever seen, as: 

“History [writ] with lightning. My only regret is that it is all so terribly true” (p126). 

However, during the controversy following the film’s release, Wilson himself later issued a denial that he had ever uttered any such words, insisting that he had only agreed to the viewing as a “courtesy extended to an old acquaintance” and that:

“The President was entirely unaware of the character of the play before it was presented and has at no time expressed his approbation of it” (p137).

This claim that Wilson was “entirely unaware of the character of the [film] before it was presented” is, however, regarded as doubtful by many historians given the earlier notoriety of the novel and play upon which the film had been based, and indeed of its author, Thomas Dixon, the very “old acquaintance” as a favour to whom Wilson had, on his own account, agreed to the screening.

[5] Like so many other aspects of what is today considered Klan ritual, there is no evidence that cross-burning, or cross-lighting as devout Christian Klansmen prefer to call it, was ever practised by the original Reconstruction-era Klan. However, unlike other aspects of Klan ritualism, it had been invented, not by Simmons, but by novelist Thomas Dixon (by way of Walter Scott’s The Lady of the Lake), in imitation of an ostensible Scottish tradition, for his book, The Clansman: A Historical Romance of the Ku Klux Klan, upon which novel the movie Birth of a Nation was based. The new Klan was eventually granted an easement in perpetuity over Stone Mountain, allowing it to repeat this ritual.

[6] A conviction may be regarded as unsafe, and even as a wrongful conviction, even if we still believe the defendant might be guilty of the crime with which s/he is charged. After all, the burden is on the prosecution to prove that the defendant is guilty beyond reasonable doubt. If there remains reasonable doubt, then the defendant should not have been convicted.
Steve Oney, who researched the case intensively for his book, And the Dead Shall Rise, concedes that “the case [against Frank] is not as feeble as most people say it is”, but nevertheless concludes that Frank was probably innocent, “but there is enough doubt to leave the door ajar” (Berger, Leo Frank Case Stirs Debate 100 Years After Jewish Lynch Victim’s Conviction, Forward, August 30, 2013).
Similarly, Albert Lindemann, a professor of history at the University of California who has published multiple works on the history of anti-Semitism, including an in-depth study of the Frank case, concludes that, although Jim Conley, the other main suspect, was “most likely the actual murderer”, and “there were enough holes in the prosecution’s case that Frank’s guilt was not demonstrated beyond a reasonable doubt”, nevertheless Frank’s trial was “not quite the travesty of justice that many have believed it to be” (Esau’s Tears: p381, p382).

[7] The ADL’s role in Wade’s narrative does not end here, since the ADL would later play a key role in fighting later incarnations of the Klan.

[8] Indeed, even from a modern racial egalitarian perspective, the era is arguably misnamed. After all, from a racial egalitarian perspective, the plantation era, when slavery was still practised, was surely worse, as surely was the period of bloody conflict between Native Americans and European colonists.

[9] Even among extreme racists, support for slavery is today rare. Therefore, few American racists openly pine for a return to the plantation era. Segregation is, then, the next best thing, short of the actual expulsion of blacks back to Africa. Thus, it is common to hear white American racialists hold up early twentieth-century America as a lost Eden. For example, many blame the supposed decline of the US public education system on desegregation.

[10] It is thus a myth that oppressed peoples invariably revolt against their oppressors. In reality, truly oppressed peoples, like blacks in the South in this period, tend to maintain a low profile precisely so as to avoid incurring the animosity of their oppressors. It is only when they sense weakness in their oppressors, or their ostensible oppressors, that insurrections tend to occur. This then explains the paradox that black militancy in America seems to be inversely proportional to the actual extent of black oppression.
Thus, the preeminent black leader in America at the height of the Jim Crow era was Booker T Washington, by modern standards a conservative, if not an outright Uncle Tom, who believed that blacks must focus on self-improvement and education in order to prove themselves worthy of full participation in American society before they could demand an end to discrimination. Yet, today, when blacks are the beneficiaries, not the victims of discrimination, in the form of what is euphemistically called affirmative action, and it is whites who are ‘walking on eggshells’ and in fear of losing their jobs if they say something politically incorrect on the subject of race, American blacks are seemingly more militant and belligerent than ever, as the recent and hugely destructive BLM riots have proven only too well. 

[11] Simmons’ disavowal of the Klan’s move into politics may well have been disingenuous and reflected the fact that, by this time, Simmons had lost control of the Klan to a rival, Hiram Evans.

[12] Thus, in Ireland, the Protestant minority opposed ‘Home Rule’ for Ireland (a form of devolution, or self-government, that nevertheless fell short of full independence) on the grounds that it would supposedly amount, in effect, to ‘Rome Rule’, due to the Catholic majority in Ireland, a prediction that, under the Irish Free State formed after partition, was shown to be not entirely unwarranted.

[13] Interestingly, unlike the Klan, another formerly anti-Catholic American fraternal order, the Junior Order of United American Mechanics, successfully jettisoned both its earlier anti-Catholicism and an association with violence (which it also shared with the Klan), reinventing itself as a respectable, non-sectarian beneficent group. However, the Klan was ultimately unable to achieve the same transformation. 

[14] Of course, other forms of intergroup prejudice have been altogether more intransigent, long-lasting and thus-far impervious to eradication. Indeed, even anti-Catholicism itself had a long history. Pierre van den Berghe, in his excellent The Ethnic Phenomenon (which I have reviewed here), argues that assimilation is possible only in specific circumstances, namely when the group to be assimilated is: 

“Similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (The Ethnic Phenomenon: p219). 

Thus, those hoping that other forms of intergroup conflict (e.g. black-white conflict in the USA, or indeed the continuing animosity between Catholics and Protestants in Northern Ireland) can be similarly overcome in coming years are well-advised not to hold their breaths.

[15] Thus, in the many often graphic images of lynchings of black victims accessible via the internet, I have yet to find one in which the lynch-mobs are dressed in the ceremonial regalia of the Klan. On the contrary, far from wearing masks, the perpetrators often proudly face the camera, evidently feeling no fear of retribution or legal repercussions for their vigilantism.

[16] The question of the religious beliefs, if any, of Hitler is one of some controversy, which I have discussed elsewhere. Certainly, many leading figures in the National Socialist regime, including Martin Bormann and Alfred Rosenberg, were hostile to Christianity. Likewise, Hitler is reported as making anti-Christian statements in private, both in Hitler’s Table Talk and by such confidants as Speer in his memoirs. However, Hitler kept such sentiments out of his public pronouncements and speeches for fear of alienating those Christians who numbered among his constituency of supporters, let alone provoking opposition from the established churches, and even forbade his principal associates, such as Göring and Goebbels, from leaving the church. Thus, Hitler proposed postponing his Kirchenkampf, or settling of accounts with the churches, until after the war, not wishing to fight enemies on multiple fronts.

[17] To clarify, it has been claimed that the Catholic Church faced persecution in National Socialist Germany. However, this persecution did not extend to individual Catholics, save those, including some priests, who opposed the regime and its policies, in which case the persecution reflected their political activism rather than their religion as such. Although Hitler was indeed privately hostile to Christianity, Catholicism very much included, Nazi conflict with the Church seems to have reflected primarily the fact that the Nazis, as a totalitarian regime, sought to control all aspects of society and culture in Germany, including those over which the Church had formerly claimed hegemony (e.g. education).

[18] In a later era, this was among the reasons given by David Duke in his autobiography for his abandonment of the Klan brand, since his own supposedly entirely non-violent Klan faction was, he complained, invariably confused with, and tarred with the same brush as, other more violent Klan groups through guilt by association.

[19] Duke later had a better idea for a name for his organization – namely, the National Organization For European American Rights, which he intended to be known by the memorable acronym NO-FEAR. Unfortunately for him, however, the clothing company that had already registered this name as a trademark objected, and forced him to change the group’s name to the rather less memorable European-American Unity and Rights Organization (or EURO).

[20] Certainly, the Klan was henceforth a major target of the FBI. Indeed, in a sting operation apparently funded by the ADL, the FBI was even accused of provoking one Klan bombing in which a woman, Kathy Ainsworth, herself one of the bombers and an active, militant Klanswoman, was killed (p363). The FBI was also implicated in another Klan killing, namely that of civil rights campaigner Viola Liuzzo, since an FBI agent was present with the killers in the car from which the fatal shots were fired (p347-54). Indeed, Wade reports that “about 6 percent of all Klansmen in the late 1960s worked for the FBI” (p362).

[21] Thus, former Klan leader David Duke, in his autobiographical My Awakening, reports that, when he and other arrestees were outed as Klansmen in a Louisiana prison, the black prisoners, far from attacking them, were initially cowed by the revelation: 

“At first, it seemed my media reputation intimidated them. The Klan had a reputation, although undeserved, like that of the mafia. Some of the Black inmates obviously thought that if they did anything to harm me, a ‘Godfather’ type of character, they might soon end up with their feet in cement at the bottom of the Mississippi.”

[22] All but one of those killed, Wade reports, were leaders of the Maoist group responsible for organizing the rally (p381). Wade uses this to show that the violence was premeditated, having been carefully planned and coordinated by the Klansmen and neo-Nazis. However, the fact that the victims were leaders of a communist group would also mean that they were hardly likely to be viewed as entirely innocent victims by conservative white jurors in North Carolina. 
In fact, the victims were indeed highly unsympathetic, at least from the perspective of white Southern jurors, not merely on account of their radical leftist politics, but also because they had seemingly deliberately provoked the Klan attack, openly challenging the Klan to attend their provocatively titled ‘Death to the Klan’ rally (p379), and, though ultimately heavily outgunned, they themselves seem to have initiated the violence by attacking the cars carrying Klansmen with placards (p381).

[23] The Klan was to reprise this role once again during the recent Trump presidential campaigns, as left-wing journalists trawled the South in search of grizzled, tattooed, self-appointed ‘Grand Dragons’ and tinpot führers willing, presumably in return for a few drinks, to offer their unsolicited endorsement of the Trump candidature and thereby, in the journalists’ own minds, and those of most of their moronic readership, discredit him through guilt-by-association.

‘Alas Poor Darwin’: How Stephen Jay Gould Became an Evolutionary Psychologist and Steven Rose a Scientific Racist

Steven Rose and Hilary Rose (eds.), Alas Poor Darwin: Arguments against Evolutionary Psychology (London: Jonathan Cape, 2000).

‘Alas Poor Darwin: Arguments against Evolutionary Psychology’ is an edited book composed of multiple essays by different authors, from different academic fields, brought together for the ostensible purpose of critiquing the emerging science of evolutionary psychology. This multiple authorship makes it difficult to provide an overall review, since the authors’ approaches to the topic differ markedly.

Indeed, the editors admit as much, conceding that the contributors “do not speak with a single voice” (p9). This seems to be a tacit admission that they frequently contradict one another. 

Thus, for example, feminist biologist Anne Fausto-Sterling attacks evolutionary psychologists such as Donald Symons as sexist for arguing that the female orgasm is a mere by-product of the male orgasm and not an adaptation in itself, complaining that, according to Symons, women “did not even evolve their own orgasms” (p176). 

Yet, on the other hand, scientific charlatan Stephen Jay Gould criticizes evolutionary psychologists for the precise opposite offence, namely for (supposedly) viewing all human traits and behaviours as necessarily adaptations and ignoring the possibility of by-products (p103-4).

Meanwhile, some chapters are essentially irrelevant to the project of evolutionary psychology.

For example, one, that of full-time ‘Dawkins-stalker’ (and part-time philosopher) Mary Midgley, critiques the quite separate approach of memetics.

Likewise, one singularly uninsightful chapter by ‘disability activist’ Tom Shakespeare and a colleague seems to say nothing with which the average evolutionary psychologist would likely disagree. Indeed, they seem to say little of substance at all. 

Only at the end of their chapter do they make the obligatory reference to just-so stories, and, more bizarrely, to the “single-gene determinism of the biological reductionists” (p203).

Yet, as anyone who has ever read any evolutionary psychology is surely aware, evolutionary psychologists, like other evolutionary biologists, emphasize to the point of repetitiveness that, while they may talk of ‘genes for’ certain characteristics as a form of scientific shorthand, nothing in their theories implies a one-to-one concordance between single genes and behaviours. 

Indeed, the irrelevance of some chapters to their supposed subject-matter (i.e. evolutionary psychology) makes one wonder whether some of the contributors to the volume have ever actually read any evolutionary psychology, or even any popularizations of the field – or whether their entire limited knowledge of the field was gained by reading critiques of evolutionary psychology by other contributors to the volume. 

Annette Karmiloff-Smith’s chapter, entitled ‘Why babies’ brains are not Swiss army knives’, is a critique of what she refers to as nativism, namely the belief that certain brain structures (or modules) are innately hardwired into the brain at birth.

This chapter, perhaps alone in the entire volume, may have value as a critique of some strands of evolutionary psychology.

Any analogy is imperfect; otherwise it would not be an analogy but rather an identity. However, given that even a modern micro-computer has been criticized as an inadequate model for the human brain, comparing the human brain to a Swiss army knife is obviously an analogy that should not be taken too far.

However, the nativist, massive modularity thesis that Karmiloff-Smith associates with evolutionary psychology, while indeed typical of what we might call the narrow ‘Tooby and Cosmides brand’ of evolutionary psychology, is rejected by many evolutionary psychologists (e.g. the authors of Human Evolutionary Psychology) and is not, in my view, integral to evolutionary psychology as a discipline or approach.

Instead, evolutionary psychology posits that behaviour has been shaped by natural selection to maximize the reproductive success of organisms in ancestral environments. It therefore allows us to bypass the proximate level of causation in the brain by recognizing that, howsoever the brain is structured and produces behaviour in interaction with its environment, given that this brain evolved through a process of natural selection, it must be such as to produce behaviour that maximizes the reproductive success of its bearer, at least under ancestral conditions. (This is sometimes called the phenotypic gambit.) 

Stephen Jay Gould’s Deathbed Conversion?

Undoubtedly the best known, and arguably the most prestigious, contributor to the Roses’ volume is the famed palaeontologist and popular science writer Stephen Jay Gould. Indeed, such is his renown that Gould evidently did not feel it necessary to contribute an original chapter for this volume, instead simply recycling, and retitling, what appears to be a book review, previously published in The New York Review of Books (Gould 1997). 

This is a critical review of the book Darwin’s Dangerous Idea: Evolution and the Meanings of Life by philosopher Daniel Dennett, a book that is itself critical of Gould, making Gould’s review a form of academic self-defence. Neither the book nor the review deals primarily with the topic of evolutionary psychology, but rather with more general issues in evolutionary biology. 

Yet the most remarkable revelation of Gould’s chapter – especially given that it appears in a book ostensibly critiquing evolutionary psychology – is that the best-known and most widely-cited erstwhile opponent of evolutionary psychology is apparently no longer any such thing. 

On the contrary, he now claims in this essay: 

“‘Evolutionary psychology’… could be quite useful, if proponents would change their propensity for cultism and ultra-Darwinian fealty for a healthy dose of modesty” (p98). 

Indeed, even more remarkably, Gould even acknowledges: 

“The most promising theory of evolutionary psychology [is] the recognition that differing Darwinian requirements for males and females imply distinct adaptive behaviors centred on male advantage in spreading sperm as widely as possible… and female strategy for extracting time and attention from males… [which] probably does underlie some different, and broadly general, emotional propensities of human males and females” (p102). 

In other words, it seems that Gould now accepts the position of evolutionary psychologists in that most controversial of areas – innate sex differences.

In this context, I am reminded of John Tooby and Leda Cosmides’s observation that critics of evolutionary psychology, in the course of their attacks, often make concessions that, if made in any context other than an attack on evolutionary psychology, would cause them to themselves be labelled (and attacked) as evolutionary psychologists (Tooby and Cosmides 2000). 

Nevertheless, Gould’s backtracking is a welcome development, notwithstanding his usual arrogant tone.[1]

Given that he passed away only a couple of years after the current volume was published, one might almost, with only slight hyperbole, characterise his backtracking as a deathbed conversion. 

Ultra-Darwinism? Hyper-Adaptationism?

On the other hand, Gould’s criticisms of evolutionary psychology have not evolved at all but merely retread familiar gripes which evolutionary psychologists (and indeed so-called sociobiologists before them) dealt with decades ago. 

For example, he accuses evolutionary psychologists of viewing every human trait as adaptive and ignoring the possibility of by-products (p103-4). 

However, this claim is easily rebutted by simply reading the primary literature in the field. 

Thus, for example, Martin Daly and Margo Wilson view the high rate of abuse perpetrated by stepparents, not as itself adaptive, but as a by-product of the adaptive tendency for stepparents to care less for their stepchildren than they would for their biological children (see The Truth about Cinderella: which I have reviewed here).  

Similarly, Donald Symons argued that the female orgasm is not itself adaptive, but rather is merely a by-product of the male orgasm, just as male nipples are a non-adaptive by-product of female nipples (see The Evolution of Human Sexuality: which I have reviewed here).  

Meanwhile, Randy Thornhill and Craig Palmer are divided as to whether human rape is adaptive or merely a by-product of men’s greater desire for commitment-free promiscuous sex (A Natural History of Rape: which I have reviewed here). 

However, unlike Gould himself, evolutionary psychologists generally prefer the term ‘by-product’ to Gould’s unhelpful coinage ‘spandrel’. The former term is readily intelligible to any educated person fluent in English; Gould’s preferred term is a needless obfuscation. 

As emphasized by Richard Dawkins, the invention of jargon to baffle non-specialists (e.g. referring to animal rape as “forced copulation” as the Roses advocate: p2) is the preserve of fields suffering from physics-envy, according to ‘Dawkins’ First Law of the Conservation of Difficulty’, whereby “obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity”. 

Untestable? Unfalsifiable?

Gould’s other main criticism of evolutionary psychology is his claim that sociobiological theories are inherently untestable and unfalsifiable – i.e. what Gould calls Just So Stories.

However, one only has to flick through copies of journals like Evolution and Human Behavior, Human Nature, Evolutionary Psychology, Evolutionary Psychological Science, and many other journals that regularly publish research in evolutionary psychology, to see evolutionary psychological theories being tested, and indeed often falsified, every month. 

As evidence for the supposed unfalsifiability of sociobiological theories, Gould cites, not such primary research literature, but rather a work of popular science, namely Robert Wright’s The Moral Animal

Thus, he quotes Robert Wright as asserting in this book that our “sweet tooth” (i.e. taste for sugar), although maladaptive in the contemporary West because it leads to obesity, diabetes and heart disease, was nevertheless adaptive in ancestral environments (i.e. the EEA) where, as Wright put it, “fruit existed but candy didn’t” (The Moral Animal: p67). 

Yet, Gould protests indignantly, in support of this claim, Wright cites “no paleontological data about ancestral feeding” (p100). 

However, Wright is a popular science writer, not an academic researcher, and his book, The Moral Animal, for all its many virtues, is a work of popular science. As such, Wright, unlike someone writing a scientific paper, cannot be expected to cite a source for every claim he makes. 

Moreover, is Gould, a palaeontologist, really so ignorant of human history that he seriously believes we really need “paleontological data” in order to demonstrate that fruit is not a recent invention but that candy is? Is this really the best example he can come up with? 

From ‘Straw Men’ to Fabricated Quotations 

Rather than arguing against the actual theories of evolutionary psychologists, contributors to ‘Alas Poor Darwin’ instead resort to the easier option of misrepresenting these theories, so as to make the task of arguing against them less arduous. This is, of course, the familiar rhetorical tactic of constructing a straw man.

In the case of co-editor Hilary Rose, this crosses the line from rhetorical deceit to outright defamation of character when, on p116, she falsely attributes to sociobiologist David Barash an offensive quotation committing the naturalistic fallacy by purporting to justify rape by reference to its adaptive function.

Yet Barash simply does not say the words she attributes to him on the page she cites (or any other page) in Whisperings Within, the book from which the quotation purports to be drawn. (I know, because I own a copy of said book.) 

Rather, after a discussion of the adaptive function of rape in ducks, Barash merely tentatively ventures that, although vastly more complex, human rape may serve an analogous evolutionary function (Whisperings Within: p55). 

Is Steven Rose a Scientific Racist? 

As for Steven Rose, the book’s other editor, unlike Gould, he does not repent his sins and convert to evolutionary psychology. However, in maintaining his evangelical crusade against evolutionary psychology, sociobiology and all related heresies, Rose inadvertently undergoes a conversion in many ways even more dramatic and far-reaching in its consequences. 

To understand why, we must examine Rose’s position in more depth. 

Steven Rose, it goes almost without saying, is not a creationist. On the contrary, he is, in addition to his popular science writing and leftist political activism, a working neuroscientist who very much accepts Darwin’s theory of evolution

Rose is therefore obliged to reconcile his opposition to evolutionary psychology with the recognition that the brain is, like the body, a product of evolution. 

Ironically, this leads him to employ evolutionary arguments against evolutionary psychology.

For example, Rose mounts an evolutionary defence of the largely discredited theory of group selection, whereby it is contended that traits sometimes evolve, not because they increase the fitness of the individual possessing them, but rather because they aid the survival of the group of which s/he is a member, even at a cost to the fitness of the individual themselves (p257-9). 

Indeed, Rose goes even further, going so far as to assert: 

“Selection can occur at even higher levels – that of the species for example” (p258). 

Similarly, in the book’s introduction, co-authored with his wife Hilary, the Roses dismiss the importance of the evolutionary psychological concept of the ‘environment of evolutionary adaptedness’ (or ‘EEA’).[2] 

This term refers to the idea that we evolved to maximise our reproductive success, not in the sort of post-industrial contemporary Western societies in which we now so often find ourselves, but rather in the sorts of environments in which our ancestors spent most of our evolutionary history, namely as Stone Age hunter-gatherers

On this view, much behaviour in modern Western societies is recognized as maladaptive, reflecting a mismatch between the environment to which we are adapted and that in which we find ourselves, simply because we have not had sufficient time to evolve psychological mechanisms for dealing with such ‘evolutionary novelties’ as contraception, paternity tests and chocolate bars. 

However, the Roses argue that evolution can occur much faster than this. Thus, they point to: 

“The huge changes produced by artificial selection by humans among domesticated animals – cattle, dogs and… pigeons – in only a few generations. Indeed, unaided natural selection in Darwin’s own Islands, the Galapagos, studied over several decades by the Grants is enough to produce significant changes in the birds’ beaks and feeding habits in response to climate change” (p1-2). 

Finally, Rose rejects the ‘modular’ model of the human mind championed by some evolutionary psychologists, whereby the brain is conceptualized as being composed of many separate domain-specific modules, each specialized for a particular class of adaptive problem faced by ancestral humans.  

As evidence against this thesis, Rose points to the absence of a direct one-to-one relationship between the modules postulated by evolutionary psychologists and actual regions of the brain as identified by neuroscientists (p260-2). 

“Whether such modules are more than theoretical entities is unclear, at least to most neuroscientists. Indeed evolutionary psychologists such as Pinker go to some lengths to make it clear that the ‘mental modules’ they invent do not, or at least do not necessarily, map onto specific brain structures” (p260). 

Thus, Rose protests: 

“Evolutionary psychology theorists, who… are not themselves neuroscientists, or even, by and large, biologists, show as great a disdain for relating their theoretical concepts to material brains as did the now discredited behaviorists they so despise” (p261). 

Yet there is an irony here – namely, in employing evolutionary arguments against evolutionary psychology (i.e. emphasizing the importance of group selection and of recently evolved adaptations), Rose, unlike many of his co-contributors, actually implicitly accepts the idea of an evolutionary approach to understanding human behaviour and psychology.

In other words, if Rose is indeed right about these matters (group selection, recently evolved adaptations and domain-general psychological mechanisms), this would suggest, not the abandonment of an evolutionary approach in psychology, but rather the need to develop a new evolutionary psychology that gives appropriate weight to such factors as group selection, recently evolved adaptations and domain-general psychological mechanisms.

Actually, however, as we will see, this ‘new’ evolutionary psychology may not be all that new, and Rose may find he has unlikely bedfellows in this endeavour. 

Thus, group selection – which tends to imply that conflict between groups such as races and ethnic groups is inevitable – has already been defended by race theorists such as Philippe Rushton and Kevin MacDonald

For example, Rushton, author of Race, Evolution and Behavior (which I have reviewed here), a notorious racial theorist known for arguing that black people are genetically predisposed to crime, promiscuity and low IQ, has also authored papers with titles like Genetic similarity, human altruism and group-selection (Rushton 1989) and Genetic similarity theory, ethnocentrism, and group selection (Rushton 1998), which defend and draw on the concept of group selection to explain such behaviours as racism and ethnocentrism.

Similarly, Kevin MacDonald, a former professor of psychology widely accused of anti-Semitism, has also championed the theory of group selection, and even developed a theory of cultural group selection to explain the survival and prospering of the Jewish people in diaspora in his book, A People That Shall Dwell Alone: Judaism as a Group Evolutionary Strategy (which I have reviewed here), and its more infamous, and theoretically flawed, sequel, The Culture of Critique (which I have reviewed here). 

Similarly, the claim that sufficient time has elapsed for significant evolutionary change to have occurred since the Stone Age (our species’ primary putative environment of evolutionary adaptedness) necessarily also entails recognition that sufficient time has also elapsed for different human populations, including different races, to have significantly diverged in, not just their physiology, but also their psychology, behaviour and cognitive ability.[3]

Finally, rejection of a modular conception of the human mind is consistent with an emphasis on what is perhaps the ultimate domain-general factor in human cognition, namely the general factor of intelligence, as championed by psychometricians, behavioural geneticists, intelligence researchers and race theorists such as Arthur Jensen, Richard Lynn, Chris Brand, Philippe Rushton and the authors of The Bell Curve (which I have reviewed here), who believe that individuals and groups differ in intellectual ability, that some individuals and groups are more intelligent across the board, and that these differences are partly genetic in origin.

Thus, Kevin MacDonald specifically criticizes mainstream evolutionary psychology for its failure to give due weight to the importance of domain-general mechanisms, in particular general intelligence (MacDonald 1991). 

Indeed, Rose himself elsewhere acknowledges that: 

“The insistence of evolutionary psychology theorists on modularity puts a strain on their otherwise heaven-made alliance with behaviour geneticists” (p261).[4]

Thus, in rejecting the tenets of mainstream evolutionary psychology, Rose inadvertently advocates, not so much a new form of evolutionary psychology, as rather an old form of scientific racism.

Of course, Steven Rose is not a racist. On the contrary, he has built a minor, if undistinguished, literary career out of smearing other scientists whom he characterises as such.[5]

However, descending to Rose’s own level of argumentation (e.g. employing guilt by association and argumenta ad hominem), he is easily characterised as such. After all, his arguments against the concept of the EEA, and in favour of group-selectionism, directly echo those employed by the very ‘scientific racists’ (e.g. Rushton, Sarich) whom he has spent that career defaming. 

Thus, by rejecting many claims of mainstream evolutionary psychologists – about the environment of evolutionary adaptedness, about group-selectionism and about modularity – Rose ironically plays into the hands of the very ‘scientific racists’ whom he purportedly opposes.

Thus, if his friend and comrade Stephen Jay Gould, in his own recycled contribution to ‘Alas Poor Darwin’, underwent a surprising but welcome deathbed conversion to evolutionary psychology, then Steven Rose’s transformation proves even more dramatic but rather less welcome. He might, moreover, find his new bedfellows less good company than he expected. 

Endnotes

[1] Throughout his essay, Gould never concedes that he was wrong with respect to sociobiology, the then-emerging approach that came to dominate research in animal behaviour but was rashly rejected by Gould and other leftist activists. Rather, he seems to imply, even if he does not directly state, that it was his constructive criticism of sociobiology that led to advances in the field, and indeed to the development of evolutionary psychology out of human sociobiology. Yet, as anyone who followed the controversies over sociobiology and evolutionary psychology, and read Gould’s writings on these topics, will be aware, this is far from the case.
Gould, it ought to be noted in this context, was notorious for his arrogance and self-importance. For example, even his friend, colleague and collaborator Richard Lewontin, who shared Gould’s radical leftist politics, and his willingness to subordinate science to politics and misrepresent scientific findings for reasons of political expediency, acknowledged that “Steve… was preoccupied with the desire to be considered a very original and great evolutionary theorist”, and that this led him to exaggerate the importance of his own supposed scientific discoveries, especially so-called punctuated equilibrium. Hence my reference above to his “usual arrogant tone”. Thus, when Gould advises, in the passage quoted above, that evolutionary psychologists adopt “a healthy dose of modesty”, there is no little irony, and perhaps some projection, in this suggestion.

[2] Actually, the term ‘environment of evolutionary adaptedness’ was coined, not by evolutionary psychologists, but rather by psychoanalyst and attachment theorist John Bowlby.

[3] This is a topic addressed in such controversial recent books as Cochran and Harpending’s The 10,000 Year Explosion: How Civilization Accelerated Human Evolution and Nicholas Wade’s A Troublesome Inheritance: Genes, Race and Human History. It is also a central theme of Sarich and Frank Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here). Papers discussing the significance of recent and divergent evolution in different populations for the underlying assumptions of evolutionary psychology include Winegard et al (2017) and Frost (2011). Evolutionary psychologists in the 1990s and 2000s, especially those affiliated with Tooby and Cosmides at UCSB, were perhaps guilty of associating the environment of evolutionary adaptedness too narrowly with Pleistocene hunter-gatherers on the African savanna. Thus, Tooby and Cosmides have written that ‘our modern skulls house a stone age mind’. However, while embracing this catchy if misleading soundbite, in the same article Tooby and Cosmides also write, more accurately:

“The environment of evolutionary adaptedness, or EEA, is not a place or time. It is the statistical composite of selection pressures that caused the design of an adaptation. Thus the EEA for one adaptation may be different from that for another” (Cosmides and Tooby 1997).

Thus, the EEA is not a single time and place that a researcher could visit with the aid of a map, a compass, a research grant and a time machine. Rather, it is a composite of selection pressures operating across a range of environments, and the relevant range of environments may differ in respect of different adaptations.

[4] This reference to the “otherwise heaven-made alliance” between evolutionary psychologists and behavioural geneticists, incidentally, contradicts Rose’s own acknowledgement, made just a few pages earlier, that:

“Evolutionary psychologists are often at pains to distinguish themselves from behaviour geneticists and there is some hostility between the two” (p248). 

As we have seen, consistency is not Steven Rose’s strong point. See Kanazawa 2004 for the alternative view that general intelligence is itself, paradoxically, a domain-specific module.

[5] I feel the need to emphasise that Rose is not a racist, not least for fear that he might sue me for defamation if I suggest otherwise. And if you think the idea of a professor suing some random, obscure blogger for a blog post is preposterous, then just remember – this is a man who once threatened legal action against the publishers of a comic book – yes, a comic book – and forced the publishers to append an apology to some 10,000 copies of the said comic book, for supposedly misrepresenting his views in a speech bubble, complaining, “The author had literally [sic] put into my mouth a completely fatuous statement” (Brown 1999) – an ironic complaint given the fabricated quotation, of a genuinely defamatory nature, attributed to David Barash by Rose’s own wife Hilary in the current volume (see above), for which Rose himself, as co-editor, is vicariously responsible.
Rose, it should be noted, is an open and unabashed opponent of free speech. Indeed, Rose even stands accused by German scientist, geneticist and intelligence researcher Volkmar Weiss of actively inciting the infamously repressive communist regime in East Germany to persecute a courageous dissident scientist in that country (Weiss 1991). This is, moreover, an allegation that Rose has, to my knowledge, never denied or brought legal action in respect of, despite his known track record of threatening legal action against the publishers of comic books.

References 

Brown (1999) Origins of the specious, Guardian, November 30.
Frost (2011) Human nature or human natures? Futures 43(8): 740-748.
Gould (1997) Darwinian Fundamentalism, New York Review of Books, June 12.
Kanazawa (2004) General Intelligence as a Domain-Specific Module, Psychological Review 111(2): 512-523. 
MacDonald (1991) A perspective on Darwinian psychology: The importance of domain-general mechanisms, plasticity, and individual differences, Ethology and Sociobiology 12(6): 449-480.
Rushton (1989) Genetic similarity, human altruism and group-selection, Behavioral and Brain Sciences 12(3): 503-559.
Rushton (1998) Genetic similarity theory, ethnocentrism, and group selection. In I. Eibl-Eibesfeldt & F. K. Salter (Eds.), Indoctrinability, Ideology and Warfare: Evolutionary Perspectives (pp369-388). Oxford: Berghahn Books.
Tooby & Cosmides (1997) Evolutionary Psychology: A Primer, published at the Center for Evolutionary Psychology website, UCSB.
Tooby & Cosmides (2000) Unpublished Letter to the Editor of New Republic, published at the Center for Evolutionary Psychology website, UCSB.
Weiss (1991) It could be Neo-Lysenkoism, if there was ever a break in continuity! Mankind Quarterly 31: 231-253.
Winegard et al (2017) Human Biological and Psychological Diversity. Evolutionary Psychological Science 3: 159–180.

Edward O Wilson’s ‘Sociobiology: The New Synthesis’: A Book Much Read About, But Rarely Actually Read

Edward O Wilson, Sociobiology: The New Synthesis (Cambridge: Belknap, Harvard, 1975)

Sociobiology – The Field That Dare Not Speak its Name? 

From its first publication in 1975, the reception accorded Edward O Wilson’s ‘Sociobiology: The New Synthesis’ has been divided. 

On the one hand, among biologists, especially those specializing in the fields of ethology, zoology and animal behaviour, the reception was almost universally laudatory. Indeed, my 25th Anniversary Edition even proudly proclaims on the cover that it was voted by officers and fellows of the Animal Behavior Society the most important book on animal behaviour ever written, supplanting even Darwin’s own seminal The Expression of the Emotions in Man and Animals. 

However, on the other side of the university campus, in social science departments, the reaction was very different. 

Indeed, the hostility that the book provoked was such that ‘sociobiology’ became almost a dirty word in the social sciences, and ultimately throughout the academy, to such an extent that ultimately the term fell into disuse (save as a term of abuse) and was replaced by largely synonymous euphemisms like behavioral ecology and evolutionary psychology.[1]

Sociobiology thus became, in academia, ‘the field that dare not speak its name’. 

Similarly, within the social sciences, even those researchers whose work carried on the sociobiological approach in all but name almost always played down the extent of their debt to Wilson himself. 

Thus, books on evolutionary psychology typically begin with disclaimers acknowledging that the sociobiology of Wilson was, of course, crude and simplistic, and that their own approach is, of course, infinitely more sophisticated. 

Indeed, reading some recent works on evolutionary psychology, one could be forgiven for thinking that evolutionary approaches to understanding human behaviour began around 1989 with the work of Tooby and Cosmides

Defining the Field 

What then does the word ‘sociobiology’ mean? 

Today, as I have mentioned, the term has largely fallen into disuse, save among certain social scientists who seem to employ it as a rather indiscriminate term of abuse for any theory of human behaviour that they perceive as placing too great a weight on hereditary or biological factors, including many areas of research only tangentially connected with sociobiology as Wilson originally conceived of it (e.g. behavioral genetics).[2]

The term ‘sociobiology’ was not Wilson’s own coinage. It had occasionally been used by biologists before, albeit rarely. However, Wilson was responsible for popularizing it – and perhaps, in the long term, for unpopularizing it too, since, as we have seen, the term has largely fallen into disuse.[3] 

Wilson himself defined ‘sociobiology’ as: 

“The systematic study of the biological basis of all social behavior” (p4; p595). 

However, as the term was understood by other biologists, and indeed applied by Wilson himself, sociobiology came to be construed more narrowly. Thus, it was associated in particular with the question of why behaviours evolved and the evolutionary function they serve in promoting the reproductive success of the organism (i.e. just one of Tinbergen’s Four Questions). 

The hormonal, neuroscientific, or genetic causes of behaviours are just as much a part of “the biological basis of behavior” as are the ultimate evolutionary functions of behaviour. However, these lie outside the scope of sociobiology as the term was usually understood. 

Indeed, Wilson himself admitted as much, writing in ‘Sociobiology: The New Synthesis’ itself of how: 

“Behavioral biology… is now emerging as two distinct disciplines centered on neurophysiology and… sociobiology” (p6). 

Yet, in another sense, Wilson’s definition of the field was also too narrow. 

Thus, behavioural ecologists have come to study all forms of behaviour, not just social behaviour.  

For example, optimal foraging theory is a major subfield within behavioural ecology (the successor field to sociobiology), but concerns feeding behaviour, which may be an entirely solitary, non-social activity. 

Indeed, even some aspects of an organism’s physiology (as distinct from behaviour) have come to be seen as within the purview of sociobiology (e.g. the evolution of the peacock’s tail). 

A Book Much Read About, But Rarely Actually Read 

‘Sociobiology: The New Synthesis’ was a massive tome, numbering almost 700 pages. 

As Wilson proudly proclaims in his glossary, it was: 

“Written with the broadest possible audience in mind and most of it can be read with full understanding by any intelligent person whether or not he or she has had any formal training in science” (p577). 

Unfortunately, however, the sheer size of the work alone was probably enough to deter most such readers long before they reached p577 where these words appear. 

Indeed, I suspect the very size of the book was a factor in explaining the almost universally hostile reception that the book received among social scientists. 

In short, the book was so large that the vast majority of social scientists had neither the time nor the inclination to actually read it for themselves, especially since a cursory flick through its pages showed that the vast majority of them seemed to be concerned with the behaviour of species other than humans, and hence, as they saw it, of little relevance to their own work. 

Instead, therefore, their entire knowledge of sociobiology was filtered through to them via critiques of the approach authored by other social scientists, themselves mostly hostile to sociobiology, who presented a straw man caricature of what sociobiology actually represented. 

Indeed, the caricature of sociobiology presented by these authors is so distorted that, reading some of these critiques, one often gets the impression that included among those social scientists not bothering to read the book for themselves were most of the social scientists nevertheless taking it upon themselves to write critiques of it. 

Meanwhile, the fact that the field was so obviously misguided (as indeed it often was in the caricatured form presented in the critiques) gave most social scientists yet another reason not to bother wading through its 700 or so pages for themselves. 

As a result, among sociologists, psychologists, anthropologists, public intellectuals, and other such ‘professional damned fools’, as well as the wider semi-educated reading public, ‘Sociobiology: The New Synthesis’ became a book much read about – but rarely actually read (at least in full). 

As a consequence, as with other books falling into this category (e.g. the Bible and The Bell Curve) many myths have emerged regarding its contents which are quite contradicted on actually taking the time to read it for oneself. 

The Many Myths of Sociobiology 

Perhaps the foremost myth is that sociobiology was primarily a theory of human behaviour. In fact, as is revealed by even a cursory flick through the pages of Wilson’s book, sociobiology was, first and foremost, a theoretical approach to understanding animal behaviour. 

Indeed, Wilson’s decision to attempt to apply sociobiological theory to humans as well was, it seems, almost something of an afterthought, and necessitated by his desire to provide a comprehensive overview of the behaviour of all social animals, humans included. 
 
This is connected to the second myth – namely, that sociobiology was Wilson’s own theory. In fact, rather than a single theory, sociobiology is better viewed as a particular approach to a field of study, the field in question being animal behaviour. 
 
Moreover, far from being Wilson’s own theory, the major advances in the understanding of animal behaviour that gave rise to what came to be referred to as ‘sociobiology’ were made in the main by biologists other than Wilson himself.  
 
Thus, it was William Hamilton who first formulated inclusive fitness theory (which came to be known as the theory of kin selection); John Maynard Smith who first introduced economic models and game theory into behavioural biology; George C Williams who was responsible for displacing crude group-selectionism in favour of a new focus on the gene itself as the principal unit of selection; while Robert Trivers was responsible for such theories as reciprocal altruism, parent-offspring conflict and differential parental investment. 
 
Instead, Wilson’s key role was to bring the various strands of the emerging field together, give it a name and, in the process, take far more than his fair share of the resulting flak. 
 
Thus, far from being a maverick theory of a single individual, what came to be known as ‘sociobiology’ was, if not based on accepted biological theory at the time of publication, then at least based on biological theory that came to be recognised as mainstream within a few years of its publication. 
 
Controversy attached almost exclusively to the application of these same principles to explain human behaviour. 

Applying Sociobiology to Humans 

In respect of Wilson’s application of sociobiological theory to humans, misconceptions again abound. 

For example, it is often asserted that Wilson only extended his theory to apply to human behaviour in his infamous final chapter, entitled, ‘Man: From Sociobiology to Sociology’. 

Actually, however, Wilson had discussed the possible application of sociobiological theory to humans several times in earlier chapters. 
 
Often, this was at the end of a chapter. For example, his chapter on “Roles and Castes” closes with a discussion of “Roles in Human Societies” (p312-3). Similarly, the final subsection of his chapter on “Aggression” is titled “Human Aggression” (p254-5). 
 
Other times, however, humans get a mention in mid-chapter, as in Chapter Fifteen, titled ‘Sex and Society’, where Wilson discusses the association between adultery, cuckoldry and violent retribution in human societies, and rightly prophesies that “the implications for the study of humans” of Trivers’ theory of differential parental investment “are potentially great” (p327). 
 
Another misconception is that, while he may not have founded the approach that came to be known as sociobiology, it was Wilson who courted controversy, and bore most of the flak, because he was the first biologist brave, foolish, ambitious, farsighted or naïve enough to attempt to apply sociobiological theory to humans. 
 
Actually, however, this is untrue. For example, a large part of Robert Trivers’ seminal paper on reciprocal altruism published in 1971 dealt with reciprocal altruism in humans and with what are presumably specifically human moral emotions, such as guilt, gratitude, friendship and moralistic anger (Trivers 1971). 
 
However, Trivers’ work was published in the Journal of Theoretical Biology and therefore presumably never came to the attention of any of the leftist social scientists largely responsible for the furore over sociobiology, who, being of the opinion that biological theory was wholly irrelevant to human behaviour, and hence to their own field, were unlikely to be regular readers of the journal in question. 

Yet this is perhaps unfortunate since Trivers, unlike the hapless Wilson, had impeccable left-wing credentials, which may have deflected some of the overtly politicized criticism (and pitchers of water) that later came Wilson’s way. 

Reductionism vs Holism

Among the most familiar charges levelled against Wilson by his opponents within the social sciences, and by contemporary opponents of sociobiology and evolutionary psychology – alongside the familiar, time-worn charges of ‘biological determinism’ and ‘genetic determinism’ – is that sociobiology is inherently reductionist, something which is, they imply, very much a bad thing. 
 
It is therefore something of a surprise to find, in the opening pages of ‘Sociobiology: The New Synthesis’, Wilson defending “holism”, as represented, in Wilson’s view, by the field of sociobiology itself, against what he terms “the triumphant reductionism of molecular biology” (p7). 
 
This passage is particularly surprising for anyone who has read Wilson’s more recent work Consilience: The Unity of Knowledge, where he launches a trenchant, unapologetic and, in my view, wholly convincing defence of “reductionism” as representing, not only “the cutting edge of science… breaking down nature into its constituent components” but moreover “the primary and essential activity of science” and hence at the very heart of the scientific method (Consilience: p59). 

Thus, in a quotable aphorism, Wilson concludes: 

“The love of complexity without reductionism makes art; the love of complexity with reductionism makes science” (Consilience: p59). 

Of course, whether ‘reductionism’ is a good or bad thing, as well as the extent to which sociobiology can be considered ‘reductionist’, ultimately depends on precisely how we define ‘reductionism’. Moreover, ‘reductionism’, however defined, is surely a matter of degree. 

Thus, philosopher Daniel Dennett, in his book Darwin’s Dangerous Idea, distinguishes what he calls “greedy reductionism”, which attempts to oversimplify the world (e.g. Skinnerian behaviourism, which seeks to explain all behaviours in terms of conditioning), from “good reductionism”, which attempts to understand it in all its complexity (i.e. good science).

On the other hand, ‘holistic’ is a word most often employed in defence of wholly unscientific approaches, such as so-called holistic medicine, and, for me, the word itself is almost always something of a red flag. 

Thus, the opponents of sociobiology, in using the term ‘reductionist’ as a criticism, are rejecting the whole notion of a scientific approach to understanding human behaviour. In its place, they offer only a vague, wishy-washy, untestable and frankly anti-scientific obscurantism, whereby any attempt to explain behaviour in terms of causes and effects is dismissed as reductionism and determinism

Yet explaining behaviour, whether the behaviour of organisms, atoms, molecules or chemical substances, in terms of causes and effects is the very essence, if not the very definition, of science. 

In other words, determinism (i.e. the belief that events are determined by causes) is not so much a finding of science as its basic underlying assumption.[4]

Yet Wilson’s own championing of “holism” in ‘Sociobiology: The New Synthesis’ can be made sense of in its historical context. 

In other words, just as Wilson’s defence of reductionism in ‘Consilience’ was a response to the so-called sociobiology debates of the 1970s and 80s, in which the charge of ‘reductionism’ was wielded indiscriminately by the opponents of sociobiology, so Wilson’s defence of holism in ‘Sociobiology: The New Synthesis’ must be understood in the context, not of the controversy that this work itself provoked (which Wilson was, at the time, unable to foresee), but rather of a controversy that preceded its publication. 

In particular, certain molecular biologists at Harvard, and perhaps elsewhere, led by the brilliant but abrasive molecular biologist James Watson, had come to the opinion that molecular biology was to be the only biology, and that traditional biology, fieldwork and experiments were positively passé. 

This controversy is rather less familiar to anyone outside of Harvard University’s biology department than the sociobiology debates, which not only enlisted many academics from outside of biology (e.g. psychologists, sociologists, anthropologists and even philosophers), but also spilled over into the popular media and even became politicized. 

However, within the ivory towers of Harvard University’s department of biology, this controversy seems to have been just as fiercely fought over.[5]

As is clear from ‘Sociobiology: The New Synthesis’, Wilson’s own envisaged “holism” was far from the wishy-washy obscurantism one usually associates with those championing a ‘holistic approach’, and was thoroughly scientific. 

Thus, in On Human Nature, Wilson’s follow-up book to ‘Sociobiology: The New Synthesis’, in which he first concerned himself specifically with the application of sociobiological theory to humans, Wilson gives perhaps his most balanced description of the relative importance of reductionism and holism, and indeed of the nature of science, writing: 

“Raw reduction is only half the scientific process… the remainder consist[ing] of the reconstruction of complexity by an expanding synthesis under the control of laws newly demonstrated by analysis… reveal[ing] the existence of novel emergent phenomena” (On Human Nature: p11). 

It is therefore in this sense, and in contrast to the reductionism of molecular biology, that Wilson saw sociobiology as ‘holistic’. 

Group Selection? 

One of the key theoretical breakthroughs that formed the basis for what came to be known as sociobiology was the discrediting of group-selectionism, largely thanks to the work of George C Williams, whose ideas were later popularized by Richard Dawkins in The Selfish Gene (which I have reviewed here).[6] 
 
A focus on the individual, or even the gene, as the primary, or indeed the only, unit of selection came to be viewed as an integral component of the sociobiological worldview. Indeed, it was once seriously debated in the pages of the newsletter of the European Sociobiological Society whether one could truly be both a ‘sociobiologist’ and a ‘group-selectionist’ (Price 1996). 

It is therefore something of a surprise to discover that the author of ‘Sociobiology: The New Synthesis’, responsible for christening the emerging field, was himself something of a group-selectionist. 

Wilson has recently ‘come out’ as a group-selectionist by co-authoring a paper concerning the evolution of eusociality in ants (Nowak et al 2010). However, reading ‘Sociobiology: The New Synthesis’ leads one to suspect that Wilson had been a closet, or indeed a semi-out, group-selectionist all along. 

Certainly, Wilson repeats the familiar arguments against group-selectionism, first articulated by George C Williams in Adaptation and Natural Selection and later popularised by Dawkins (see p106-7). 

However, although he offers no rebuttal to these arguments, this does not prevent Wilson from invoking, or at least proposing, group-selectionist explanations for behaviours elsewhere in the book (e.g. p275). 

Moreover, Wilson concludes: 

“Group selection and higher levels of organization, however intuitively implausible… are at least theoretically possible under a wide range of conditions” (p30). 

 
Thus, it is clear that, unlike, say, Richard Dawkins, Wilson did not view group-selectionism as a terminally discredited theory. 

Man: From Sociobiology to Sociology… and Perhaps Evolutionary Psychology

What then of Wilson’s final chapter, entitled ‘Man: From Sociobiology to Sociology’? 

It was, of course, the only chapter to focus exclusively on humans, and the one that attracted by far the lion’s share of the outrage and controversy that soon ensued. 

Yet, reading it today, over forty years after it was first written, it is, I feel, rather disappointing. 

Let me be clear: I went in very much wanting to like it. 

After all, Wilson’s general approach was basically right. Humans, like all other organisms, have evolved through a process of natural selection. Therefore, their behaviour, no less than their physiology, or the physiology or behaviour of non-human organisms, must be understood in the light of this fact. 

Moreover, not only were almost all of the criticisms levelled at Wilson misguided, wrongheaded and unfair, but they often bordered upon persecution as well.

The most famous example of this leftist witch-hunting came when, during a speech at the annual meeting of the American Association for the Advancement of Science, Wilson was drenched with a pitcher of water by leftist demonstrators. 

However, this was far from an isolated event. For example, an illustration from the book The Moral Animal shows a student placard advising protesters to “bring noisemakers” in order to deliberately disrupt one of Wilson’s speaking engagements (The Moral Animal: illustration p341). 

In short, Wilson seems to have been an early victim of what would today be called ‘deplatforming’ and ‘cancel culture’, phenomena that long predated the coining of these terms. 

Thus, one is tempted to see Wilson in the role of a kind of modern Galileo, being, like Galileo, persecuted for his scientific theories, which, like those of Galileo, turned out to be broadly correct. 

Moreover, Wilson’s views were, in some respects, analogous to those of Galileo. Both disputed prevailing orthodoxies in such a way as to challenge the view that humans were somehow unique or at the centre of things, Galileo by suggesting the earth was not at the centre of the solar system, and Wilson by showing that human behaviour was not all that different from that of other animals.[7]

Unfortunately, however, the actual substance of Wilson’s final chapter is rather dated.

Inevitably, any science book will be dated after forty years. However, while this is also true of the book as a whole, it seems especially true of this last chapter, which bears little resemblance to the contents of a modern textbook on evolutionary psychology. 

This is perhaps inevitable. While the application of sociobiological theory to understanding and explaining the behaviour of other species was already well underway, the application of sociobiological theory to humans was, the pioneering work of Robert Trivers on reciprocal altruism notwithstanding, still very much in its infancy. 

Yet, while the substance of the chapter is dated, the general approach was spot on.

Indeed, even some of the advances claimed by evolutionary psychologists as their own were actually anticipated by Wilson. 

Thus, Wilson recognises:

“One of the key questions [in human sociobiology] is to what extent the biogram represents an adaptation to modern cultural life and to what extent it is a phylogenetic vestige” (p458). 

He thus anticipates the key evolutionary psychological concept of the Environment of Evolutionary Adaptedness or EEA, whereby it is theorized that humans are evolutionarily adapted, not to the modern post-industrial societies in which so many of us today find ourselves, but rather to the ancestral environments in which our behaviours first evolved.

Wilson proposes to examine human behaviour from the disinterested perspective of “a zoologist from another planet”, and concludes: 

“In this macroscopic view the humanities and social sciences shrink to specialized branches of biology” (p547). 

Thus, for Wilson: 

“Sociology and the other social sciences, as well as the humanities, are the last branches of biology waiting to be included in the Modern Synthesis” (p4). 

Indeed, the idea that the behaviour of a single species is alone exempt from the principles of general biology – to such an extent that it must be studied in entirely different university faculties by entirely different researchers, the vast majority with little or no knowledge of general biology, or of the methods and theory of those studying the behaviour of all other organisms – reflects an indefensible anthropocentrism. 

However, despite the controversy these pronouncements provoked, Wilson was actually quite measured in his predictions and even urged caution, writing: 

“Whether the social sciences can be truly biologicized in this fashion remains to be seen” (p4). 

The evidence of the ensuing forty years suggests, in my view, that the social sciences can indeed be, and are well on the way to being, as Wilson puts it, ‘biologicized’. The only stumbling block has proven to be social scientists themselves, who have, in some cases, proven resistant. 

‘Vaunting Ambition’? 

Yet, despite these words of caution, the scale of Wilson’s intellectual ambition can hardly be exaggerated. 

First, he sought to synthesize the entire field of animal behaviour under the rubric of sociobiology and, in the process, produce the ‘New Synthesis’ promised in the subtitle, by analogy with the Modern Synthesis of Darwinian evolution and Mendelian genetics that forms the basis for the entire field of modern biology. 

Then, in a final chapter, apparently as almost something of an afterthought, he decided to add human behaviour into his synthesis as well. 

This meant, not just providing a new foundation for a single subfield within biology (i.e. animal behaviour), but for several whole disciplines formerly virtually unconnected to biology – e.g. psychology, cultural anthropology, sociology, economics. 

Oh yeah… and moral philosophy and perhaps epistemology too. I forgot to mention that. 

From Sociobiology to… Philosophy?

Indeed, Wilson’s forays into philosophy proved even more controversial than those into social science. Though limited to a few paragraphs in his first and last chapters, they were among the most widely quoted, and critiqued, passages in the whole book. 

Not only were opponents of sociobiology (and philosophers) predictably indignant, but even those few researchers bravely taking up the sociobiological gauntlet, and even applying it to humans, remained mostly skeptical. 

In proposing to reconstruct moral philosophy on the basis of biology, Wilson was widely accused of committing what philosophers call the naturalistic fallacy or appeal to nature fallacy

This refers to the principle that, if a behaviour is natural, this does not necessarily make it right, any more than the fact that dying of tuberculosis is natural means that it is morally wrong to treat tuberculosis with such ‘unnatural’ interventions as vaccination or antibiotics. 

In general, evolutionary psychologists have been only too happy to reiterate the sacrosanct inviolability of the fact-value chasm, not least because it allowed them to investigate the evolutionary function of such morally dubious, or indeed morally reprehensible, behaviours as sexual infidelity, rape, war and child abuse, while denying that they thereby provide a justification for the behaviours in question. 

Yet this raises the question: if we cannot derive values from facts, whence can values be arrived at? Can they be derived only from other values? If so, whence are our ultimate moral values, from which all others are derived, themselves ultimately derived? Must they simply be taken on faith? 

Wilson has recently controversially argued, in his excellent Consilience: The Unity of Knowledge, that, in this context: 

“The posing of the naturalistic fallacy is itself a fallacy” (Consilience: p273). 

Leaving aside this controversial claim, it is clear that his point in ‘Sociobiology’ is narrower. 

In short, Wilson seems to be arguing that, in contemplating the appropriateness of different theories of prescriptive ethics (e.g. utilitarianism, Kantian deontology), moral philosophers consult “the emotional control centers in the hypothalamus and limbic system of the brain” (p3). 

Yet these same moral philosophers take these emotions largely for granted. They treat the brain as a “black box” rather than a biological entity the nature of which is itself the subject of scientific study (p562). 

Yet, despite the criticism Wilson’s suggestion provoked among many philosophers, the philosophical implications of recognising that moral intuitions are themselves a product of the evolutionary process have since become a serious and active area of philosophical enquiry. Indeed, among the leading pioneers in this field has been the philosopher of biology Michael Ruse, not least in collaboration with Wilson himself (Ruse & Wilson 1986). 

Yet if moral philosophy must be rethought in the light of biology and the evolved nature of our psychology, then the same is also surely true of arguably the other main subfield of contemporary philosophy – namely epistemology.  

Yet Wilson’s comments regarding the relevance of sociobiological theory to epistemology are even briefer than the few sentences he devotes in his opening and closing chapters to moral philosophy, being restricted to less than a sentence – a mere five-word parenthesis in a sentence primarily discussing moral philosophy and philosophers (p3). 

However, what humans are capable of knowing is, like morality, ultimately a product of the human brain – a brain which is itself a biological entity that evolved through a process of natural selection. 

The brain, then, is designed not for discovering ‘truth’, in some abstract, philosophical sense, but rather for maximizing the reproductive success of the organism whose behaviour it controls and directs. 

Of course, for most purposes, natural selection would likely favour psychological mechanisms that produce, if not ‘truth’, then at least a reliable model of the world as it actually operates, so that an organism can modify its behaviour in accordance with this model in order to produce outcomes that maximize its inclusive fitness under these conditions. 

However, it is at least possible that there are certain phenomena that our brains are, through the very nature of their wiring and construction, incapable of fully understanding (e.g. quantum mechanics or the hard problem of consciousness), simply because such understanding was of no utility in helping our ancestors to survive and reproduce in ancestral environments. 

The importance of evolutionary theory to our understanding of epistemology and the limits of human knowledge is, together with the relevance of evolutionary theory to moral philosophy, a theme explored in philosopher Michael Ruse’s book, Taking Darwin Seriously, and is also the principal theme of such recent works as The Case Against Reality: Why Evolution Hid the Truth from Our Eyes by Donald D Hoffman. 

Dated? 

Is ‘Sociobiology: The New Synthesis’ worth reading today? At almost 700 pages, it represents no idle investment of time. 

Wilson is a wonderful writer even in a purely literary sense, and has the unusual honour, for a working scientist, of being a two-time Pulitzer Prize winner. However, apart from a few provocative sections in the opening and closing chapters, ‘Sociobiology: The New Synthesis’ is largely written in the form of a student textbook, and is not a book one is likely to read on account of its literary merits alone. 

As a textbook, Sociobiology is obviously dated. Indeed, the extent to which it has dated is an indication of the success of the research programme it helped inspire. 

Thus, one of the hallmarks of true science is the speed at which cutting-edge work becomes obsolete.  

Religious believers still cite holy books written millennia ago, while adherents of pseudo-sciences like psychoanalysis and Marxism still pore over the words of Freud and Marx. 

However, the scientific method is a cumulative process based on falsificationism and is moreover no respecter of persons.

Scientific works become obsolete almost as fast as they are published. Modern biologists only rarely cite Darwin. 

If you want a textbook summary of the latest research in sociobiology, I would instead recommend the latest edition of Animal Behavior: An Evolutionary Approach or An Introduction to Behavioral Ecology; or, if your primary interest is human behaviour, the latest edition of David Buss’s Evolutionary Psychology: The New Science of the Mind. 

The continued value of ‘Sociobiology: The New Synthesis’ lies in the field, not of science, but of the history of science. In that field, it will remain a landmark work in the history of human thought, for both the controversy, and the pioneering research, that followed in its wake. 

Endnotes

[1] Actually, ‘evolutionary psychology’ is not quite a synonym for ‘sociobiology’. Whereas the latter field sought to understand the behaviour of all animals, if not all organisms, the term ‘evolutionary psychology’ is usually employed only in relation to the study of human behaviour. It would be more accurate, then, to say ‘evolutionary psychology’ is a synonym, or euphemism, for ‘human sociobiology’.

[2] Whereas behavioural geneticists focus on heritable differences between individuals within a single population, evolutionary psychologists largely focus on behavioural adaptations that are presumed to be pan-human and universal. Indeed, it is often argued that there is likely to be minimal heritable variation in human psychological adaptations, precisely because such adaptations have been subject to such strong selection pressure as to weed out suboptimal variation, such that only the optimal genotype remains. On this view, substantial heritable variation is found only in respect of traits that have not been subject to intense selection pressure (see Tooby & Cosmides 1990). However, this fails to take into account such phenomena as frequency-dependent selection and other forms of polymorphism, whereby different individuals within a breeding population adopt, for example, quite different reproductive strategies. It is also difficult to reconcile with the finding of behavioural geneticists that there is substantial heritable variation in intelligence as between individuals, despite the fact that the expansion of human brain-size over the course of evolution suggests that intelligence has been subject to strong selection pressures.

[3] For example, in 1997, the journal Ethology and Sociobiology, which had by then become, and remains, the leading scholarly journal in the field of what would then have been termed ‘human sociobiology’ (and which now usually goes by the name of ‘evolutionary psychology’), changed its name to Evolution and Human Behavior.

[4] An irony is that, while science is built on the assumption of determinism, namely the assumption that observed phenomena have causes that can be discovered by controlled experimentation, one of the findings of science is that, at least at the quantum level, determinism is actually not true. This is among the reasons why quantum theory is paradoxically popular among people who don’t really like science (and who, like virtually everyone else, don’t really understand quantum theory). Thus, Richard Dawkins has memorably parodied quantum mysticism as based on the reasoning that: 

“Quantum mechanics, that brilliantly successful flagship theory of modern science, is deeply mysterious and hard to understand. Eastern mystics have always been deeply mysterious and hard to understand. Therefore, Eastern mystics must have been talking about quantum theory all along.”

[5] Indeed, Wilson and Watson seem to have shared a deep personal animosity for one another, Wilson once describing how he had considered Watson, with whom he has since reconciled, “the most unpleasant human being I had ever met” – see Wilson’s autobiography, Naturalist. A student of Watson’s describes how, when Wilson was granted tenure at Harvard before Watson:

“It was a ‘big, big day in our corridor’. Watson could be heard coming up the stairwell to the third floor shouting ‘fuck, fuck, fuck’” (Watson and DNA: p98). 

Wilson’s description of Watson’s personality in his memoir is interesting in the light of the later controversy regarding the latter’s comments on the economic implications of racial differences in intelligence, with Wilson writing: 

“Watson, having risen to historic fame at an early age, became the Caligula of biology. He was given license to say anything that came to his mind and expect to be taken seriously. And unfortunately, he did so, with a casual and brutal offhandedness.” 

In contrast, geneticist David Reich suggests that Watson’s abrasive personality predated his scientific discoveries and may even have been partly responsible for them, writing: 

“His obstreperousness may have been important to his success as a scientist” (Who We Are and How We Got Here: p263).

[6] Group selection has recently, however, enjoyed something of a resurgence in the form of multi-level selection theory. Wilson himself is very much a supporter of this trend.

[7] Of course, it goes without saying that the persecution to which Wilson was subjected was as nothing compared to that to which Galileo was subjected (see my post, A Modern McCarthyism in Our Midst). 

References 

Nowak et al (2010) ‘The evolution of eusociality’, Nature 466: 1057-1062. 

Price (1996) ‘In Defence of Group Selection’, European Sociobiological Society Newsletter 42, October 1996. 

Ruse & Wilson (1986) ‘Moral Philosophy as Applied Science’, Philosophy 61(236): 173-192. 

Tooby & Cosmides (1990) ‘On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation’, Journal of Personality 58(1): 17-67. 

Trivers (1971) ‘The evolution of reciprocal altruism’, Quarterly Review of Biology 46: 35-57. 

Judith Harris’s ‘The Nurture Assumption’: By Parents or Peers

Judith Harris, The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, 1998.

Almost all psychological traits on which individual humans differ, from personality and intelligence to mental illness, are now known to be substantially heritable. In other words, individual differences in these traits are, at least in part, a consequence of genetic differences between individuals.

This finding is so robust that it has even been termed by Eric Turkheimer the First Law of Behaviour Genetics and, although once anathema to most psychologists, save a marginal fringe of behavioural geneticists, it has now, under the sheer weight of evidence produced by the latter, belatedly become the new orthodoxy. 

On reflection, however, this finding is not entirely a revelation. 

After all, it was only in the mid-twentieth century that the curious notion that individual differences were entirely the product of environmental differences first arose, and, even then, this delusion was largely restricted to psychologists, sociologists, feminists and other such ‘professional damned fools’, along with those among the semi-educated public who seek to cultivate an air of intellectualism by aping the former’s affectations. 

Before then, poets, peasants and laypeople alike had long recognized that ability, insanity, temperament and personality all tended to run in families, just as physical traits like stature, complexion, hair and eye colour also do.[1]

However, while the discovery of a heritable component to character and ability merely confirms the conventional wisdom of an earlier age, another behavioural genetic finding, far more surprising and counterintuitive, has passed relatively unreported. 

This is the discovery that the so-called shared family environment (i.e. the environment shared by siblings, or non-siblings, raised in the same family home) actually has next to no effect on adult personality and behaviour. 

This we know from such classic study designs in behavioural genetics as twin studies, adoption studies and family studies.

In short, individuals of a given degree of relatedness, whether identical twins, fraternal twins, siblings, half-siblings or unrelated adoptees, are, by the time they reach adulthood, no more similar to one another in personality or IQ when they are raised in the same household than when they are raised in entirely different households. 
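
To make the logic of these study designs concrete, here is a minimal sketch (in Python) of the classic Falconer method for partitioning trait variance from twin correlations. The method itself is standard behavioural genetics; the correlation values below are hypothetical, chosen only to mirror the pattern described above (roughly half the variance heritable, next to none attributable to the shared family environment).

```python
# Falconer's classic ACE decomposition from twin correlations:
#   heritability (a2)          = 2 * (r_mz - r_dz)
#   shared environment (c2)    = r_mz - a2
#   non-shared environment (e2) = 1 - r_mz
def ace_decomposition(r_mz: float, r_dz: float) -> dict:
    """Partition trait variance from identical (MZ) and fraternal (DZ) twin correlations."""
    a2 = 2 * (r_mz - r_dz)  # MZ twins share ~100% of their genes, DZ twins ~50%
    c2 = r_mz - a2          # MZ similarity left over once genes are accounted for
    e2 = 1.0 - r_mz         # whatever makes even MZ twins raised together differ
    return {"heritability": a2, "shared_env": c2, "non_shared_env": e2}

# Hypothetical adult-personality correlations: MZ twins ~0.50, DZ twins ~0.25.
print(ace_decomposition(r_mz=0.50, r_dz=0.25))
# -> {'heritability': 0.5, 'shared_env': 0.0, 'non_shared_env': 0.5}
```

On these illustrative figures, the shared family environment drops out entirely: identical twins are no more alike for having been raised in the same home than their genetic overlap alone predicts, which is precisely the pattern described above.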

The Myth of Parental Influence 

Yet parental influence has long loomed large in virtually every psychological theory of child development, from the Freudian Oedipus complex and Bowlby’s attachment theory to the whole literary genre of books aimed at instructing anxious parents on how best to raise their children so as to ensure that the latter develop into healthy, functional, successful adults. 

Indeed, not only is the conventional wisdom among psychologists overturned, but so is the conventional wisdom among sociologists – for among the aspects of the shared family environment are, of course, household income and social class. 

Thus, if the family that a person is brought up in has next to no impact on their psychological outcomes as an adult, then this means that the socioeconomic status of the family home in which they are raised also has no effect. 

Poverty, or a deprived upbringing, then, has no effect on IQ, personality or the prevalence of mental illness, at least by the time a person has reached adulthood.[2]

Neither is it only leftist sociologists who have proved mistaken. 

Thus, just as leftists use economic deprivation as an indiscriminate, catch-all excuse for all manner of social pathology (e.g. crime, unemployment, educational underperformance), so conservatives are apt to place the blame on divorce, family breakdown, having children out of wedlock and the consequent increase in the prevalence of single-parent households. 

However, all these factors are, once again, part of the shared family environment – and according to the findings of behavioural genetics, they have next to no influence on adult personality or intelligence. 

Of course, chaotic or abusive family environments do indeed tend to produce offspring with negative life outcomes. 

However, none of this proves that it was the chaotic or abusive family environment that caused the negative outcomes. 

Rather, another explanation is at hand – perhaps the offspring simply biologically inherit the personality traits of their parents, the very personality traits that caused their family environment to be so chaotic and abusive in the first place.[3] 

For example, parents who divorce or bear offspring out-of-wedlock likely differ in personality from those who first get married and then stick together, perhaps being more impulsive or less self-disciplined and conscientious (e.g. less able to refrain from having children from a relationship that was destined to be fleeting, or less able to persevere and make the relationship last). 

Their offspring may, then, simply biologically inherit these undesirable personality attributes, which then themselves lead to the negative social outcomes associated with being raised in single-parent households or broken homes. The association between family breakdown and negative outcomes for offspring might, then, reflect simply the biological inheritance of personality. 

Similarly, as leftists are fond of reminding us, children from economically-deprived backgrounds do indeed have lower recorded IQs and educational attainment than those from more privileged family backgrounds, as well as other negative outcomes as adults (e.g. lower earnings, higher rates of unemployment). 

However, this does not prove that coming from a deprived family background necessarily itself depresses your IQ, educational attainment or future salary. 

Rather, an equally plausible possibility is that offspring simply biologically inherit the low intelligence of their parents – the very low intelligence that was likely a factor causing the low socioeconomic status of their parents, since intelligence is known to correlate strongly with educational and occupational advancement.[4]

In short, the problem with this body of research purporting to demonstrate the influence of parents and family background on psychological and behavioural outcomes for offspring is that it fails to control for the heritability of personality and intelligence, an obvious confounding factor. 

The Non-Shared Environment

However, not everything is explained by heredity. As a crude but broadly accurate generalization, only about half the variation for most psychological traits is attributable to genes. This leaves about half of the variation in intelligence, personality and mental illness to be explained by environmental factors.  

What are these environmental factors if they are not to be sought in the shared family environment

The obvious answer is, of course, the non-shared family environment – i.e. the ways in which even children brought up in the same family-home nevertheless experience different micro-environments, both within the home and, perhaps more importantly, outside it. 

Thus, even the fairest and most even-handed parents inevitably treat their different offspring differently in some ways.  

Indeed, among the principal reasons why parents treat their different offspring differently is precisely because the different offspring themselves differ in their own behaviour quite independently of any parental treatment.

This is well illustrated by the question of the relationship between corporal punishment and behaviour in children.

Corporal punishment 

Rather than differences in the behaviour of different children resulting from differences in how their parents treat them, it may be that differences in how parents treat their children may reflect responses to differences in the behaviour of the children themselves. 

In other words, the psychologists have the direction of causation precisely backwards. 

Take, for example, one particularly controversial issue, namely the physical chastisement of children by their parents as a punishment for bad behaviour (e.g. spanking). 

Some psychologists have argued that physical chastisement actually causes misbehaviour. 

As evidence, they cite the fact that children who are spanked more often by their parents or caregivers on average actually behave worse than those whose caregivers only rarely or never spank the children entrusted to their care.  

This, they claim, is because, in employing spanking as a form of discipline, caregivers are inadvertently imparting the message that violence is a good way of solving your problems. 

Actually, however, I suspect children are more than capable of working out for themselves that violence is often an effective means of getting your way, at least if you have superior physical strength to your adversary. Unfortunately, this is something that, unlike reading, arithmetic and long division, does not require explicit instruction by teachers or parents. 

Instead, a more obvious explanation for the correlation between spanking and misbehaviour in children is not that spanking causes misbehaviour, but rather that misbehaviour causes spanking. 

Indeed, once you think about it, this is in fact rather obvious: If a child never seriously misbehaves, then a parent likely never has any reason to spank that child, even if the parent is, in principle, a strict disciplinarian; whereas, on the other hand, a highly disobedient child is likely to try the patience of even the most patient caregiver, whatever his or her moral opposition to physical chastisement in principle. 

In other words, causation runs in exactly the opposite direction to that assumed by the naïve psychologists.[5] 

Another factor may also be at play – namely, offspring biologically inherit from their parents the personality traits that cause both the misbehaviour and the punishment. 

In other words, parents with aggressive personalities may be more likely to lose their temper and physically chastise their children, while children who inherit these aggressive personalities are themselves more likely to misbehave, not least by behaving in an aggressive or violent manner. 

However, even if parents treat their different offspring differently owing to the different behaviour of the offspring themselves, this is not the sort of environmental factor capable of explaining the residual non-shared environmental effects on offspring outcomes. 

After all, this merely raises the question as to what caused these differences in offspring behaviour in the first place. 

If the differences in offspring behaviour exist prior to differences in parental responses to this behaviour, then these differences cannot be explained by the differences in parental responses.  

Peer Groups 

This brings us back to the question of the environmental causes of offspring outcomes – namely, if about half the differences among children’s IQs and personalities are attributable to environmental factors, but these environmental factors are not to be found in the shared family environment (i.e. the environment shared by children raised in the same household), then where are these environmental factors to be sought? 

The search for environmental factors affecting personality and intelligence has, thus far, been largely unsuccessful. Indeed, some behavioural geneticists have almost gone as far as conceding scholarly defeat in identifying correlates for the environmental portion of the variance. 

Thus, leading contemporary behavioural geneticist Robert Plomin in his recent book, Blueprint: How DNA Makes Us Who We Are, concludes that those environmental factors that affect cognitive ability, personality, and the development of mental illness are, as he puts it, ‘unsystematic’ in nature. 

In other words, he seems to be saying that they are mere random noise. This is tantamount to accepting that the null hypothesis is true. 

Judith Harris, however, has a quite different take. According to Harris, environmental causes must be sought, not within the family home, but rather outside it – in a person’s interactions with their peer-group and the wider community.[6]

Environment ≠ Nurture 

Thus, Harris argues that the so-called nature-nurture debate is misnamed, since the word ‘nurture’ usually refers to deliberate care and moulding of a child (or of a plant or animal). But many environmental effects are not deliberate. 

Thus, Harris repeatedly references behaviourist John B. Watson’s infamous boast: 

“Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”

Yet what strikes me as particularly preposterous about Watson’s boast is not its radical environmental determinism, nor even its rather convenient unfalsifiability.[7] 

Rather, what strikes me as most preposterous about Watson’s claim is its frankly breath-taking arrogance. 

Thus, Watson not only insisted that it was environment alone that entirely determined adult personality. In this same quotation, he also proclaimed that he already fully understood the nature of these environmental effects to such an extent that, given omnipotent powers to match his evidently already omniscient understanding of human development, he could produce any outcome he wished. 

Yet, in reality, environmental effects are anything but clear-cut. Pushing a child in a certain direction, or into a certain career, may sometimes have the desired effect, but other times may seemingly have the exact opposite effect to that desired, provoking the child to rebel against parental dictates. 

Thus, even to the extent that environment does determine outcomes, the precise nature of the environmental factors implicated, and their interaction with one another, and with the child’s innate genetic endowment, is surely far more complex than the simple mechanisms proposed by behaviourists like Watson (e.g. reinforcement and punishment). 

Language Acquisition 

The most persuasive evidence for Harris’s theory of the importance of peer groups comes from an interesting and widely documented peculiarity of language acquisition

The children of immigrants, whose parents speak a different language inside the family home, and may even themselves be monolingual, nevertheless typically grow up to speak the language of their host culture rather better than they do the language to which they were first exposed in the family home. 

Indeed, while their parents may never achieve fluency in the language of their host culture, having missed out on the Chomskian critical period for language acquisition, their children often actually lose the ability to speak their parents’ language, often much to the consternation of parents and grandparents. 

Yet, from a sociobiological or evolutionary psychological perspective, such an outcome is obviously adaptive. 

After all, if a child is to succeed in wider society, they must master its language, whereas, if their parents’ first language is not spoken anywhere in their host society except in their family, then it is of limited utility, and, once their parents themselves become proficient in the language of the host culture, it becomes entirely redundant.

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here):

“Children quickly discover that their home language is a restricted medium that is not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (The Ethnic Phenomenon: p258). 

Code-Switching 

Harris suggests that the same applies to personality. Just as the child of immigrants switches between one language and another at home and school, so they also adopt different personalities. 

Thus, many parents are surprised to be told by their children’s teachers at parents’ evenings that their offspring is quiet and well-behaved at school, since, they report, he or she isn’t at all like that at home. 

Yet, at home, a child has only, at most, a sibling or two with whom to compete for his parents’ attention. In contrast, at school, he or she has a whole class with whom to compete for their teacher’s attention.

It is therefore unsurprising that most children are less outgoing at school than they are at home with their parents. 

For example, an older sibling might be able to push his little brother around at home. But, if he is small for his age, he is unlikely to be able to get away with the same behaviour among his peers at school. 

Children therefore adopt two quite different personalities – one for interactions with family and siblings, and another for among their peers.

This then, for Harris, explains why, perhaps surprisingly, birth-order has generally been found to have little if any effect on personality, at least as personality manifests itself outside the family home. 

An Evolutionary Theory of Socialization? 

Interestingly, even evolutionary psychologists have not been immune from the delusion of parental influence. Thus, in one influential paper, anthropologists Patricia Draper and Henry Harpending argued that offspring calibrate their reproductive strategy by reference to the presence or absence of a father in their household (Draper & Harpending 1982). 

On this view, being raised in a father-absent household is indicative of a social environment where low male parental investment is the norm, and hence offspring adjust their own reproductive strategy accordingly, adopting a promiscuous, low-investment mating strategy characterized by precocious sexual development and an inability to maintain lasting long-term relationships (Draper & Harpending 1982; Belsky et al 1991). 

There is indeed, as these authors amply demonstrate, a consistent correlation between father-absence during development and both earlier sexual development and more frequent partner-switching in later life. 

Yet there is also another, arguably more obvious, explanation readily at hand to explain this association. Perhaps offspring simply inherit biologically the personality traits, including sociosexual orientation, of their parents. 

On this view, offspring raised in single-parent households are more likely to adopt a promiscuous, low-investment mating strategy simply because they biologically inherit the promiscuous sociosexual orientation of their parents, the very promiscuous sociosexual orientation that caused the latter to have children out-of-wedlock or from relationships that were destined to break down and hence caused the father-absent childhood of their offspring. 

Moreover, even on purely a priori theoretical grounds, Draper, Harpending and Belsky’s reasoning is dubious. 

After all, whether you personally were raised in a one- or two-parent family is obviously a very unreliable indicator of the sorts of relationships prevalent in the wider community into which you are born, since it represents a sample size of just one. 

Instead, therefore, it would be far more reliable to calibrate your reproductive strategy in response to the prevalence of one-parent households in the wider community at large, rather than the particular household type into which you happen to have been born.  

This, of course, directly supports Harris’s own theory of ‘peer group socialization’. 

In short, to the extent that children do adapt to the environment and circumstances of their upbringing (and they surely do), they must integrate into, adopt the norms of, and a reproductive strategy to maximize their fitness within, the wider community into which they are born, rather than the possibly quite idiosyncratic circumstances and attitudes of their own family. 

Absent Fathers, from Upper-Class to Under-Class 

Besides language-acquisition among the children of immigrants, another example cited by Harris in support of her theory of ‘peer group socialization’ is the culture, behaviours and upbringing of British upper-class males.

Here, she reports, boys were, and, to some extent, still are, reared primarily, not by their parents, but rather by nannies and governesses and, more recently, in exclusive fee-paying all-male boarding schools. 

Yet, despite having next to no contact with their fathers throughout most of their childhood, these boys nevertheless managed somehow to acquire manners, attitudes and accents similar, if not identical, to those of their upper-class fathers, and not at all those of the middle-class nannies, governesses and masters with whom they spent most of their childhood being raised. 

Yet this phenomenon is by no means restricted to the British upper-classes.

On the contrary, rather than citing the example of the British upper-classes in centuries gone by, Harris might just as well have cited that of the contemporary underclass in Britain and America, since what was once true of the British upper-classes is now equally true of the underclass. 

Just as the British upper-classes were once raised by governesses and nannies and in private schools, with next to no contact with their fathers, so contemporary underclass males are similarly raised in single-parent households, often by unwed mothers, and typically have little if any contact with their biological fathers. 

Here, as Warren Farrell observes in his seminal The Myth of Male Power (which I have reviewed here, here and here), there is now “a new nuclear family: woman, government and child”, what Farrell terms “Government as a Substitute Husband”. 

Yet, once again, these underclass males, raised by single parents with the financial assistance of the taxpayer, typically turn out much like their absent fathers with whom they have had little if any contact, often going on to promiscuously father a succession of offspring themselves, with whom they likewise have next to no contact. 

Abuse 

But what of actual abuse? Surely this has a long-term devastating psychological impact on children. This, at any rate, is the conventional wisdom, and questioning this wisdom, at least with respect to sexual abuse, is tantamount to contemporary heresy, with attendant persecution

Thus, for example, it is claimed that criminals who are abusive towards their children were themselves almost invariably abused, mistreated or neglected as children, and that it is this that has led to their own abusive behaviour.

A particularly eloquent expression of this theory is found in the novel Clockers, by Richard Price, where one of the lead characters, a police officer, explains how, during his first few years on the job, a senior colleague had restrained him from attacking an abusive mother who had left her infant son handcuffed to a radiator, telling him:

“Rocco, that lady you were gonna brain? Twenty years ago when she was a little girl. I arrested her father for beating her baby brother to death. The father was a piece of shit. Now that she’s all grown up? She’s a real piece of shit. That kid you saved today. If he lives that long, if he grows up? He’s gonna be a real piece of shit. It’s the cycle of shit and you can’t do nothing about it” (Clockers: p96).

Take, for example, what is perhaps the form of child abuse that provokes the most outrage and disgust – namely, sexual abuse. Here, it is frequently asserted that paedophiles were almost invariably themselves abused as children, which creates a so-called cycle of abuse

However, there are at least three problems with this claim. 

First, it cannot explain how the first person in this cycle came to be abusive. 

Second, we might doubt whether it is really true that paedophiles are disproportionately likely to have themselves been abused as children. After all, abuse is something that almost invariably happens surreptitiously ‘behind closed doors’ and is therefore difficult to verify or disprove. 

Therefore, even if most paedophiles claim to have been victims of abuse, it is possible that they are simply lying in order to elicit sympathy or excuse or shift culpability for their own offending. 

Finally, and most importantly for present purposes, even if paedophiles can be shown to be disproportionately likely to have themselves been victimized as children, this by no means proves that their past victimization caused their current sexual orientation. 

Rather, since most abuse is perpetrated by parents or other close family members, an alternative possibility is that victims simply biologically inherit the sexual orientation of their abuser.

After all, if homosexuality is partially heritable, as is now widely accepted, then why not paedophilia as well? 

In short, the ‘cycle of shit’ referred to by Price’s fictional police officer may well be real, but mediated by genetics rather than childhood experience.

However, this conclusion is not entirely clear-cut. Indeed, Harris is at pains to emphasize that the finding that the shared family environment accounts for hardly any of the variance in adult outcomes does not preclude the possibility that severe abuse may indeed have an adverse effect on those outcomes. 

After all, adoption studies can only tell us what percentage of the variance is caused by heredity, or by the shared or non-shared environment, within a specific population as a whole.

Perhaps the shared family environment accounts for so little of the variance precisely because the sort of severe abuse that does indeed have a devastating long-term effect on personality and mental health is, thankfully, so very rare in modern societies. 

Indeed, it may be especially rare within the families sampled in adoption studies precisely because adoptive families are carefully screened for suitability before being allowed to adopt. 

Moreover, Harris emphasizes an important caveat: Even if abuse does not have long-term adverse psychological effects, this does not mean that abuse causes no harm, and nor does it in any way excuse such abuse. 

On the contrary, the primary reason we shouldn’t mistreat children (and should severely punish those who do) is not on account of some putative long-term psychological effect on the adults whom the children subsequently become, but rather because of the very real pain and suffering inflicted on a child at the time the abuse takes place. 

Race Differences in IQ 

Finally, Harris even touches upon that most vexed area of the (so-called) nature-nurture debate – race differences in intelligence

Here, the politically-correct claim that differences in intelligence between human races, as recorded in IQ tests, are of purely environmental origin runs into a problem: the sorts of environmental factors usually posited by environmental determinists as accounting for the black-white test score gap in America (e.g. differences in rates of poverty and socioeconomic status) have been shown to be inadequate, because, even after controlling for these factors, there remains a still unaccounted-for gap in test scores.[8]

Thus, as Arthur R. Jensen laments: 

“This gives rise to the hypothesizing of still other, more subtle environmental factors that either have not been or cannot be measured—a history of slavery, social oppression, and racial discrimination, white racism, the ‘black experience,’ and minority status consciousness [etc]” (Straight Talk About Mental Tests: p223).

The problem with these explanations, however, is that none of these factors has yet been demonstrated to have any effect on IQ scores. 

Moreover, some of the factors proposed as explanations are formulated in such a vague form (e.g. “white racism, the ‘black experience’”) that it is difficult to conceive of how they could ever be subjected to controlled testing in the first place.[9]

Jensen has termed this mysterious factor the ‘X-factor’.

In coining this term, Jensen was emphasizing its vague, mysterious and unfalsifiable nature. He did not actually believe that this posited X-factor, whatever it was, really did account for the test-score gap. Rather, he thought heredity explained most, if not all, of the otherwise unexplained gap.

Harris, however, takes Jensen at his word, and treats the search for the X-factor very seriously. Indeed, she apparently believes she has identified it. Thus, she announces:

“I believe I know what this X factor is… I can describe it quite clearly. Black kids and white kids identify with different groups that have different norms. The differences are exaggerated by group contrast effects and have consequences that compound themselves over the years. That’s the X factor” (p248-9).

Unfortunately, Harris does not really develop this fascinating claim. Indeed, she cites no direct evidence in its support, and evidently regards the alternative possibility – namely, that race differences in intelligence are at least partly genetic in origin – as so unpalatable that it can safely be ruled out a priori.

In fact, however, although not discussed by Harris, there is at least some evidence in support of her theory. Indeed, her theory potentially reconciles the apparently conflicting findings of two of the most widely-cited studies in this vexed area of research and debate.

First, in the more recent of these two studies, the Minnesota Transracial Adoption Study, the same race differences in IQ were observed among black, white and mixed-race children adopted into upper-middle-class white families as are found among black, white and mixed-race populations in the community at large (Scarr & Weinberg 1976).

Moreover, although, when tested during childhood, the children’s adoptive households did seem to have had a positive effect on their IQ scores, in a follow-up study it was found that by the time they reached the cusp of adulthood, the black teenagers who had been adopted into upper-middle-class white homes actually scored no higher in IQ than did blacks in the wider population not raised in upper-middle class white families (Weinberg, Scarr & Waldman 1992). 

Although Scarr, Weinberg and Waldman took pains to present their findings as compatible with a purely environmentalist theory of race differences, this study has, not unreasonably, been widely cited by hereditarians as evidence for the existence of innate racial differences in intelligence (e.g. Levin 1994; Lynn 1994; Whitney 1996).

However, in the light of the findings of the behavioural genetics studies discussed by Harris in ‘The Nurture Assumption’, the fact that white upper-middle-class adoptive homes had no effect on the adult IQs of the black children adopted into them is, in fact, hardly surprising. 

After all, as we have seen, the shared family environment generally has no effect on IQ, at least by the time the person being tested has reached adulthood.[10]

One would therefore not expect adoptive homes, howsoever white and upper-middle-class, to have any effect on adult IQs of the black children adopted into them, or indeed of the white or mixed-race children adopted into them. 

In short, adoptive homes have no effect on adult IQ, whether or not the adoptees, or adoptive families, are black, white, brown, yellow, green or purple! 

But, if race differences in intelligence are indeed entirely environmental in origin, then where are these environmental causes to be found, if not in the family environment? 

Harris has an answer – black culture.

According to her, the black adoptees, although raised in white adoptive families, nevertheless still come to identify as ‘black’, and to identify with the wider black culture and social norms. In addition, they may, on account of their racial identification, come to socialize with other blacks in school and elsewhere. 

As a result of this acculturation to African-American norms and culture, they come, according to Harris, to score lower in IQ than their white peers and adoptive siblings.

But how can we ever test this theory? Is it not untestable, and is this not precisely the problem identified by Jensen with previously posited X-factors?

Actually, however, although not discussed by Harris, there is a way of testing this theory – namely, looking at the IQs of black children raised in white families where there is no wider black culture with which to identify, and few if any black peers with whom to socialize.

This, then, brings us to the second of the two studies which Harris’s theory potentially reconciles, namely the famous Eyferth study.  

Here, it was found that the mixed-race children fathered by black American servicemen who had had sexual relationships with German women during the Allied occupation of Germany after World War Two had almost exactly the same average IQ scores as a control group of offspring fathered by white US servicemen during the same time period (Eyferth 1959). 

The crucial difference from the Minnesota study may be that these children, raised in an almost entirely monoracial, white Germany in the mid-twentieth century, had no wider African-American culture with which to identify or whose norms to adopt, and few if any black or mixed-race peers in their vicinity with whom to socialize. 

This, then, is perhaps the last lifeline for a purely environmentalist theory of race differences in intelligence – namely the theory that African-American culture depresses intelligence. 

Unfortunately, however, this proposition – namely, that African-American culture depresses your IQ – is almost as politically unpalatable and politically-incorrect as is the notion that race differences in intelligence reflect innate genetic differences.[11]

Endnotes

[1] Thus, this ancient wisdom is reflected, for example, in many folk sayings, such as ‘the apple does not fall far from the tree’, ‘a chip off the old block’ and ‘like father, like son’, many of which long predate Darwin’s theory of evolution and Mendel’s work on heredity, let alone the modern work of behavioural geneticists.

[2] It is important to emphasize here that this applies only to psychological outcomes, and not, for example, economic outcomes. For example, a child raised by wealthy parents is indeed likely to be wealthier than one raised in poverty, if only because s/he is likely to inherit (some of) the wealth of his/her parents. It is also possible that s/he may, on average, obtain a better job as a consequence of the opportunities opened up by his/her privileged upbringing. However, his/her IQ will be no higher than had s/he been raised in relative poverty, and neither will s/he be any more or less likely to suffer from a mental illness.

[3] Similarly, it is often claimed that children raised in care homes, or in foster care, tend to have negative life-outcomes. However, again, this by no means proves that it is care homes or foster care that causes these negative life-outcomes. On the contrary, since children who end up in foster care are typically either abandoned by their biological parents, or forcibly taken from their parents by social services on account of the inadequate care provided by the latter, or sometimes outright abuse, it is obvious that their parents represent an unrepresentative sample of society as a whole. An obvious alternative explanation, then, is that the children in question simply inherit the dysfunctional personality attributes of their biological parents, namely the very dysfunctional personality attributes that caused the latter to either abandon their children or have them removed by the social services. (In other cases, such children may have been orphaned. However, this is less common today. At any rate, parents who die before their offspring reach maturity are surely also unrepresentative of parents in general. For example, many may live high-risk lifestyles that contribute to their early deaths.)

[4] Likewise, the heritability of such personality traits as conscientiousness and self-discipline, in addition to that of intelligence, likely also partly accounts for the association between parental income and academic attainment among their offspring, since both academic attainment and occupational success require the self-discipline to work hard. These factors, again in addition to intelligence, likely also contribute to the association between parental income and the income and socioeconomic status ultimately attained by their offspring.

[5] This possibility could, at least in theory, be ruled out by longitudinal studies, which could investigate whether the spanking preceded the misbehaviour, or vice versa. However, this is easier said than done since, unless relying on the reports of caregivers or the children themselves, which depend on both their memory and their honesty, it would have to involve intensive, long-term and continuous observation in order to establish which came first, namely the pattern of misbehaviour or the adoption of physical chastisement as a method of discipline. This would, presumably, require continuous observation from birth onwards, so as to ensure that the very first instance of spanking or excessive misbehaviour was recorded. Such a study would seem all but impossible and certainly, to my knowledge, has yet to be conducted.

[6] The fact that the relevant environmental variables must be sought outside the family home is one reason why the terms ‘between-family environment’ and ‘within-family environment’, sometimes used as synonyms or alternatives for ‘shared’ and ‘non-shared family environment’ respectively, are potentially misleading. Thus, the ‘within-family environment’ refers to those aspects of the environment that differ for different siblings even within a single family. However, these factors may differ within a single family precisely because they occur outside, not within, the family itself. The terms ‘shared’ and ‘non-shared family environment’ are therefore to be preferred, so as to avoid any potential confusion these alternative terms could cause.

[7] Both practical and ethical considerations, of course, prevent Watson from actually creating his “own specified world” in which to bring up his “dozen healthy infants”, so no one is able to put his claim to the test. The claim is therefore unfalsifiable, and Watson is free to make such boasts, safe in the knowledge that there is no danger of his being called upon to make good on them and thereby proven wrong.

[8] Actually, even if race differences in IQ are found to disappear after controlling for socioeconomic status, it would be a fallacy to conclude that this means that the differences in IQ are entirely a result of differences in social class and that there is no innate difference in intelligence between the races. After all, differences in socioeconomic status are in large part a consequence of differences in cognitive ability, as more intelligent people perform better at school, and at work, and hence rise in socioeconomic status. Therefore, in controlling for socioeconomic status, one is, in effect, also controlling for differences in intelligence, since the two are so strongly correlated. The contrary assumption has been termed by Jensen ‘the sociologist’s fallacy’.
This fallacy involves the assumption that it is differences in socioeconomic status that cause differences in IQ, rather than differences in intelligence that cause differences in socioeconomic status. As Arthur Jensen explains it:

“If SES [i.e. socioeconomic status] were the cause of IQ, the correlation between adults’ IQ and their attained SES would not be markedly higher than the correlation between children’s IQ and their parents’ SES. Further, the IQs of adolescents adopted in infancy are not correlated with the SES of their adoptive parents. Adults’ attained SES (and hence their SES as parents) itself has a large genetic component, so there is a genetic correlation between SES and IQ, and this is so within both the white and the black populations. Consequently, if black and white groups are specially selected so as to be matched or statistically equated on SES, they are thereby also equated to some degree on the genetic component of IQ” (The g Factor: p491).
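
The statistical point can be illustrated with a toy simulation (my own sketch, not anything offered by Jensen or Harris; all population parameters are arbitrary assumptions). Here, SES is modelled as purely downstream of IQ, with no causal effect on it whatsoever, and yet ‘controlling for’ SES still substantially shrinks the measured gap between two hypothetical groups:

```python
# A toy model of the 'sociologist's fallacy' (illustrative only; all
# parameters are arbitrary assumptions). SES is an *effect* of IQ here,
# yet statistically controlling for SES still shrinks the group gap.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical groups whose true mean IQs differ by 10 points.
group = rng.integers(0, 2, n)
iq = 100 + 10 * group + rng.normal(0, 15, n)

# SES is caused by IQ plus noise; it has no causal effect on IQ.
ses = 0.5 * iq + rng.normal(0, 10, n)

# Raw gap: close to the true 10 points.
raw_gap = iq[group == 1].mean() - iq[group == 0].mean()

# 'Controlling for SES' by regressing IQ on group and SES together.
X = np.column_stack([np.ones(n), group, ses])
beta, *_ = np.linalg.lstsq(X, iq, rcond=None)

print(f"raw gap: {raw_gap:.1f}")                    # close to 10
print(f"gap 'controlling for' SES: {beta[1]:.1f}")  # noticeably smaller
```

Because SES in this model is partly a proxy for IQ itself, matching on SES partially matches on IQ, and the ‘adjusted’ gap falls well below the true 10-point difference – exactly the inferential trap Jensen describes.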

[9] Actually, at least some of these theories are indeed testable and potentially falsifiable. With regard to the factors quoted by Jensen (namely, “a history of slavery, social oppression, and racial discrimination, white racism… and minority status consciousness”), one way of testing these theories is to look at test scores in those countries where there is no such history. For example, in sub-Saharan Africa, as well as in Haiti and Jamaica, blacks are in the majority, and are moreover in control of the government. Yet the IQ scores of the indigenous populations of sub-Saharan Africa are actually even lower than among blacks in the USA (see Richard Lynn’s Race Differences in Intelligence: reviewed here). True, most such countries still have a history of racial oppression and discrimination, albeit in the form of European colonialism rather than racial slavery or segregation in the American sense. However, in those few sub-Saharan African countries that were not colonized by western powers, or only briefly colonized (e.g. Ethiopia, Liberia), scores are not any higher. Also, other minority groups ostensibly or historically subject to racial oppression and discrimination (e.g. Ashkenazi Jews, Overseas Chinese) actually score higher in IQ than the host populations that ostensibly oppress them. As for “the ‘black experience’”, this merely begs the question as to why the ‘black experience’ has been so similar, and resulted in the same low IQs, in so many different parts of the world, something implausible unless the ‘black experience’ itself reflects innate aspects of black African psychology.

[10] The fact that the heritability of intelligence is higher in adulthood than during childhood, and the influence of the shared family environment correspondingly decreases, has been interpreted as reflecting the fact that, during childhood, our environments are shaped, to a considerable extent, by our parents. For example, some parents may encourage activities that may conceivably enhance intelligence, such as reading books and visiting museums. In contrast, as we enter adulthood, we begin to have freedom to choose and shape our own environments, in accordance with our interests, which may be partly a reflection of our heredity.
Interestingly, this theory suggests that what is biologically inherited is not necessarily intelligence itself, but rather a tendency to seek out intelligence-enhancing environments, i.e. intellectual curiosity rather than intelligence as such. In fact, it is probably a mixture of both factors. Moreover, intellectual curiosity is surely strongly correlated with intelligence, if only because it requires a certain level of intelligence to appreciate intellectual pursuits, since, if one lacks the ability to learn or understand complex concepts, then intellectual pursuits are necessarily unrewarding.

[11] Thus, ironically, the recently deceased James Flynn, though always careful, throughout his career, to remain on the politically-correct radical environmentalist side of the debate with regard to the causes of race differences in intelligence, nevertheless recently found himself taken to task by the leftist, politically-correct British Guardian newspaper for a sentence in his recent book, Does Your Family Make You Smarter?, where he described American blacks as coming “from a cognitively restricted subculture” (Wilby 2016). Thus, whether one attributes lower black IQs to biology or to culture, either answer is certain to offend leftists, and the power of political correctness can, it seems, never be appeased.

References 

Belsky, Steinberg & Draper (1991) Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization. Child Development 62(4): 647-670

Draper & Harpending (1982) Father Absence and Reproductive Strategy: An Evolutionary Perspective. Journal of Anthropological Research 38(3): 255-273

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana 2: 102-114

Levin (1994) Comment on the Minnesota Transracial Adoption Study. Intelligence 19: 13-20

Lynn (1994) Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence 19: 21-27

Scarr & Weinberg (1976) IQ test performance of black children adopted by white families. American Psychologist 31(10): 726-739

Weinberg, Scarr & Waldman (1992) The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence 16: 117-135

Whitney (1996) Shockley’s experiment. Mankind Quarterly 37(1): 41-60

Wilby (2016) Beyond the Flynn effect: New myths about race, family and IQ? Guardian, September 27

A Modern McCarthyism in our Midst

Anthony Browne, The Retreat of Reason: Political Correctness and the Corruption of Public Debate in Modern Britain (London: Civitas, 2006) 

Western civilization has progressed. Today, unlike in earlier centuries, we no longer burn heretics at the stake.

Instead, according to sociologist Steven Goldberg, himself no stranger to contemporary heresy, these days: 

“All one has to lose by unpopular arguments is contact with people one would not be terribly attracted to anyway” (Fads and Fallacies in the Social Sciences: p222).

Unfortunately, however, Goldberg underplays, not only the psychological impact of ostracism, but also the more ominous consequences that sometimes attach to contemporary heresy.

“While columnists, academics, and filmmakers delight in condemning, without fear of reprisals, a form of McCarthyism that ran out of steam over half a century ago (i.e. anti-communism), few dare to incur the wrath of the contemporary inquisition by exposing a modern McCarthyism right here in our midst”

Thus, bomb and death threats were repeatedly issued to women such as Erin Pizzey and Suzanne Steinmetz for pointing out that women were just as likely, or indeed somewhat more likely, to perpetrate acts of domestic violence against their husbands and boyfriends as vice versa – a finding now replicated in literally hundreds of studies (see also Domestic Violence: The 12 Things You Aren’t Supposed to Know).

Similarly, in the seventies, Arthur Jensen, a psychology professor at the University of California, had to be provided with an armed guard on campus after suggesting, in a sober and carefully argued scientific paper, that it was a “not unreasonable” hypothesis that the IQ difference between blacks and whites in America was partly genetic in origin.

Political correctness has also cost people their jobs. 

Academics like Chris Brand, Helmuth Nyborg, Lawrence Summers, Frank Ellis, Noah Carl and, most recently, Bo Winegard have been forced to resign or have lost their academic positions as a consequence of researching, or, in some cases, merely mentioning, politically incorrect theories such as the possible social consequences of, or innate basis for, sex and race differences in intelligence.

Indeed, even the impeccable scientific credentials of James Watson, a man jointly responsible for one of the most important scientific discoveries of the twentieth century, did not spare him this fate when he was reported in a newspaper as making some controversial but eminently defensible comments regarding population differences in cognitive ability and their likely impact on prospects for economic development.

At the time of (re-)writing this piece, the most recent victim of this process of purging in academia is the celebrated historian, and long-term controversialist, David Starkey, excommunicated for some eminently sensible, if crudely expressed, remarks about slavery. 

Meanwhile, as proof of the one-sided nature of the witch-hunt, during the very same month in which Starkey was excommunicated from public life, a decidedly less eminent non-white leftist female academic, Priyamvada Gopal, now a professor of Postcolonial Studies, posted the borderline genocidal tweet:

“White lives don’t matter. As white lives.”[1]

Yet the only repercussion she faced from her employer, Cambridge University, was to be almost immediately promoted to a full professorship.

Cambridge University also issued a defence of its employees’ right to academic freedom, the institution itself tweeting in response to the controversy that:

“[Cambridge] University defends the right of its academics to express their own lawful opinions which others might find controversial.”

This is indeed an admirable and principled stance – if applied consistently. 

Unfortunately, however, although this tweet was phrased in general terms, and actually included no mention of Gopal by name, it was evidently not of general application. 

For Cambridge University is not only among the institutions from which Starkey was forced to tender his resignation that very same month. It is also the very same institution that, only a year before, had denied a visiting fellowship to Jordan Peterson, the eminent public intellectual, on account of his controversial stances and statements on a range of topics, and that, only two years before, had stripped sociologist Noah Carl of an academic fellowship, after a letter calling for his dismissal was signed by, among others, none other than the loathsome Priyamvada Gopal herself.

The inescapable conclusion is that the freedom of “academics to express lawful opinions which others might find controversial” at Cambridge University applies, despite the general wording of the tweet from which these words are taken, only to those controversial opinions of which the leftist academic and cultural establishment currently approves.

Losing Your Livelihood 

If I might be accused here of focusing excessively on freedom of speech in an academic context, this is only because academia is among the arenas where freedom of expression is most essential, as it is only if all ideas, howsoever offensive to certain protected groups, are able to freely circulate, and compete, in the marketplace of ideas that knowledge is able to progress through a selective process of testing and falsification.[2]

However, although the university environment is, today, especially intolerant, nevertheless similar fates have also befallen non-academics, many of whom have been deprived of their livelihoods on account of their politics. 

For example, in The Retreat of Reason, the 2006 book of which this post is ostensibly a review, Anthony Browne points to the case of a British headmaster sacked for saying Asian pupils should be obliged to learn English, a policy that was then, only a few years later, actually adopted as official government policy (p50). 

In the years since the publication of ‘The Retreat of Reason’, such examples have only multiplied. 

Indeed, today it is almost taken for granted that anyone caught saying something controversial and politically incorrect on the internet in his own name, or even under a pseudonym if subsequently ‘doxed’, is liable to lose his job.

Likewise, Browne noted that police and prison officers in the UK were then (and are still) barred from membership of the BNP, a legal and constitutional political party, but not from membership of Sinn Fein, which until quite recently had supported domestic terror against the British state, including the murder of soldiers, civilians and the police themselves, nor of various Marxist groups that openly advocate the violent overthrow of the state and indeed the whole capitalist system (p51-2).

Today, meanwhile, even believing that a person cannot change their biological sex is said to be a bar on admission into the British police force.

Moreover, employees sacked on account of their political views cannot always even turn to their unions for support.

On the contrary, trade unions have themselves expelled members for their political views, and indeed for membership of this same political party (p52). They have even successfully defended doing precisely this before the European Court of Human Rights, citing the right to freedom of association (see ASLEF v UK [2007] ECHR 184).

Yet, ironically, freedom of association is not only the very same freedom denied to employers by anti-discrimination laws, but also the very freedom that surely guarantees a person’s right to be a member of a constitutional, legal political party, or to express controversial political views outside of their work, without being at risk of losing their job or being banished from their union.

Browne concludes:

“One must be very disillusioned with democracy not to find it at least slightly unsettling that in Europe in the twenty-first century government employees are being banned from joining certain legal political parties but not others, legal democratic party leaders are being arrested in dawn raids for what they have said and political parties leading the polls are being banned by judges” (p57).

Of course, racists and members of parties like the BNP hardly represent a fashionable cause célèbre for civil libertarians. But, then, neither did other groups targeted for political persecution at the time of their political persecution. This is, of course, precisely what rendered them so vulnerable to persecution. 

Political correctness is often dismissed as a trivial issue, which only bigots and busybodies bother complaining about when there are so many supposedly more serious problems in the world today. 

Yet free speech is never trivial. When people lose their jobs and livelihoods because of currently unfashionable opinions, what we are witnessing is a modern form of McCarthyism. 

Indeed, as conservative commentator David Horowitz observes: 

“The era of the progressive witch-hunt has been far worse in its consequences to individuals and freedom of expression than was the McCarthy era… [not least because] unlike the McCarthy era witch-hunt, which lasted only a few years, the one enforced by left-wing ‘progressives’ is now entering its third decade and shows no signs of abating” (Left Illusions: An Intellectual Odyssey).[3]

Thus, the McCarthyism of the 1950s positively pales into insignificance as compared to the McCarthyism that operates in the west today. The former involved a few communists, suspected communists and communist sympathizers being forced out of their jobs at the height of the Cold War and of Soviet infiltration (which was very real); the latter involves untold numbers of people losing their jobs, being excommunicated from public life and polite society, harassed, demonized and sometimes criminally prosecuted for currently unfashionable and politically-incorrect opinions.

Yet, while columnists, academics, and filmmakers delight in condemning, without fear of reprisals, a form of McCarthyism that ran out of steam over half a century ago (i.e. anti-communism during the Second Red Scare), few dare to incur the wrath of the contemporary inquisition by exposing a modern McCarthyism right here in our midst.

Recent Developments 

Browne’s ‘The Retreat of Reason’ was first published in 2006. Unfortunately, however, in the intervening decade and a half, despite Browne’s wise counsel, the situation has only worsened.

Thus, what was then called ‘political correctness’ has itself now morphed into what are today called ‘wokeness’ and ‘cancel culture’ – phenomena which predate the coinage of these terms, but which, though representing a difference of degree rather than of kind, nevertheless reflect a more than merely semantic transformation.

Thus, in 2006, Browne rightly championed the new media facilitated by the internet age, such as blogs (like this one, hopefully), for disseminating controversial, politically-incorrect ideas and opinions, and thereby breaking the mainstream media’s monopoly on the dissemination of information and ideas (p85).

Here, Browne was surely right. Indeed, new media, such as blogs, have not only been responsible for disseminating ideas that are largely taboo in the mainstream media, but even for breaking news stories that the mainstream media had suppressed, such as the predominant racial background of the men responsible for the 2015-2016 New Year’s Eve sexual assaults in Germany.

However, in the decade and a half since ‘The Retreat of Reason’ was published, censorship has become increasingly restrictive even in the virtual sphere. 

Thus, internet platforms like YouTube, Patreon, Facebook and Twitter increasingly deplatform content-creators with politically incorrect viewpoints, and, in a particularly disturbing move, some websites have even been, at least temporarily, forced offline, or banished to the dark web, by their web hosting providers.

Doctrinaire libertarians respond that this is not a free speech issue, but rather a freedom of association issue, because these are private businesses with the right to deny service to anyone with whom they, for whatever reason, choose not to contract.

In reality, however, platforms like Facebook and Twitter are far more than merely private businesses. As virtual monopolies in their respective markets, they form part of the infrastructure of everyday life in the twenty-first century.

To be banned from communicating on Facebook is tantamount to being barred from communication in a public place.

Moreover, the problem is only exacerbated by the fact that the few competitors seeking to provide an alternative to these ‘Big Tech’ monopolies are themselves being deplatformed by their own hosting providers as a direct consequence of their commitment to free speech and willingness to host controversial content.

Likewise, the denial of financial services, such as bank accounts, loans and payment processing, to groups or individuals on the basis of their politics is particularly troubling, making it all but impossible for those affected to remain financially viable. The result is tantamount to being made an ‘unperson’.

Moreover, far from remaining a hub of free expression, social media in particular has increasingly provided a rallying and recruiting ground for moral outrage and repression, not least in the form of so-called twittermobs, intent on publicly shaming, harassing and denying employment opportunities to anyone of whose views they disapprove.

In short, if the internet has facilitated free speech, it has also facilitated political persecution, since today, it seems, one can enjoy all the excitement and exhilaration of joining a witch-hunt, pitchfork proudly in hand, without ever straying from the comfort of one’s computer screen.

Explaining Political Correctness 

For Browne, PC represents “the dictatorship of virtue” (p7) and replaces “reason with emotion” and subverts “objective truth to subjective virtue” (xiii). 

“Political correctness is an assault on both reason and… democracy. It is an assault on reason, because the measuring stick of the acceptability of a belief is no longer its objective, empirically established truth, but how well it fits in with the received wisdom of political correctness. It is an assault on… democracy because [its] pervasiveness… is closing down freedom of speech” (p5).

Yet political correctness is not wholly without precedents. 
 
On the contrary, every age has its taboos. Thus, in previous centuries, it was compatibility with religious dogma rather than leftist orthodoxy that represented the primary “measuring stick of the acceptability of a belief” – as Galileo, among others, was to discover for his pains.

Although, as a conservative, Browne might be expected to be favourably disposed to traditional religion, he nevertheless acknowledges the analogy between political correctness and the religious dogmas of an earlier age: 

“Christianity… has shown many of the characteristics of modern political correctness and often went far further in enforcing its intolerance with violence” (p29).

Indeed, this intolerance is not restricted to Christianity. Whereas it was in an earlier age that Christianity persecuted heresy with even greater intolerance than does the contemporary left, in many parts of the world, including, increasingly, the West itself, Islam still does so today.

As well as providing an analogous justification for the persecution of heretics, political correctness may also, Browne suggests, serve a similar psychological function to religion, in representing: 

“A belief system that echoes religion in providing ready, emotionally-satisfying answers for a world too complex to understand fully and providing a gratifying sense of righteousness absent in our otherwise secular society” (p6).

Defining Political Correctness

What, then, do we mean by ‘political correctness’? 

Political correctness evaluates a claim, not on its truth, but on its offensiveness to certain protected groups. Some views are held to be not only false, indeed sometimes not even false, but rather unacceptable, unsayable and beyond the bounds of acceptable opinion. 

Indeed, for the enforcers of the politically correct orthodoxy, the truth or falsehood of a statement seems ultimately to be of little interest. 

Browne provides a useful definition of political correctness as: 

“An ideology which classifies certain groups of people as victims in need of protection from criticism and which makes believers feel that no dissent should be tolerated” (p4).

Refining this, I would say that, for an opinion to be ‘politically incorrect’, two criteria must be met:

1) The existence of a group that regards the opinion in question as ‘offensive’; and
2) The perception of that group as ‘oppressed’.

Thus, it is perfectly acceptable to disparage and offend supposedly ‘privileged’ groups (e.g. males, white people, Americans or the English), but groups with ‘victim-status’ are deemed sacrosanct and beyond reproach, at least as a group. 

Victim Status

Victim-status itself, however, seems to be rather arbitrarily bestowed. 

Certainly, actual poverty or economic deprivation has little to do with it. 

“It is acceptable to denigrate the white working class as ‘chavs’ and ‘rednecks’, but multi-millionaires who happen to be black, female or homosexual can perversely pose as ‘oppressed’. The ‘ordinary working man’, once the quintessential proletarian, has found himself recast in leftist demonology as a racist, homophobic, wife-beating bigot.”

Thus, it is perfectly acceptable to denigrate the white working class. Pejorative epithets aimed at them, such as ‘redneck’, ‘chav’ and ‘white trash’, are widely employed and considered socially acceptable in polite (and not so polite) conversation (see The Redneck Manifesto).

Yet the use of comparably derogatory terms in respect of, say, black people, is considered wholly beyond the pale, and sufficient to end media careers in Britain and America.

However, multi-millionaires who happen to be black, female or homosexual are permitted to perversely pose as ‘oppressed’, and wallow in their ostensible victimhood.

Thus, in the contemporary West, the Left has largely abandoned its traditional constituency, namely the working class, in favour of ethnic minorities, homosexuals and feminists.

In the process, the ‘ordinary working man’, once the quintessential proletarian, has found himself recast in leftist demonology as a racist, homophobic, wife-beating bigot.

Likewise, men are widely denigrated in popular culture. Yet, contrary to the feminist dogma which maintains that men have disproportionate power and are privileged, it is in fact men who are overwhelmingly disadvantaged by almost every sociological measure.

Thus, Browne writes: 

“Men were overwhelmingly underachieving compared with women at all levels of the education system, and were twice as likely to be unemployed, three times as likely to commit suicide, three times as likely to be a victim of violent crime, four times as likely to be a drug addict, three times as likely to be alcoholic and nine times as likely to be homeless” (p49).

Indeed, overt discrimination against men, such as the different ages at which men and women were then eligible for state pensions in the UK (p25; p60; p75) and the higher insurance premiums demanded of men (p73), is widely tolerated.[4]

“The demand for equal treatment only goes as far as it advantages the [ostensibly] less privileged sex” (p77).

“‘Victim status’ is a relative concept. Thus, feminists may have victim power over men, but, as soon as the men in question decide to don lipstick and dresses and identify as ‘transwomen’, suddenly the feminists find that the high heel stilettos are, both literally and metaphorically, very much on the other foot.”

Victim status is not only seemingly arbitrarily accorded, it is also a relative concept.

Thus, the Scots and Irish may have a degree of victim-status in relation to their historical enemy, the English, such that the vitriolic anti-English and anti-British rhetoric of many Scottish and Irish nationalists tends to receive a free pass – so long as it remains safely directed against the English.

However, as soon as Scottish or Irish nationalism comes to be directed, not against the British or English, but rather at recent nonwhite immigrants to Scotland and Ireland, who, unlike the English, arguably represent the real threat to Scottish and Irish identity and nationhood today, it suddenly becomes anathema and beyond the pale.

Likewise, women may indeed, as we have seen, possess victim power vis a vis men. However, as soon as the men in question decide to put on lipstick and dresses and identify as ‘transwomen’, suddenly the feminists find, much to their chagrin, that the high heel stilettos are, both literally and metaphorically, very much on the other foot.[5]

The arbitrary way in which recognition as an ‘oppressed group’ is accorded, together with the massive benefits accruing to demographics that have secured such recognition, has created a perverse process that Browne aptly terms “competitive victimhood” (p44). 

“Few things are more powerful in public debate than… victim status, and the rewards… are so great that there is a large incentive for people to try to portray themselves as victims” (p13-4)

Thus, groups currently campaigning for ‘victim status’ include, he reports, “the obese, Christians, smokers and foxhunters” (p14). 

The result is what economists call ‘perverse incentives’:

“By encouraging people to strive for the bottom rather than the top, political correctness undermines one of the main driving forces in society, the individual pursuit of self-improvement” (p45)

This outcome can perhaps even be viewed as the ultimate culmination of what Nietzsche called the transvaluation of values, whereby, under the influence of Christian ethics, disadvantage, weakness and oppression are converted into positive virtues and even, paradoxically, into strength. 

Euroscepticism & Brexit

Unfortunately, despite his useful definition of the phenomenon of political correctness, Browne goes on to use the term ‘political correctness’ in a broader fashion that goes beyond this original definition, and, in my opinion, extends the concept beyond its sphere of usefulness. 

For example, he classifies Euroscepticism – i.e. opposition to the further integration of the European Union – as a politically incorrect viewpoint (p60-62). 

Here, however, there is no obvious ‘oppressed group’ in need of protection. 

“The term ‘political correctness’ serves a similar function for conservatives as the term ‘fascist’ does for leftists – namely a useful catchall label to be applied to any views with which they themselves happen to disagree.”

Moreover, although widely derided as ignorant and jingoistic, Eurosceptical opinions have never actually been deemed ‘offensive’ or beyond the bounds of acceptable opinion.

On the contrary, they are regularly aired in mainstream media outlets, and even on the BBC, and recently scored a final victory in Britain with the Brexit campaign of 2016.  

Browne’s extension of the concept of political correctness in this way is typical of many critics of political correctness, who succumb to the temptation to define as ‘political correctness’ any view with which they themselves happen to disagree.

This enables them to tar any views with which they disagree with the pejorative label of ‘political correctness’.

It also, perhaps more importantly, allows ostensible opponents of political correctness to condemn the phenomenon without ever actually violating its central taboos by discussing any genuinely politically incorrect issues. 

They can therefore pose as heroic opponents of the inquisition while never actually themselves incurring its wrath. 

The term ‘political correctness’ therefore serves a similar function for conservatives as the term ‘fascist’ does for leftists – namely a useful catchall label to be applied to any views with which they themselves happen to disagree.[6]

Jews, Muslims and the Middle East 

Another example of Browne’s extension of the concept of political correctness beyond its sphere of usefulness is his characterization of any defence of the policies of Israel as ‘politically incorrect’. 

“While the left endlessly recycles statistics demonstrating the overrepresentation of white males in positions of power and privilege, to cite similar statistics showing the even greater per capita overrepresentation of Jews in these exact same positions of power and privilege is somehow beyond the pale.”

Yet, here, the ad hominem and guilt-by-association methods of debate (or rather of shutting down debate), which Browne rightly describes as characteristic of political correctness (p21-2), are more often used by defenders of Israel than by her critics – though, here, the charge of ‘anti-Semitism’ is substituted for the usual refrain of ‘racism’.[7]

Thus, in the US, any suggestion that the US’s small but disproportionately wealthy and influential Jewish community influences US foreign policy in the Middle East in favour of Israel is widely dismissed as anti-Semitic and roughly tantamount to proposing the existence of a world Jewish conspiracy led by the Learned Elders of Zion.

Admittedly, Browne acknowledges: 

“The dual role of Jews as oppressors and oppressed causes complications for PC calculus” (p12).

In other words, the role of the Jews as victims of persecution in National Socialist Germany conflicts with, and weighs against, their current role as perceived oppressors of the Palestinians in the Middle East. 

However, having acknowledged this complication, Browne immediately dismisses its importance, all too hastily going on to conclude in the very same sentence that: 

“PC has now firmly transferred its allegiance from the Jews to Muslims” (p12).

However, in many respects, the Jews retain their ‘victim-status’ despite their hugely disproportionate wealth and political power.

Indeed, perhaps the best evidence of this is the taboo on referring to this disproportionate wealth and power. 

Thus, while the political Left never tires of endlessly recycling statistics demonstrating the supposed overrepresentation of ‘white males’ in positions of power and privilege, to cite similar statistics demonstrating the even greater per capita overrepresentation of Jews in these exact same positions of power and privilege is somehow deemed beyond the pale, and evidence, not of leftist sympathies, but rather of being ‘far right’.

This is despite the fact that the average earnings of American Jews, and their level of overrepresentation in influential positions in government, politics, media and business relative to their population size, surely far outstrip those of any other demographic – white males very much included.

The Myth of the Gender Pay Gap 

One area where Browne claims that the “politically correct truth” conflicts with the “factually correct truth” is the causes of the gender pay-gap (p8; p59-60). 

This is also included by philosopher David Conway as one of six issues, raised by Browne in the main body of his text, for which Conway provides supportive evidence in an afterword entitled ‘Commentary: Evidence supporting Anthony Browne’s Table of Truths Suppressed by PC’, included as a sort of appendix in later editions of Browne’s book. 

Although offering no sources was still standard practice in mainstream journalism at the time his book was written, it is regrettable that Browne himself provides none to back up the statistics he cites in his text.

This commentary section therefore provides the only real effort to provide sources or citations for many of Browne’s claims. Unfortunately, however, it covers only a few of the many issues addressed by Browne in preceding pages. 

In support of Browne’s contention that “different work/life choices” and “career breaks” underlie the gender pay gap (p8), Conway cites the work of sociologist Catherine Hakim (p101-103). 

Actually, more comprehensive expositions of the factors underlying the gender pay gap are provided by Warren Farrell in Why Men Earn More (which I have reviewed here, here and here) and Kingsley Browne in Biology at Work: Rethinking Sexual Equality (which I have reviewed here and here). 

Moreover, while it is indeed true that the pay-gap can largely be explained by what economists call ‘compensating differentials’ – e.g. the fact that men work longer hours, in more unpleasant and dangerous working conditions, and for a greater proportion of their adult lives – Browne fails to factor in the final and decisive feminist fallacy regarding the gender pay gap, namely the assumption that, because men earn more money than women, they necessarily have more money than women and are wealthier.

In fact, however, although men earn more money than women, much of this money is then redistributed to women via such mechanisms as marriage, alimony, maintenance, divorce settlements and the culture of dating.

Indeed, as I have previously provocatively proposed:

“The entire process of conventional courtship is predicated on prostitution, from the social expectation that the man will pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife through alimony and maintenance for anything up to ten or twenty years after he has belatedly rid himself of her.”

Therefore, much of the money earned by men is actually spent by, or on, their wives, ex-wives and girlfriends (not to mention daughters), such that, although women earn less than men, they have long been known to researchers in the marketing industry to control about 80% of consumer spending.

However, Browne does usefully debunk another area in which the demand for equal pay has resulted in injustice – namely the demand for equal prizes for male and female athletes at the Wimbledon Tennis Championships (a demand since cravenly capitulated to). Yet, as Browne observes: 

“Logically, if the prize doesn’t discriminate between men and women, then the competition that leads to those prizes shouldn’t either… Those who insist on equal prizes, because anything else is discrimination, should explain why it is not discrimination for men to be denied an equal right to compete for the women’s prize.” (p77).[8]

Thus, Browne perceptively observes: 

“It would currently be unthinkable to make the same case for a ‘whites only’ world athletics championship… [Yet] it is currently just as pointless being a white 100 metres sprinter in colour-blind sporting competitions as it would be being a woman 100 metres sprinter in gender-blind sporting competitions” (p77).

International Aid 

Another topic addressed by both Browne (p8) and Conway (p113-115) is the reasons for African poverty.

The politically correct explanation, according to Browne, is that African poverty results from inadequate international aid (p8). However, Browne observes: 

“No country has risen out of poverty by means of international aid and cancelling debts” (p20).[9]

Moreover, Browne points out that fashionable policies such as “writing off Third World debt” produce perverse incentives by “encourag[ing] excessive and irresponsible borrowing by governments” (p48), while international aid encourages economic dependence, bureaucracies and corruption (p114).

Actually, in my experience, the usual explanation given for African underdevelopment is not, as Browne and Conway suggest, inadequate international aid as such.

After all, this explanation only raises the question as to how today’s developed countries, such as those in Europe, managed to achieve First World living standards back when there were no other wealthy First World countries around to provide them with international aid to assist their development.

Instead, in my experience, most leftists blame African poverty and underdevelopment on the supposed legacy of European colonialism. Thus, it is argued that European nations, and indeed white people in general, are themselves to blame for the poverty of Africa. International aid is then reimagined as a form of recompense for past wrongs. 

Unfortunately, however, this explanation for African poverty fares barely any better. 

For one thing, it merely raises the question of why it was Africa that was colonized by Europeans, rather than vice versa.

The answer, of course, is that much of sub-Saharan Africa was ‘underdeveloped’ (i.e. socially and technologically backward) even before colonization. This was what allowed Africa to be so easily and rapidly conquered and colonized during the late-nineteenth and early-twentieth centuries.

Moreover, if European colonization is really to blame for the poverty of so much of sub-Saharan Africa, then why is it that those few African countries largely spared European colonization, such as Liberia and Ethiopia, are, if anything, even worse off than their neighbours, in part precisely because they lack the infrastructure (e.g. roads, railways) that the much-maligned European colonial overlords bequeathed to other African states?

In other words, far from holding Africa back, European colonizers often built what little infrastructure and successful industry sub-Saharan Africa still has, and African countries are poor despite colonialism rather than because of it.[10]

The assumption that the experience of European colonialism invariably impeded the economic development of regions formerly subject to European colonial rule is further falsified by the experience of former European colonies in parts of the world other than Africa.

Here, there have been many notable success stories, including Malaysia, Singapore, Hong Kong, even India, not to mention Canada, Australia and New Zealand, many of which gained their independence at around the same time as African polities.

A history of European colonization is, then, it seems, no bar to economic development outside of Africa. Why, then, has the experience in Africa itself been so different?

Browne and Conway, for their part, place the blame firmly on Africans themselves – but on African rulers rather than the mass of African people. The real reason for African poverty, they report, is simply “bad governance” on the part of Africa’s post-colonial rulers (p8).

“Poverty in Africa has been caused by misrule rather than insufficient aid” (p113).

Unfortunately, however, this is hardly a complete explanation, since it merely raises the question as to why Africa has been so prone to “misrule” and “bad governance” in the first place.

It also raises the question as to why regions outside of Africa that are nevertheless populated by people of predominantly sub-Saharan African ancestry, such as Haiti and Jamaica (or even Baltimore and Detroit), are seemingly beset by many of the same problems (e.g. high levels of violent crime and poverty).

This last observation, of course, suggests that the answer lies, not in African soil or geography, as suggested by, for example, Jared Diamond in his book Guns, Germs and Steel (which I have reviewed here), but rather in differences between races in personality, intelligence and behaviour.[11]

However, this is, one suspects, a conclusion too politically incorrect even for Browne himself to consider.

Is Browne a Victim of Political Correctness Himself? 

The foregoing discussion converges in suggesting a single overarching problem with Browne’s otherwise admirable dissection of the nature and effects of political correctness – namely that Browne, although ostensibly an opponent of political correctness, is, in reality, neither immune to the infection nor ever able to effect a full recovery. 

Browne himself observes:

“Political correctness succeeds, like the British Empire, through divide and rule… The politically incorrect often end up appeasing political correctness by condemning fellow travellers” (p37).

This is indeed a characteristic feature of witch-hunts, from Salem to McCarthy, whereby victims were able to partially absolve themselves by ‘outing’ fellow-travellers to be persecuted in their place.

However, although bemoaning this trend, Browne himself provides a prime example of it when, having rightly deplored the treatment of BNP supporters deprived of employment on account of their political views, he nevertheless issues the almost obligatory disclaimer, condemning the party as “odious” (p52).

In doing so, he ironically provides a perfect illustration of the very appeasement of political correctness that he himself has identified as central to its power.

Similarly, it is notable that, in his discussion of the suppression of politically incorrect facts and theories, Browne fails to address any of the most incendiary of these, such as those that resulted in death threats to the likes of Jensen, Pizzey and Steinmetz.

After all, to discuss the really taboo topics would not only bring upon him even greater opprobrium than that which he already faced, but also likely deny him a platform (or at least a mainstream platform) in which to express his views altogether. 

Browne therefore provides his ultimate proof of the power of political correctness, not through the topics he addresses, but rather through those he conspicuously avoids. 

In failing to address these issues, whether out of fear of the consequences or out of genuine ignorance of the facts due to the media blackout on their discussion, Browne provides the definitive proof of his own fundamental thesis, namely that political correctness indeed corrupts public debate and subverts free speech.

Endnotes

[1] After the resulting outcry, Gopal insisted she stood by her tweets, which, she claimed, “were very clearly speaking to a structure and ideology, not about people”, something actually not at all clear from her phraseology, and arguably inconsistent with it, given that, save in a metaphoric sense, it is only people who have, and lose, “lives”, not institutions or ideologies, and indeed only people, not institutions or ideologies, who can properly be described as “white”.
At best, her tweet was incendiary and grossly irresponsible in a time of increasing, sometimes overtly genocidal, anti-white animosity, rhetoric, violence and rioting. At worst, it is perhaps not altogether paranoid to compare it to the sort of dehumanizing racist rhetoric that has historically often served as a precursor to genocide.
Thus, far-right neo-fascist philosopher Greg Johnson points out:

“When the Soviets spoke of ‘eliminating the kulaks as a class’, that was simply a euphemism for mass murder” (Johnson 2018: p21).

Similarly, it is notable that not even the Nazis talked openly about the mass killing of the Jews, even when this process was already underway. Instead, they employed such coded euphemisms as ‘resettlement in the East’ and ‘the Final Solution to the Jewish Question’.
In this light, it is notable that those leftists like Noel Ignatiev who talk of “abolishing the white race”, but insist they are only talking of deconstructing the concept of ‘whiteness’, which is, they argue, a social construct, strangely never talk about ‘abolishing the black race’, or indeed any race other than whites. This is so even though, according to their own ideology, all racial categories are mere social constructs with no real basis in biology, invented to justify oppression, slavery, colonialism and other such malign and supposedly uniquely western practices, and hence presumably all similarly artificial and malignant.

[2] Thus, according to the sort of evolutionary epistemology championed by, among others, Karl Popper, it is only if different theories are tested and subjected to falsification that we are able to assess their merits and thereby choose between them, and scientific knowledge is able to progress. If some theories are simply deemed beyond the pale a priori, then clearly this process of testing and falsification cannot properly occur.

[3] The book in which Horowitz wrote these words was published in 2003. Yet today, some seventeen years later, “the era of the progressive witch-hunt” is, by Horowitz’s reckoning, approaching its fourth decade, and, far from “abating”, seems recently to have gone into overdrive.

[4] Discrimination against men in the provision of insurance policies remains legal in most jurisdictions (e.g. the USA). However, sex discrimination in the provision of insurance policies was belatedly outlawed throughout the European Union at the end of 2012, due to a ruling of the European Court of Justice. This was many years after other forms of sex discrimination had been outlawed in most member-states.
For example, in the UK, most other forms of gender discrimination were outlawed almost forty years previously under the 1975 Sex Discrimination Act. However, section 45 of this Act explicitly exempted insurance companies from liability for sex discrimination if they could show that the discriminatory practice they employed was based on actuarial data and was “reasonable”.
Yet actuarial data could also be employed to justify other forms of discrimination, such as employers deciding not to employ women of childbearing age. However, this remained unlawful.
This exemption was preserved by Section 22 of Part 5 of Schedule 3 of the new Equality Act 2010. As a result, as recently as 2010 insurance providers routinely charged young male drivers double the premiums demanded of young female drivers.
Yet, curiously, the only circumstance in which insurance providers were barred from discriminating on the grounds of sex was where the differences resulted from the costs associated with pregnancy or with a woman’s having given birth, under section 22(3)(d) of Schedule 3 – in other words, the only readily apparent circumstance in which insurers might be expected to discriminate against women rather than men!
Interestingly, even after the ECJ ruling, there is evidence that indirect discrimination against males continues, simply by using occupation as a marker for gender.

[5] It is difficult to muster much sympathy for the feminists for at least three reasons. First, the central tenet of transgender ideology, namely the denial of the reality of biological sex, is itself a direct inheritance from feminism.
Thus, feminists have long contended that there are few if any innate biological differences between the sexes in psychology and behaviour and that instead, to use a favourite phrase of feminists, sociologists and other such ‘professional damned fools’, such differences in psychology and behaviour as are observed are entirely ‘socially constructed’ in origin. From this absurd position, it is surely only one step further to claim that a person of one sex can unilaterally declare himself to be of the other sex, will henceforth, and even retroactively, be of whatever sex she/he/they/‘ze’/‘xe’/etc. declares, and should thereafter be referred to and treated as such.
Indeed, if biological sex differences are really as trivial and next to nonexistent as the feminists have so often and so loudly claimed, then this raises the question as to why a person should not be able to unilaterally declare themselves as of the opposite sex to that which they were arbitrarily assigned at birth. Transgender ideology is then the logical conclusion, or perhaps the reductio ad absurdum, of feminist sex denial. In short, the feminists have only themselves to blame.
Second, the feminist TERFs who complain so loudly and incessantly about so-called ‘trans women’ invading ‘female only spaces’ are often the exact same feminists, or at least heirs to those exact same feminists, who, in a previous generation, had loudly and incessantly sought entry for women into what were previously ‘male only spaces’ (e.g. golf clubs, gentlemen’s clubs). Thus, the opening up of so-called ‘female only spaces’ to biological males is arguably the logical conclusion of feminist campaigning and rhetoric, and the opposition of many feminists to this development illustrates only their logical inconsistency, double-standards and hypocrisy.
The final reason that one should not waste one’s sympathy on feminist TERFs who find themselves ostracized, persecuted and sometimes cancelled by the transgender lobby is that the feminists, including many of those subsequently demonized as ‘TERFs’, have themselves been responsible for exactly the same sort of persecution and intolerance of which they now find themselves the victims, namely the persecution and demonization of anyone who questions the central tenets of feminism, including those who question sex denialism, prominent victims having included Lawrence Summers, James Damore, Suzanne Steinmetz, Erin Pizzey and Neil Lyndon, among countless others.

[6] Actually, the term fascist is sometimes employed in this way by conservatives as well, as when they refer to certain forms of Islamic fundamentalism as Islamofascism, or indeed when they refer to the stifling of debate, and of freedom of expression, by leftists (i.e. political correctness itself) as a form of fascism.

[7] This use of the phrase ‘anti-Semitism’ in the context of criticism of Israel’s policies towards the Palestinians is ironic, at least from a pedantic etymological perspective, since the Palestinian people actually have a rather stronger claim to being a Semitic people, in both a racial and a linguistic sense, than do either Ashkenazi or Sephardi (if not Mizrahi) Jews.

[8] Of course, at the time Browne wrote these words in 2006, his proposal that, if sport is to be truly non-discriminatory as to gender, then men should be allowed to enter women’s events was nothing more than a hypothetical thought experiment. It was not a serious proposal, but rather a reductio ad absurdum to illustrate what the feminist rhetoric of non-discrimination in sports would actually look like if taken to its logical conclusion. Now, of course, it has become a hilarious reality, with biologically male transgender athletes entering and outcompeting women in women’s sporting events.
Feminists have, of course, been the first to cry foul. However, they really have only themselves to blame. As Browne argues, if sports is really to be non-discriminatory as between men and women, then presumably men should indeed have a right to enter women’s athletic events. Indeed, the entire rhetoric of transgender ideology is based on the feminist claim that sex (or ‘gender’ to use the preferred feminist term) is a social construct with no basis in biology.

[9] Actually, contrary to what Browne says, international aid may sometimes be partially successful in alleviating poverty. For example, the Marshall Plan for post-WWII Europe is sometimes credited as a success story, though some economists disagree. The success, or otherwise, of foreign aid seems, then, to depend, at least in part, on the identity of the recipients.

[10] Relatedly, it is surely no accident that the two sub-Saharan African countries that, until relatively recently, remained under white rule, namely South Africa and Rhodesia (now Zimbabwe), at that time enjoyed some of the highest living-standards in Africa, with Rhodesia famously being described as ‘the breadbasket of Africa’ and South Africa long regarded as the only ‘developed economy’ in the entire continent during the apartheid era.
Since the transition to black majority rule, however, the decline in both countries, especially in the sphere of law and order, has been dramatic and, given the experience elsewhere in post-colonial Africa, wholly predictable. Thus, South Africa, regarded as the only ‘developed economy’ in Africa during the heyday of apartheid in the 1960s, is now, in the post-apartheid era, and despite the lifting of economic sanctions, usually classed as a ‘developing economy’. This suggests that the economy of South Africa is indeed ‘developing’ in some sense; it just happens to be ‘developing’ in an altogether undesirable direction.
Interestingly, a few studies have investigated the relationship between a history of European colonization and economic development, most, but not all, of which seem to have found that a history of colonialism by European powers is actually associated with increased economic development. For example, Easterly and Levine (2016) found that a history of European colonization was associated with increased levels of economic development; Grier (1999) similarly found that, among former colonies, the duration of the period of colonial rule was positively associated with greater levels of economic growth; and Feyrer & Sacerdote (2009) found that, among islands, there is a robust positive correlation between years spent as a European colony and present-day GDP.
However, interestingly and directly contrary to what I have claimed here, Bertocchi & Canova (2002), in a study restricted to economies on the African continent, purported to find an inverse correlation between degree of European colonial penetration and economic growth.

[11] For more on this plausible but incendiary theory, see IQ and the Wealth of Nations by Richard Lynn and Tatu Vanhanen and Understanding Human History by Michael Hart.

John Gray’s ‘Straw Dogs’: Unrelenting Pessimism Has Never Been So Invigorating

‘Straw Dogs: Thoughts on Humans and Other Animals’, by John Gray, Granta Books, 2003.

The religious impulse, John Gray argues in a later work elaborating on the themes first set out in ‘Straw Dogs’, is as universal as the sex drive. Like the latter, when repressed, it re-emerges in the form of perversion.[1]

Thus, the Marxist faith in our passage into communism after the revolution represents a perversion of the Christian belief in our passage into heaven after death or Armageddon – the former, communism (i.e. heaven on earth), being quite as unrealistic as the otherworldly, celestial paradise envisaged by Christians, if not more so. 

Marxism is thus, as Edmund Wilson was the first to observe, the opiate of the intellectuals.

What is true of Marxism is also, for Gray, equally true of what he regards as the predominant secular religion of the contemporary West – namely humanism. 

Its secular self-image notwithstanding, humanism is, for Gray, a substitute religion that replaces an irrational faith in an omnipotent god with an even more irrational faith in the omnipotence of Man himself (p38). 

Yet, in doing so, Gray concludes, humanism renounces the one insight that Christianity actually got right – namely the notion that humans are “radically flawed” as captured by the doctrine of original sin.[2]

Progress and Other Delusions

Of course, in its ordinary usage, the term ‘humanism’ is hopelessly broad, pretty much encompassing anyone who is neither, on the one hand, religious nor, on the other, a Nazi.

For his purposes, Gray defines humanism more narrowly, namely as a “belief in progress” (p4).

More specifically, however, he seems to have in mind a belief in the inevitability of social, economic, moral and political progress.

Belief in the inevitability of progress is, he contends, a faith universal across the political spectrum – from neoconservatives who think they can transform Islamic tribal theocracies and Soviet Republics into liberal capitalist democracies, to Marxists who think Islamic tribal theocracies and liberal capitalist democracies alike will themselves ultimately give way to communism.

Gray, however, rejects the notion of any grand narrative arc in human history.

“Looking for meaning in history is like looking for patterns in clouds” (p48).

Scientific Progress and Social Progress 

Although in an early chapter he digresses on the supposed “irrational origins” of western science,[3] Gray does not question the reality of scientific progress.

Instead, what Gray questions is the assumption that social, moral and political progress will inevitably accompany scientific progress. 

Progress in science and technology does not invariably lead to social, moral and political progress. On the contrary, new technologies can readily be enlisted in the service of governmental repression and tyranny. Thus, Gray observes:

“Without the railways, telegraph and poison gas, there could have been no Holocaust” (p14).

Thus, by Gray’s reckoning, “Death camps are as modern as laser surgery” (p173).

Scientific progress is, he observes, unstoppable and self-perpetuating. Thus, if any nation unilaterally renounces modern technology, it will be economically outcompeted, or even militarily conquered, by other nations who harness modern technologies in the service of their economy and military: 

“Any country that renounces technology makes itself prey to those that do not. At best it will fail to achieve the self-sufficiency at which it aims – at worst it will suffer the fate of the Tasmanians” (p178).

However, the same is not true of political, social and moral progress. On the contrary, a nation excessively preoccupied with moral considerations would surely be defeated in war or indeed in economic competition by an enemy willing to cast aside morality for the sake of success.

Thus, Gray concludes:

“Technology is not something that humankind can control. It is an event that has befallen the world” (p14).

Thus, Gray anticipates: 

“Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war” (p123).

This leads him to predict: 

“If one thing about the present century is certain, it is that the power conferred on humanity by new technologies will be used to commit atrocious crimes against it” (p14).

Human Nature

This is because, according to Gray, although technology progresses, human nature itself remains stubbornly intransigent. 

“Though human knowledge will very likely continue to grow and with it human power, the human animal will stay the same: a highly inventive animal that is also one of the most predatory and destructive” (p4).

As a result, Gray concludes:

“The uses of knowledge will always be as shifting and crooked as humans are themselves” (p28).[4]

Thus, in Gray’s view, the fatal flaw in the humanist theory that political progress will inevitably accompany scientific progress is, ironically, its failure to come to grips with one particular sphere of scientific progress – namely progress in the scientific understanding of human nature itself.

Sociobiological theory suggests that humans are innately selfish and nepotistic to an extent incompatible with the utopias envisaged by reformers and revolutionaries.

Evolutionary psychologists like to emphasize how natural selection has paradoxically led to the evolution of cooperation and altruism. They are also at pains to point out that innate psychological mechanisms are responsive to environmental variables and hence amenable to manipulation.

This has led some thinkers to suggest that, even if utopia is forever beyond our grasp, nevertheless society can be improved by social engineering and well-meaning reform (see Peter Singer’s A Darwinian Left: which I have reviewed here).

However, this ignores the fact that the social engineers themselves (e.g. politicians, civil servants) are possessed of the same essentially selfish and nepotistic nature as those whose behaviour they are seeking to guide and manipulate. Therefore, even if they were able to successfully reengineer society, they would do so for their own ends, not those of society or humankind as a whole.

Of course, human nature could itself be altered through genetic engineering or eugenics. However, once again, those charged with doing the work (scientists) and those from whom they take their orders (government, big business) will, at the time their work is undertaken, be possessed of the same nature that it is their intention to improve upon.
 
Therefore, Gray concludes, if human nature itself is remodelled: 

“It will be done haphazardly, as an upshot of struggles in the murky realm where big business, organized crime and the hidden parts of government vie for control” (p6).

It will hence reflect the interests, not of humankind as a whole, but rather of those responsible for undertaking the project.

The Future

In contrast to the optimistic vision of such luminaries as Steven Pinker in The Better Angels of Our Nature and Enlightenment Now and Matt Ridley in his book The Rational Optimist (which I have reviewed here), Gray’s vision of the future is positively dystopian. He foresees a return of resource wars and “wars of scarcity… waged against the world’s modern states by the stateless armies of the militant poor” (p181-2).

This is an inevitable result of a Malthusian trap:

“So long as population grows, progress will consist in labouring to keep up with it. There is only one way that humanity can limit its labours, and that is by limiting its numbers. But limiting human numbers clashes with powerful human needs” (p184).[5]

These “powerful human needs” include, not just the sociobiological imperative to reproduce, but also the interests of various ethnic groups in ensuring their survival and increasing their military and electoral strength (Ibid.). 

“Zero population growth could be enforced only by a global authority with draconian powers and unwavering determination” (p185).

Unfortunately (or perhaps fortunately, depending on your perspective), he concludes: 

“There has never been such a power and never will be” (Ibid.).

Thus, Gray compares the rise in human populations to the temporary “spikes that occur in the numbers of rabbits, house mice and plague rats” (p10). He concludes:

“Humans… like any other plague animal… cannot destroy the earth, but… can easily wreck the environment that sustains them” (p12).

Thus, Gray darkly prophesies, “We may well look back on the twentieth century as a time of peace” (p182).

As Gray points out in his follow-up book: 

“War or revolution… may seem apocalyptic possibilities, but they are only history carrying on as it has always done. What is truly apocalyptic is the belief [of Marx and Fukuyama] that history will come to a stop” (Heresies: Against Progress and Other Illusions: p67).[6]

Morality

While Gray doubts the inevitability of social, political and moral progress, he perhaps does not question sufficiently its reality. 

For example, citing improvements in sanitation and healthcare, he concludes that, although “faith in progress is a superstition”, progress itself “is a fact” (p155).

Yet every society, by definition, views its own moral and political values as superior to those of other societies; otherwise, they would not be its own values. Every society therefore views the recent changes in moral and political values that culminated in its own as a form of moral progress.

However, what constitutes moral, social and political progress is entirely a subjective assessment.

For example, the ancient Romans, transported to our times, would surely accept the superiority of our science and technology and, if they did not, we would outcompete them both economically and militarily and thereby prove it ourselves. 

However, they would view our social, moral and political values as decadent, immoral and misguided and we would have no way of proving them wrong.

In other words, while scientific and technological progress can be proven objectively, what constitutes moral and political progress is a mere matter of opinion.

Gray occasionally hints in this direction (namely, towards moral relativism), declaring in one of his countless quotable aphorisms:

“Ideas of justice are as timeless as fashions in hats” (p103).

He even flirts with outright moral nihilism, describing “values” as “only human needs and the needs of other animals turned into abstractions” (p197), and even venturing, “the idea of morality” may be nothing more than “an ugly superstition” (p90). 
 
However, Gray remains somewhat confused on this point. For example, among his arguments against morality is the observation that:

“Morality has hardly made us better people” (p104).

However, the very meaning of “better people” is itself dependent on a moral judgement. If we reject morality, then there are no grounds for determining if some people are “better” than others and therefore this can hardly be a ground for rejecting morality. 

Free Will

On the issue of free will, Gray is more consistent. Relying on the controversial work of neuroscientist Benjamin Libet, he contends: 

“In nearly all our life willing decides nothing – we cannot wake up or fall asleep, remember or forget our dream, summon or banish our thoughts, by deciding to do so… We just act and there is no actor standing behind what we do” (p69).

Thus, he observes, “Our lives are more like fragmentary dreams than the enactments of conscious selves” (p38) and “Our actual experience is not of freely choosing the way we live but of being driven along by our bodily needs – by fear, hunger and, above all, sex” (p43).

Rejection of free will is, moreover, yet a further reason to reject morality.

Whether one behaves morally or not, and what one regards as the moral way to behave, is, Gray contends, entirely a matter of the circumstances of one’s upbringing (p107-8).[7] Thus, according to Gray, “being good is good luck” and not something for which one deserves credit or blame (p104).

Gray therefore concludes: 

“The fact that we are not autonomous subjects deals a death blow to morality – but it is the only possible ground of ethics” (p112).

Yet, far from our being truly free, Gray contends:

“We spend our lives coping with what comes along” (p70).

However, in expecting humankind to take charge of its own destiny: 

“We insist that mankind can achieve what we cannot: conscious control of its existence” (p38).

Self-Awareness

For Gray, then, what separates us from the remainder of the animal kingdom is not free will, or even consciousness, but rather merely self-awareness.

Yet this, for Gray, is a mixed blessing at best.

After all, it has long been known that musicians and sportsmen often perform best, not when consciously thinking about, or even aware of, the movements and reactions of their hands and bodies, but rather when acting ‘on instinct’ and momentarily lost in what positive psychologists call ‘flow’ or being ‘in the zone’ (p61).

This is a theme Gray returns to in The Soul of the Marionette, where he argues that, in some sense, the puppet is freer, and more unrestrained in his actions, than the puppet-master.

The Gaia Cult

Given the many merits of his book, it is regrettable that Gray has an unfortunate tendency to pontificate about all manner of subjects, many of them far outside his own field of expertise. As a result, almost inevitably, he sometimes gets it completely wrong on certain specific subjects.

A case in point is environmentalist James Lovelock’s Gaia theory, which Gray champions throughout his book. 

According to ‘Gaia Theory’, the planet is analogous to a harmonious self-regulating organism – in danger of being disrupted only by environmental damage wrought by man.

Given his cynical outlook, not to mention his penchant for sociobiology, Gray’s enthusiasm for Gaia is curious.

As Richard Dawkins explains in Unweaving the Rainbow, the adaptation of organisms to their environment, which consists largely of other organisms, may give the superficial appearance of eco-systems as harmonious wholes, as some organisms exploit and hence come to rely on the presence of other organisms in order to survive (Unweaving the Rainbow: p221).

However, a Darwinian perspective suggests that, far from existing in benign harmony, organisms are in a state of continuous competition and conflict. Indeed, it is paradoxically precisely their exploitation of one another that gives the superficial appearance of harmony. 
 
In other words, as Dawkins concludes: 

“Individuals work for Gaia only when it suits them to do so – so why bother to bring Gaia into the discussion” (Unweaving the Rainbow: p225).

Yet, for many of its adherents, Gaia is not so much a testable, falsifiable scientific theory as it is a kind of substitute religion. Thus, Dawkins describes ‘Gaia theory’ as “a cult, almost a religion” (Ibid: p223).

It is therefore better viewed, within Gray’s own theoretical framework, as yet another secular perversion of humanity’s innate religious impulse.

Perhaps, then, Gray’s own curious enthusiasm for this particular pseudo-scientific cult suggests that he is himself no more immune from the religious impulse than those whom he attacks. If so, this, paradoxically, only strengthens his case that the religious impulse is indeed universal and innate.

The Purpose of Philosophy

Gray is himself a philosopher by background. However, he is contemptuous of most of the philosophical tradition that has preceded him. 

Thus, he contends:  

“As commonly practised, philosophy is the attempt to find good reasons for conventional beliefs” (p37).

In former centuries, such conventional beliefs were largely religious dogma. Yet, from the nineteenth century on, they increasingly became political creeds emphasizing human progress, such as Whig historiography and the theories of Marx and Hegel.

Thus, Gray writes:  

“In the Middle Ages, philosophy gave intellectual scaffolding to the Church; in the nineteenth and twentieth centuries it served a myth of progress” (p82).

Today, however, despite the continuing faith in progress that Gray so ably dissects, philosophy has ceased to fulfil even this function and hence abandoned even these dubious raisons d’être.

The result, according to Gray, is that:

“Serving neither religion nor a political faith, philosophy is a subject without a subject-matter; scholasticism without the charm of dogma” (p82).

Yet Gray reserves particular scorn for moral philosophy, which is, according to him, “an exercise in make-believe” (p89) and “very largely a branch of fiction” (p109), albeit one “less realistic in its picture of human life than the average bourgeois novel” (p89), which, he ventures, likely explains why “a philosopher has yet to write a great novel” (p109).

In other words, compared with outright fiction, moral philosophy is simply less realistic. 

Anthropocentrism

Although, at the time ‘Straw Dogs’ was first published, Gray held the title ‘Professor of European Thought’ at the London School of Economics, he is particularly scathing in his comments regarding Western philosophy. 

Thus, like his pessimist precursor Schopenhauer (who is, along with Hume, one of the few Western philosophers whom he mentions without also disparaging), Gray purports to prefer Eastern philosophical traditions.

These and other non-Western religious and philosophical traditions are, he claims, unpolluted by the influence of Christianity and hence view humans as merely another animal, no different from the rest.

I do not have sufficient familiarity with Eastern philosophical traditions to assess this claim. However, I suspect that anthropocentrism and the concomitant belief that humans are somehow special, unique and different from all other organisms is a universal and indeed innate human delusion.

Indeed, paradoxically, it may not even be limited to humans.

Thus, I suspect that, to the extent they were, or are, capable of conceptualizing such a thought, earthworms and rabbits would also conceive of themselves as special and unique over and above all other species in just the same way we do.

Death Before Nirvāna?

Ultimately, however, Gray rejects eastern philosophical and religious traditions too – including Buddhism.

There is no need, he contends, to spend lifetimes striving to achieve nirvāna and the cessation of suffering as the Buddha proposed. On the contrary, he observes, there is no need for any such effort, since: 

“Death brings to everyone the peace Buddha promised only after lifetimes of striving” (p129).

All one needs to do, therefore, is to let nature take its course, or, if one is especially impatient, perhaps hurry things along by suicide or an unhealthy lifestyle.

Aphoristic Style

I generally dislike books written in the sort of pretentious aphoristic style that Gray adopts. In my experience, they generally replace the argumentation necessary to support their conclusions with bad poetry.

Indeed, sometimes the poetic style is so obscurantist that it is difficult even to discern what these conclusions are in the first place.

However, in ‘Straw Dogs’, the aphoristic style seems for once appropriate. This is because Gray’s arguments, though controversial, are actually quite straightforward and require no additional explication.

Indeed, one suspects the inability of earlier thinkers to reach the same conclusions reflects a failure of The Will rather than The Intellect – an unwillingness to face up to and come to terms with the reality of the human condition. 

‘A Saviour to Save Us From Saviours’?

Unlike other works dealing with political themes, Gray does not conclude with a chapter proposing solutions to the problems identified in previous chapters. Instead, his conclusion is as bleak as the pages that precede it.

“At its worst, human life is not tragic, but unmeaning… the soul is broken but life lingers on… what remains is only suffering” (p101).

Personally, however, I found it refreshing that Gray does not attempt to portray himself, in the manner of so many self-important, self-professed saviours of humanity, as some kind of saviour of mankind. On the contrary, his ambitions are altogether more modest.

Moreover, he does not hold our saviours in particularly high esteem but rather seems to regard them as very much part of the problem.

He does, therefore, briefly consider what he refers to as the Buddhist notion that we actually require “A Saviour to Save Us From Saviours”.

Eventually, however, Gray renounces even this role. 

“Humanity takes its saviours too lightly to need saving from them… When it looks to deliverers it is for distraction, not salvation” (p121).

Gray thus reduces our self-important, self-appointed saviours – be they philosophers, religious leaders, self-help gurus or political leaders – to no more than glorified competitors in the entertainment industry.

Distraction as Salvation?

Indeed, for Gray, it is not only saviours who function as a form of distraction for the masses. On the contrary, for Gray, ‘distraction’ is now central to life in the affluent West.

Thus, in the West today, standards of living have improved to such an extent that obesity is now a far greater health problem than starvation, even among the so-called ‘poor’ (indeed, one suspects, especially among the so-called ‘poor’!).

Yet clinical depression is now rapidly expanding into the greatest health problem of all.

Thus, Gray concludes: 

“Economic life is no longer geared chiefly to production… [but rather] to distraction” (p162).

In other words, where once, to acquiesce in their own subjugation, the common people required only bread and circuses, today they seem to demand cake, ice cream, alcohol, soap operas, Playstations, Premiership football and reality TV!

Indeed, Gray views most modern human activity as little more than distraction and escapism. 

“It is not the idle dreamer who escapes from reality. It is practical men and women who turn to a life of action as a refuge from insignificance” (p194).

Indeed, for Gray, even meditation is reduced to a form of escapism: 

“The meditative states that have long been cultivated in Eastern traditions are often described as techniques for heightening consciousness. In fact they are ways of by-passing self-awareness” (p62).

Yet Gray does not disparage escapism as a superficial diversion from serious and worthy matters.

On the contrary, he views distraction, or even escapism, as the key to, if not happiness, then at least the closest we can ever approach to this elusive, and perhaps chimeric, state.

Moreover, the great mass of mankind instinctively recognizes as much:

“Since happiness is unavailable, the mass of mankind seeks pleasure” (p142).

Thus, in a passage which is perhaps the closest Gray comes to self-help advice, he concludes: 

“Fulfilment is found, not in daily life, but in escaping from it” (p141-2).

Perhaps, then, escapism is not such a bad thing, and there is something to be said for sitting around watching TV all day after all.
____________ 

 
By his own thesis then, it is perhaps as a form of ‘Distraction’ that Gray’s own book ought ultimately to be judged. 
 
By this standard, I can only say that, with its unrelenting cynicism and pessimism, ‘Straw Dogs’ distracted me immensely – and, according to the precepts of Gray’s own philosophy, there can surely be no higher praise!

Endnotes

[1] John Gray, Heresies: Against Progress and Other Illusions: p7; p41. 

[2] John Gray, Heresies: Against Progress and Other Illusions: p8; p44. 

[3] John Gray, ‘Straw Dogs’: p20-23.

[4] Interestingly, here Gray seemingly echoes Immanuel Kant who famously wrote:

“From such crooked timber as humankind is made of nothing entirely straight can be made” (Idea for a Universal History with a Cosmopolitan Aim, proposition six).

Yet, elsewhere, Gray is largely dismissive of the celebrated German idealist philosopher. Citing Schopenhauer’s critique, Gray argues that “like most philosophers, Kant worked to shore up the conventional beliefs of his time”, and memorably concludes:

“Kant’s dogmatic slumber may have been disturbed by Hume’s scepticism, but it was not long before he was snoring soundly again” (p42).

[5] Of course, the assumption that human population will continue to grow contradicts the demographic transition model, whereby it is assumed that a decline in fertility inevitably accompanies economic development. However, while it is true that declining fertility has accompanied increasing prosperity in many parts of the world, it is not at all clear why this has occurred. Indeed, from a sociobiological perspective, increases in wealth should lead to an increased reproductive rate, as organisms channel their greater material resources into increased reproductive success, the ultimate currency of natural selection. It is therefore questionable how much faith we should place in the universality of a process whose causes are so little understood. Moreover, the assumption that improved living-standards in the so-called ‘developing world’ will inevitably lead to reductions in fertility presupposes that the so-called ‘developing world’ will indeed ‘develop’ and that living standards will indeed improve, an obviously questionable assumption. Ultimately, the very term ‘developing world’ may turn out to represent a classic case of wishful thinking.

[6] Thus, of the bizarre pseudoscience of cryonics, whereby individuals pay private companies for the service of freezing their brains or whole bodies after death, in the hope that, with future advances in technology, they can later be resurrected, he notes that the ostensible immortality promised by such a procedure is itself dependent on the very immortality of the private companies offering the service, and of the very economic and legal system (including contractual obligations) within which such companies operate.

“If the companies that store the waiting cadavers do not go under in stock market crashes, they will be swept away by war or revolutions” (Heresies: Against Progress and Other Illusions: p67).

[7] Actually, heredity surely also plays a role, as traits such as empathy and agreeableness are partly heritable, as are sociopathy and criminality.

Richard Dawkins’ ‘The Selfish Gene’: Selfish Genes, Selfish Memes and Altruistic Phenotypes

‘The Selfish Gene’, by Richard Dawkins, Oxford University Press, 1976.

Selfish Genes ≠ Selfish Phenotypes

Richard Dawkins’s ‘The Selfish Gene’ is among the most celebrated, but also the most misunderstood, works of popular science.

Thus, among people who have never read the book (and, strangely, a few who apparently have) Dawkins is widely credited with arguing that humans are inherently selfish, that this disposition is innate and inevitable, and even, in some versions, that behaving selfishly is somehow justified by our biological programming, the titular ‘Selfish Gene’ being widely misinterpreted as referring to a gene that causes us to behave selfishly.

Actually, Dawkins is not concerned, either directly or primarily, with humans at all.

Indeed, he professes to be “not really very directly interested in man”, whom he dismisses as “a rather aberrant species” and hence peripheral to his own interest, namely how evolution has shaped the bodies and especially the behaviour of organisms in general (Dawkins 1981: p556).

‘The Selfish Gene’ is then, unusually, if not uniquely, for a bestselling work of popular science, a work, not of human biology nor even of non-human zoology, ethology or natural history, but rather of theoretical biology.

Moreover, in referring to genes as ‘selfish’, Dawkins has in mind not a trait that genes encode in the organisms they create, but rather a trait of the genes themselves.

In other words, individual genes are themselves conceived of as ‘selfish’ (in a metaphoric sense), in so far as they have evolved by natural selection to selfishly promote their own survival and replication by creating organisms designed to achieve this end.

Indeed, ironically, as Dawkins is at pains to emphasise, selfishness at the genetic level can actually result in altruism at the level of the organism or phenotype.

This is because, where altruism is directed towards biological kin, such altruism can facilitate the replication of genes shared among relatives by virtue of their common descent. This is referred to as kin selection or inclusive fitness theory and is one of the central themes of Dawkins’ book.
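
Though Dawkins himself keeps the mathematics offstage, the underlying logic is captured compactly by what has come to be known as Hamilton’s rule (Hamilton 1964a; 1964b): a gene predisposing its bearer towards altruism will spread whenever

rb > c

where r is the coefficient of relatedness between altruist and beneficiary, b is the reproductive benefit conferred on the beneficiary, and c is the reproductive cost borne by the altruist. Since full siblings share, on average, half their genes by recent common descent (r = ½), while first cousins share only one eighth (r = ⅛), J.B.S. Haldane’s famous quip that he would willingly lay down his life for “two brothers or eight cousins” follows directly from the rule.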

Yet, despite this, Dawkins still seems to see organisms themselves, humans very much included, as fundamentally selfish – albeit a selfishness tempered by a large dose of nepotism.

Thus, in his opening paragraphs no less, he cautions:

“If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from our biological nature. Let us try to teach generosity and altruism, because we are born selfish” (p3).

The Various Editions

In later editions of his book, namely those published since 1989, Dawkins tempers this rather cynical view of human and animal behaviour by the addition of a new chapter – Chapter 12, titled ‘Nice Guys Finish First’.

This new chapter deals with the subject of reciprocal altruism, a topic he had actually already discussed earlier, together with the related, but distinct, phenomenon of mutualism,[1] in Chapter 10 (entitled ‘You Scratch My Back, I’ll Ride on Yours’).

In this additional chapter, he essentially summarizes the work of political scientist Robert Axelrod, as set out in Axelrod’s own book The Evolution of Co-Operation. This deals with evolutionary game theory, specifically the iterated prisoner’s dilemma, and the circumstances in which a cooperative strategy can, by cooperating only with those who have a history of reciprocating, survive, prosper and evolve, and, in the long term, ultimately outcompete and hence displace strategies that maximize only short-term self-interest.
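
Axelrod’s central finding is easily reproduced. What follows is a minimal sketch in Python (purely illustrative: the payoff values are the standard ones used in Axelrod’s tournaments, but the code itself, and its function names, are mine, not Axelrod’s or Dawkins’) showing why the reciprocating strategy ‘tit-for-tat’ loses narrowly to a pure defector head-to-head, yet prospers overall, because reciprocators score so heavily when paired with one another:

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate on the first round, then simply copy the opponent's last move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    # Maximize short-term self-interest on every single round.
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    # Play the iterated prisoner's dilemma and return both players' totals.
    history_a, history_b = [], []  # each records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))    # (199, 204): defection 'wins' narrowly...
print(play(tit_for_tat, tit_for_tat))      # (600, 600): ...but reciprocators thrive together
print(play(always_defect, always_defect))  # (200, 200): while defectors stagnate

In this toy setting, then, paired reciprocators rack up 600 points apiece, while defectors never much exceed 200 – which is precisely the sense in which, to borrow the phrase Dawkins takes for his chapter title, nice guys finish first.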

Post-1989 editions also include another new chapter titled ‘The Long Reach of the Gene’ (Chapter 13).

If, in Chapter 12, the first additional chapter, Dawkins essentially summarised the contents of Axelrod’s book, The Evolution of Cooperation, then, in Chapter 13, he summarizes his own book, The Extended Phenotype.

In addition to these two whole new chapters, Dawkins also added extensive endnotes to post-1989 editions.

These endnotes clarify various misunderstandings arising from how he explained himself in the original version, defend Dawkins against some criticisms levelled at certain passages of the book, and explain how the science has progressed in the years since first publication, including identifying things he and other biologists got wrong.

With still more recent editions, the content of ‘The Selfish Gene’ has burgeoned still further. Thus, the 30th Anniversary Edition boasts only a new introduction; the 40th Anniversary Edition, published in 2016, boasts a new Epilogue too. Meanwhile, the latest, so-called Extended Selfish Gene boasts, in addition to all this, two whole new chapters.

Actually, these two new chapters are not all that new, being lifted wholesale from, once again, The Extended Phenotype, a work whose contents Dawkins has already, as we have seen, summarized in Chapter 13 (‘The Long Reach of the Gene’), itself an earlier addition to the book’s seemingly ever expanding contents list.

The decision not to entirely rewrite ‘The Selfish Gene’ was apparently that of Dawkins’ publisher, Oxford University Press.

This was probably the right decision. After all, ‘The Selfish Gene’ is not a mere undergraduate textbook, in need of revision every few years in order to keep up-to-date with the latest published research.

Rather, it was a landmark work of popular science, and indeed of theoretical biology, that introduced a new approach to understanding the evolution of behaviour and physiology to a wider readership, composed of biologists and non-biologists alike, and it deserves to stand in its original form as a milestone in the history of science.

However, while new introductions and epilogues are standard fare when republishing a classic work several years after first publication, the addition of four (or two, depending on the edition) whole new chapters strikes me as less readily defensible.

For one thing, they distort the structure of the book and, though interesting in and of themselves, have always read to me as if they had been tacked on at the end as an afterthought – as indeed they have.

The book certainly reads best, in a purely literary sense, in its original form (i.e. pre-1989 editions), where Dawkins concludes with an optimistic, if fallacious, literary flourish (see below).

Moreover, these additional chapters reek of a shameless marketing strategy, designed to deceive new readers into paying the full asking price for a new edition, rather than buying a cheaper second-hand copy or just keeping their old one.

This is especially blatant in respect of the book’s latest incarnation, The Extended Selfish Gene, which, according to the information provided on Oxford University Press’s own website, was released only three months after the previous 40th Anniversary Edition, yet includes two additional chapters.

One frankly expects better from so prestigious a publisher as Oxford University Press, and indeed from so celebrated a biologist and science writer as Richard Dawkins, especially as I suspect neither is especially short of money.

If I were advising someone who has never read the book on which edition to buy, I would probably recommend a second-hand copy of any post-1989 edition, since these can now be picked up very cheaply and include the additional endnotes, which I personally often found very interesting.

On the other hand, if you want to read three additional chapters either from or about The Extended Phenotype, then you are probably best advised to buy, instead, well… The Extended Phenotype – as this too is now a rather old book of which, as with ‘The Selfish Gene’, second-hand copies can be picked up very cheaply.

The ‘Gene’s-Eye-View’ of Evolution

The Selfish Gene is a seminal work in the history of biology primarily because Dawkins takes the so-called gene’s-eye-view of evolution to its logical conclusion. To this extent, contrary to popular opinion, Dawkins’ exposition is not merely a popularization, but actually breaks new ground theoretically.

Thus, John Maynard Smith famously talked of kin selection by analogy with ‘group selection’ (Maynard Smith 1964). However, William Hamilton, who formulated the theory underlying these concepts, always disliked the term ‘kin selection’ and talked instead of the direct, indirect and inclusive fitness of organisms (Hamilton 1964a; 1964b).

However, Dawkins takes this line of thinking to its logical conclusion by looking – not at the fitness or reproductive success of organisms or phenotypes – but rather at the success in self-replication of genes themselves.

Thus, although he stridently rejects group selection, Dawkins replaces this, not with the familiar individual-level selection of classical Darwinism, but rather with a new focus on selection at the level of the gene itself.

Abstract Animals?

Much of the interest, and no little of the controversy, arising from ‘The Selfish Gene’ concerned, of course, the potential application of its theory to humans. However, in the book itself, humans, whom, as mentioned above, Dawkins dismisses as a “rather aberrant species” in which he professes to be “not really very directly interested” (Dawkins 1981: p556), are actually mentioned only occasionally and briefly.

Indeed, most of the discussion is purely theoretical. Even the behaviour of non-human animals is described only for illustrative purposes, and even these illustrative examples often involve simplified hypothetical creatures rather than descriptions of the behaviour of real organisms.

For example, he illustrates his discussion of the relative pros and cons of either fighting or submitting in conflicts over access to resources by reference to ‘hawks’ and ‘doves’ – but is quick to acknowledge that these are hypothetical and metaphoric creatures, with no connection to the actual bird species after whom they are named:

“The names refer to conventional human usage and have no connection with the habits of the birds from whom the names are derived: doves are in fact rather aggressive birds” (p70).

Indeed, even Dawkins’ titular “selfish genes” are rather abstract and theoretical entities. Certainly, the actual chemical composition and structure of DNA is of only peripheral interest to him.

Indeed, often he talks of “replicators” rather than “genes” and is at pains to point out that selection can occur in respect of any entity capable of replication and mutation, not just DNA or RNA. (Hence his introduction of the concept of memes: see below).

Moreover, Dawkins uses the word ‘gene’ in a somewhat different sense to the way the word is employed by most other biologists. Thus, following George C. Williams in Adaptation and Natural Selection, he defines a “gene” as:

“Any portion of chromosomal material that potentially lasts for enough generations to serve as a unit of natural selection” (p28).

This, of course, makes his claim that genes are the principal unit of selection something approaching a tautology or circular argument.

Sexual Selection in Humans?

Where Dawkins does mention humans, it is often to point out the extent to which this “rather aberrant species” apparently conspicuously fails to conform to the predictions of selfish-gene theory.

For example, at the end of his chapter on sexual selection (Chapter 9, titled, “Battle of the Sexes”) he observes that, in contrast to most other species, among humans, at least in the West, it seems to be females who are most active in using physical appearance as a means of attracting mates:

“One feature of our own society that seems decidedly anomalous is the matter of sexual advertisement… It is strongly to be expected on evolutionary grounds that where the sexes differ, it should be the males that advertise and the females that are drab… [Yet] there can be no doubt that in our society the equivalent of the peacock’s tail is exhibited by the female, not the male” (p164).

Thus, among most other species, it is males who have evolved more elaborate plumages and other flashy, sexually selected ornaments. In contrast, females of the same species are often comparatively drab in appearance.

Yet, in modern western societies, Dawkins observes, it is more typically women who “paint their faces and glue on false eyelashes” (p164).

Here, it is notable that Dawkins, being neither an historian nor an anthropologist, is careful to restrict his comments to “our own society” and, elsewhere, to “modern western man”.

Thus, one explanation is that it is only our own WEIRD, western societies that are anomalous.

Thus, Matt Ridley, in The Red Queen, proposes that maybe:

“Modern western societies have been in a two-century aberration from which they are just emerging. In Regency England, Louis XIV’s France, medieval Christendom, ancient Greece, or among the Yanomamö, men followed fashion as avidly as women. Men wore bright colours, flowing robes, jewels, rich materials, gorgeous uniforms, and gleaming, decorated armour. The damsels that knights rescued were no more fashionably accoutred than their paramours. Only in Victorian times did the deadly uniformity of the black frock coat and its dismal modern descendant, the grey suit, infect the male sex, and only in this century have women’s hemlines gone up and down like yo-yos” (The Red Queen: p292).

There is an element of truth here. Indeed, the claim is corroborated by Darwin, who observed in The Descent of Man:

“In most, but not all parts of the world, the men are more highly ornamented than the women, and often in a different manner; sometimes, though rarely, the women are hardly ornamented at all” (The Descent of Man).

However, I suspect Ridley’s observation partly reflects a misunderstanding of the different purposes for which men and women use clothing, including bright and elaborate clothing.

Indeed, it rather reminds me of Margaret Mead’s claim that, among the Tschambuli of Papua New Guinea, sex-roles were reversed because, here, she reported, it was men who painted their faces and wore ‘make-up’, not women.

Yet what Mead neglected to mention, or perhaps failed to understand, was that the ‘make-up’ and face-paint that she evidently found so effeminate was actually war-paint that a Tschambuli warrior was only permitted to wear after killing his first enemy warrior, an obviously very male activity (see Homicide: Foundations of Human Behavior: p152).

Darwin himself, incidentally, although alluding to the “highly ornamented” appearance of men of many cultures in the passage from The Descent of Man quoted above, well understood the different purposes of male and female ornamentation, writing in this same work:

“Women are everywhere conscious of the value of their own beauty; and when they have the means, they take more delight in decorating themselves with all sorts of ornaments than do men” (The Descent of Man).

Of course, clothes and makeup are an aspect of behaviour rather than morphology, and thus more directly analogous to, say, the nests (or, more precisely, the bowers) created by male bowerbirds than the tail of the peacock.

However, behaviour is, in principle, no less subject to natural selection (and sexual selection) than is morphology, and therefore the paradox remains.

Moreover, even concentrating our focus exclusively on morphology, the sex difference still seems to remain.

Thus, perhaps the closest thing to a ‘peacock’s tail’ in humans (i.e. a morphological trait designed to attract mates) is a female trait, namely breasts.

Thus, as Desmond Morris first observed, in humans, the female breasts seem to have been co-opted for a role in sexual selection, since, unlike among other mammals, women’s breasts are permanent, from puberty on, not present only during lactation, and composed primarily of fatty tissues, not milk (Møller 1995; Manning et al 1997; Havlíček et al 2016).

In contrast, men possess no obvious equivalent of the ‘peacock’s tail’ (i.e. a trait that has evolved in response to female choice) – though Geoffrey Miller makes a fascinating (but ultimately unconvincing) case that the human brain may represent a product of sexual selection (see The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature).[2]

Interestingly, in an endnote to post-1989 editions of The Selfish Gene, Dawkins himself tentatively speculates that maybe the human penis might represent a sexually-selected ‘fitness indicator’.

Thus, he points out that the human penis is large as compared to that of other primates, yet also lacks a baculum (i.e. penis bone) to facilitate erections. This, he speculates, could mean that the capacity to maintain an erection might represent an honest signal of health, in accordance with Zahavi’s handicap principle (p307-8).

However, it is more likely that the large size, or more specifically the large width, of the human penis reflects instead a response to the increased size of the vagina, which itself increased in size to enable human females to give birth to large-brained, and hence large-headed, infants (see Bowman 2008; Sexual Selection and the Origins of Human Mating Systems: pp61-70).[3]

How then can we make sense of this apparent paradox, whereby, contrary to Bateman’s principle, sexual selection appears to have operated more strongly on women than on men?

For his part, Dawkins himself offers no explanation, merely lamenting:

“What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (p165).

However, in respect of what David Buss calls short-term mating strategies (i.e. casual sex, hook-ups and one night stands), this is certainly not the case.

On the contrary, patterns of everything from prostitution and rape to erotica and pornography consumption confirm that, in respect of short-term ‘commitment’-free casual sex, it remains women who are very much in demand and men who are the ardent pursuers (see The Evolution of Human Sexuality: which I have reviewed here).

Thus, in one study conducted on a University campus, 72% of male students agreed to go to bed with a female stranger who approached them with a request to this effect. In contrast, not a single one of the 96 females approached agreed to the same request from a male questioner (Clark and Hatfield 1989).

(What percentage of the students sued the university for sexual harassment was not revealed.)

However, humans also form long-term pair-bonds to raise children, and, in contrast to males of most other mammalian species, male parents often invest heavily in the offspring of such unions.

Men are therefore expected to be choosier in respect of long-term romantic partners (e.g. wives) than they are in respect of casual sex partners. This may then explain the relatively high levels of reproductive competition engaged in by human females, including high levels of what Dawkins calls ‘sexual advertising’.

Reproductive competition between women may be especially intense in western societies practising what Richard Alexander termed ‘socially-imposed monogamy’.

This refers to societies where there are large differences between males in social status and resource holdings, but where even wealthy males are prohibited by law from marrying multiple women at once.[4]

Here, there may be intense competition between females for exclusive rights to resource-abundant ‘alpha male’ providers (Gaulin and Boster 1990).

Thus, to some extent, the levels of sexual competition engaged in by women in western societies may indeed be higher than in non-western, polygynous societies.

This, then, might explain why females use what Dawkins terms ‘sexual advertising’ to attract long-term mates (i.e. husbands). However, it still fails to explain why males don’t – or, at least, don’t seem to do so to anything like the same degree.

Darwin himself may have come closer than many of his successors to arriving at an answer, observing that:

“Man is more powerful in body and mind than woman, and in the savage state he keeps her in a far more abject state of bondage than does the male of any other animal; therefore it is not surprising that he should have gained the power of selection” (The Descent of Man).

Therefore, in contrast to mating patterns in modern western societies, female choice may actually have played a surprisingly limited role in human evolutionary history, given that, in most pre-modern societies, arranged marriages were, and are, the norm.

Male mating competition may then have taken the form of male-male contest competition (e.g. fighting) rather than displaying to females – i.e. what Darwin called ‘intra-sexual selection’ rather than ‘inter-sexual selection’.

Thus, while men indeed possess no obvious analogue to the peacock’s tail, they do seem to possess traits designed for fighting – namely considerably greater levels of upper-body musculature and violent aggression as compared to women (see Puts 2010).

In other words, human males may not have any obvious ‘peacock’s tail’, but perhaps we do have, if you like, stag’s antlers.

From Genes to Memes

Dawkins’ eleventh chapter, which was, in the original version of the book (i.e. pre-1989 editions), the final chapter, is also the only chapter to focus exclusively on humans.

Entitled ‘Memes: The New Replicators’, it focuses again on the extent to which humans are indeed an “aberrant species”, being subject to cultural as well as biological evolution to a unique degree.

Interestingly, however, Dawkins argues that the principles of natural selection discussed in the preceding chapters of the book can be applied just as usefully to cultural evolution as to biological evolution.

In doing so, he coins the concept of the meme as the cultural unit of selection, the equivalent of the gene, passing between minds in a manner analogous to a virus.

This term has been enormously influential in intellectual discourse and has even passed into popular usage.

The analogy of memes to genes certainly makes for an interesting thought-experiment. However, like any analogy, it can be taken too far.

Certainly ideas can be viewed as spreading between people, and as having various levels of fitness depending on the extent to which they catch on.

Thus, to take one example, Dawkins famously described religions such as Islam and Christianity as ‘viruses of the mind’, which travel between, and infect, human minds in a manner analogous to a virus.

Indeed, proponents of Darwinian medicine contend that pathogens such as the flu and the common cold produce symptoms such as coughing and sneezing precisely because these behaviours promote the spread and replication of the pathogen to new hosts through the bodily fluids thereby expelled.

Likewise, rabies causes dogs and other animals to become aggressive and bite, which in turn facilitates the spread of the rabies virus to new hosts.[5]

By analogy, successful religions are typically those that promote behaviours that facilitate their own spread.

Thus, a religion that commands its followers to convert non-believers, persecute apostates, ‘be fruitful and multiply’ and indoctrinate their offspring with its beliefs is, for obvious reasons, likely to spread faster and have greater longevity than a religious doctrine that commands its adherents to become celibate hermits and teaches that proselytism is a mortal sin.

Accordingly, Christians are admonished by scripture to save souls and preach the gospel among heathens, while Muslims are, in addition, admonished to wage holy war against infidels and persecute apostates.

These behaviours facilitate the spread of Christianity and Islam just as surely as coughing and sneezing promote the spread of the flu.[6]
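
This logic can be made concrete with a toy model. The following minimal Python sketch is my own illustration (nothing of the kind appears in the book), and every rate in it is an arbitrary assumption: it simply projects follower numbers for two hypothetical creeds, one of which promotes its own transmission while the other suppresses it.

```python
# A toy model of 'meme fitness' (illustrative only; all rates are invented).
# A creed grows by births into the faith and by conversion of outsiders,
# and shrinks by attrition (defection, or death without replacement).

def project_followers(initial, birth_rate, conversion_rate, attrition_rate, generations):
    """Project follower counts under simple geometric growth per generation."""
    followers = [float(initial)]
    for _ in range(generations):
        growth = 1 + birth_rate + conversion_rate - attrition_rate
        followers.append(followers[-1] * growth)
    return followers

# A proselytising, pro-natalist creed versus a celibate, non-proselytising one.
expansionist = project_followers(1000, birth_rate=0.03, conversion_rate=0.02,
                                 attrition_rate=0.01, generations=10)
hermitic = project_followers(1000, birth_rate=0.00, conversion_rate=0.00,
                             attrition_rate=0.05, generations=10)

print(f"After 10 generations: {expansionist[-1]:.0f} vs {hermitic[-1]:.0f}")
# After 10 generations: 1480 vs 599 - the self-spreading meme wins out.
```

Crude as it is, the compounding makes the point: even small differences in the extent to which a doctrine promotes its own transmission translate, over generations, into enormous differences in its prevalence.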

Like genes, memes can also be said to mutate, though this occurs not only through random (and not so random) copying errors, but also by deliberate innovation by the human minds they ‘infect’. Memetic mutation, then, is not entirely random.

However, whether this is a useful and theoretically or empirically productive way of conceptualizing cultural change remains to be seen.

Certainly, I doubt whether ‘memetics’, as it is sometimes termed, will ever be a rigorous science comparable to genetics, after which it is named, as some of the concept’s more enthusiastic champions have envisaged. Neither, I suspect, did Dawkins originally intend or envisage it as such, having seemingly coined the idea as something of an afterthought.

At any rate, one of the main factors governing the ‘infectiousness’ or ‘fitness’ of a given meme is the extent to which the human mind is receptive to it – and the human mind is itself a product of biological evolution.

The basis for understanding human behaviour, even cultural behaviour, is therefore how natural selection has shaped the human mind – in other words, evolutionary psychology, not memetics.

Thus, humans will surely have evolved resistance to memes that are contrary to their own genetic interests (e.g. celibacy) as a way of avoiding exploitation and manipulation by third-parties.

For more recent discussion of the status of the meme concept (the ‘meme meme’, if you like) see The Meme Machine; Virus of the Mind; The Selfish Meme; and Darwinizing Culture.

Escaping the Tyranny of Selfish Replicators?

Finally, at least in the original, non-‘extended’ editions of the book, Dawkins concludes ‘The Selfish Gene’ with an optimistic literary flourish, emphasizing once again the alleged uniqueness of the “rather aberrant” human species.[7]

Thus, his final paragraph ends:

“We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on earth, can rebel against the tyranny of the selfish replicators” (p201).

This makes for a dramatic, and optimistic, conclusion. It is also flattering to anthropocentric notions of human uniqueness, and of free will.

Unfortunately, however, it ignores the fact that the “we” who are supposed to be doing the rebelling are ourselves a product of the same process of natural selection and, indeed, of the same selfish replicators against which Dawkins calls on us to rebel. Indeed, even the (alleged) desire to revolt is a product of the same process.[8]

Likewise, in the book’s opening paragraphs, Dawkins proposes:

“Let us try to teach generosity and altruism, because we are born selfish. Let us understand what our selfish genes are up to, because we may then at least have the chance to upset their designs” (p3).

However, this ignores, not only that the “us” who are to do the teaching and who ostensibly wish to instill altruism in others are ourselves the product of this same evolutionary process and these same selfish replicators, but also that the subjects whom we are supposed to indoctrinate with altruism are themselves surely programmed by natural selection to be resistant to any indoctrination or manipulation by third-parties to behave in ways that conflict with their own genetic interests.

In short, the problem with Dawkins’ cop-out Hollywood Ending is that, as anthropologist Vincent Sarich is quoted as observing, Dawkins has himself “spent 214 pages telling us why that cannot be true”. (See also Straw Dogs: Thoughts on Humans and Other Animals: which I have reviewed here).[9]

The preceding 214 pages, however, remain an exciting, eye-opening and stimulating intellectual journey, even over thirty years after their original publication.

__________________________

Endnotes

[1] Mutualism is distinguished from reciprocal altruism by the fact that, in the former, both parties receive an immediate benefit from their cooperation, whereas, in the latter, for one party, the reciprocation is delayed. It is therefore reciprocal altruism that presents the greater problem for evolution, and for evolutionists, because here there is the problem of policing the agreement – i.e. how is evolution to ensure that the immediate beneficiary does indeed reciprocate, rather than simply receiving the benefit without later returning the favour (a version of the free rider problem)? The solution, according to Axelrod, is that, where parties interact repeatedly over time, they come to engage in reciprocal altruism only with other parties with a proven track record of reciprocity, or at least without a proven track record of failing to reciprocate.
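
Axelrod’s point can be illustrated with a minimal Python sketch of the iterated prisoner’s dilemma (my own toy version using the standard payoff values, not Axelrod’s actual tournament code): a ‘tit-for-tat’ player extends cooperation only to partners without a track record of failing to reciprocate.

```python
# Payoffs for the row player: mutual cooperation, sucker's payoff,
# temptation to defect, and mutual defection (standard values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_history):
    """Cooperate on the first round; thereafter copy the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    """Never reciprocate: defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds, each strategy seeing only the other's history."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # sustained reciprocity: (30, 30)
print(play(tit_for_tat, always_defect))  # one-off gain, then punishment: (9, 14)
```

The defector gains on the first encounter but forfeits the larger, compounding rewards of sustained reciprocity thereafter – which is precisely why a track record matters.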

[2] Certainly, many male traits are attractive to women (e.g. height, muscularity). However, these also have obvious functional utility, not least in increasing fighting ability, and hence probably have more to do with male-male competition than female choice. In contrast, many sexually-selected traits are positive handicaps to their bearers in all spheres except attracting mates. Indeed, one influential theory of sexual selection claims that it is precisely because they represent a handicap that they serve as an honest indicator of fitness, and hence a reliable index of genetic quality.

[3] Thus, Edwin Bowman writes:

“As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008).

Similarly, in their controversial book Human Sperm Competition: Copulation, Masturbation and Infidelity, Robin Baker and Mark Bellis persuasively contend:

“The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: p171).

In turn, larger vaginas probably select for larger penises in order to fill the vagina (Bowman 2008).

According to Baker and Bellis, this is because the human penis functions as a suction piston, serving to remove the sperm deposited by rival males, as a form of sperm competition – a theory that actually has some experimental support, not least from some hilarious research involving sex toys of differing sizes and shapes (Gallup et al 2003; Gallup and Burch 2004; Goetz et al 2005; see also Why is the Penis Shaped Like That).

Thus, according to this view:

“In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171).

However, even in the absence of sperm competition, Alan Dixson observes:

“In primates and other mammals the length of the erect penis and vaginal length tend to evolve in tandem. Whether or not sperm competition occurs, it is necessary for males to place ejaculates efficiently, so that sperm have the best opportunity to migrate through the cervix and gain access to the higher reaches of the female tract” (Sexual Selection and the Origins of Human Mating Systems: p68).

[4] In natural conditions, it is assumed that, in egalitarian societies, where males have roughly equal resource holdings, they will each attract an equal number of wives (i.e. given an equal sex ratio, one wife for each man). However, in highly socially-stratified societies, where there are large differences in resource holdings between men, it is expected that wealthier males will be able to support, and provide for, multiple wives, and will use their greater resource-holdings for this end, so as to maximize their reproductive success (see here). This is a version of the polygyny threshold model (see Kanazawa and Still 1999).
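
The underlying logic can be put in the form of a toy calculation (my own sketch; the suitors and resource figures are invented purely for illustration): a woman does better as the nth wife of a rich man whenever her share of his resources exceeds everything a poorer monogamous suitor can offer.

```python
# A toy version of the polygyny threshold logic (illustrative figures only):
# a woman joins whichever household offers her the largest per-wife share
# of male resources, taken here as a crude proxy for her expected fitness.

def best_suitor(suitors):
    """Pick the suitor offering the largest per-wife resource share.

    suitors: list of (name, resources, current_wives) tuples.
    """
    return max(suitors, key=lambda s: s[1] / (s[2] + 1))

suitors = [
    ("poor bachelor", 10, 0),    # sole wife: share = 10.0
    ("rich polygynist", 50, 3),  # fourth wife: share = 12.5
]
print(best_suitor(suitors))  # the rich polygynist crosses the threshold
```

Under socially-imposed monogamy, by contrast, the rich man’s second, third and fourth marriages are legally foreclosed, so the same arithmetic instead fuels competition among women for exclusive rights to him, as discussed above.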

[5] There are also pathogens that affect the behaviour of their hosts in more dramatic ways. For example, one parasite, Toxoplasma gondii, when it infects a mouse, reduces the mouse’s aversion to cat urine, which is theorized to increase the risk of its being eaten by a cat, facilitating the reproductive life-cycle of the pathogen at the expense of that of its host. Similarly, the fungus Ophiocordyceps unilateralis turns ants into so-called zombie ants, which willingly leave the safety of their nests, and climb and lock themselves onto a leaf, again in order to facilitate the life cycle of their parasite at the expense of their own. Another parasite, Dicrocoelium dendriticum (aka the lancet liver fluke), also affects the behaviour of the ants whom it infects, causing them to climb to the tip of a blade of grass during daylight hours, increasing the chance they will be eaten by cattle or other grazing animals and so facilitating the next stage of the parasite’s life-history.

[6] In contrast, biologist Richard Alexander, in Darwinism and Human Affairs, cites the Shakers as an example of the opposite type of religion, namely one that, because of its teachings (namely, strict celibacy), largely died out.
In fact, however, the Shakers did not entirely disappear. Rather, a small rump community of Shakers, the Sabbathday Lake Shaker Village, survives to this day, albeit greatly reduced in number and influence. This is presumably because, although the Shakers did not, at least in theory, have children, they did proselytise.
In contrast, any religion which renounced both reproduction and proselytism would presumably never spread beyond its initial founder or founders, and hence never come to the attention of historians, theorists of religion, or anyone else in the first place.

[7] As noted above, this is among the reasons that ‘The Selfish Gene’ works best, in a purely literary sense, in its original incarnation. Later editions have at least two further chapters tagged on at the end, after this dramatic and optimistic literary flourish.

[8] Dawkins is here guilty of a crude dualism. Marxist neuroscientist Steven Rose, in an essay in Alas Poor Darwin (which I have reviewed here and here), has also accused Dawkins of dualism on the basis of this same passage, writing:

“Such a claim to a Cartesian separation of these authors’ [Dawkins and Steven Pinker] minds from their biological constitution and inheritance seems surprising and incompatible with their claimed materialism” (Alas Poor Darwin: Arguments Against Evolutionary Psychology: p262).

Here, Rose may be right, but he is also a self-contradictory hypocrite, since his own views represent an even cruder form of dualism. Thus, in an earlier book, Not in Our Genes: Biology, Ideology, and Human Nature, co-authored with fellow-Marxists Leon Kamin and Richard Lewontin, Rose and his colleagues wrote, in a critique of sociobiological conceptions of a universal human nature:

“Of course there are human universals that are in no sense trivial: humans are bipedal; they have hands that seem to be unique among animals in their capacity for sensitive manipulation and construction of objects; they are capable of speech. The fact that human adults are almost all greater than one meter and less than two meters in height has a profound effect on how they perceive and interact with their environment” (passage extracted in The Study of Human Nature: p314).

Here, it is notable that all the examples of “human universals that are in no sense trivial” given by Rose, Lewontin and Kamin are physiological, not psychological or behavioural. The implication is clear: yes, our bodies have evolved through a process of natural selection, but our brains and behaviour have somehow been exempt from this process. This, of course, is an even cruder form of dualism than that of Dawkins.

As John Tooby and Leda Cosmides observe:

“This division of labor is, therefore, popular: Natural scientists deal with the nonhuman world and the “physical” side of human life, while social scientists are the custodians of human minds, human behavior, and, indeed, the entire human mental, moral, political, social, and cultural world. Thus, both social scientists and natural scientists have been enlisted in what has become a common enterprise: the resurrection of a barely disguised and archaic physical/mental, matter/spirit, nature/human dualism, in place of an integrated scientific monism” (The Adapted Mind: Evolutionary Psychology and the Generation of Culture: p49).

A more consistent and thoroughgoing critique of Dawkins’ dualism is to be found in John Gray’s excellent Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here).

[9] This quotation comes from p176 of Marek Kohn’s The Race Gallery: The Return of Racial Science (London: Vintage, 1996). Unfortunately, Kohn does not give a source for this quotation.

__________________________

References

Bowman EA (2008) Why the human penis is larger than in the great apes. Archives of Sexual Behavior 37(3): 361.

Clark & Hatfield (1989) Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality 2: 39-53.

Dawkins (1981) In defence of selfish genes. Philosophy 56(218): 556-573.

Gallup et al (2003) The human penis as a semen displacement device. Evolution and Human Behavior 24: 277-289.

Gallup & Burch (2004) Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology 2: 12-23.

Gaulin & Boster (1990) Dowry as female competition. American Anthropologist 92(4): 994-1005.

Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: a preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences 38: 749-763.

Hamilton (1964) The genetical evolution of social behaviour I and II. Journal of Theoretical Biology 7: 1-16, 17-52.

Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures. Evolution and Human Behavior 38(2): 217-226.

Kanazawa & Still (1999) Why monogamy? Social Forces 78(1): 25-50.

Manning et al (1997) Breast asymmetry and phenotypic quality in women. Ethology and Sociobiology 18(4): 223-236.

Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success. Ethology and Sociobiology 16(3): 207-219.

Puts (2010) Beauty and the beast: mechanisms of sexual selection in humans. Evolution and Human Behavior 31: 157-175.

Smith (1964) Group selection and kin selection. Nature 201(4924): 1145-1147.