Sarich and Miele’s ‘Race: The Reality of Human Differences’: A Rare Twenty-First Century Hereditarian Take on Race Differences Published by a Mainstream Publisher and Marketed to a General Readership

Vincent Sarich and Frank Miele, Race: The Reality of Human Differences (Boulder, CO: Westview Press, 2004)

First published in 2004, ‘Race: The Reality of Human Differences’ by anthropologist and biochemist Vincent Sarich and science writer Frank Miele is that rarest of things in this age of political correctness – namely, a work of popular science presenting a hereditarian perspective on that most incendiary of topics: the biology of race and of racial differences.

It is refreshing that, even in this age of political correctness, at the dawn of the twenty-first century, a mainstream publisher still had the courage to publish such a work.

I therefore began reading ‘Race: The Reality of Human Differences’ with high expectations, hoping for something approaching an updated, and more accessible, equivalent to John R Baker’s seminal Race (which I have reviewed here).

Unfortunately, however, ‘Race: The Reality of Human Differences’, while it contains much interesting material, is nevertheless, in my view, a disappointment and something of a missed opportunity.

Race and the Law

Despite their subtitle, Sarich and Miele’s primary objective in authoring ‘Race: The Reality of Human Differences’ is, it seems, not to document, or to explain the evolution of, the specific racial differences that exist between populations, but rather to defend the race concept itself.

The latter has been under attack at least since Ashley Montagu’s Man’s Most Dangerous Myth: The Fallacy of Race, first published in 1942, perhaps the first written exposition of race denial.

Thus, Sarich and Miele frame their book as a response to the then-recent PBS documentary Race: The Power of an Illusion, which, like Montagu, also espoused the by-then familiar line that human races do not exist, save as a mere illusion or social construct.

As evidence that, on the contrary, race is indeed a legitimate biological and taxonomic category, Sarich and Miele begin by discussing, not the field of biology, but rather that of law, discussing the recognition accorded the race concept under the American legal system.

They report that, in the USA:

“There is still no legal definition of race; nor… does it appear that the legal system feels the need for one” (p14).

Thus, citing various US legal cases where race of the plaintiff was at issue, Sarich and Miele conclude:

“The most adversarial part of our complex society [i.e. the legal system], not only continues to accept the existence of race, but also relies on the ability of the average individual to sort people into races” (p14).

Moreover, Sarich and Miele argue, not only do the courts recognise the existence of race, they also recognise its ultimate basis in biology.

Thus, in response to the claim that race is a mere social construct, Sarich and Miele cite the recognition the criminal courts accord to the evidence of forensic scientists, who can reliably determine the racial background of a criminal from microscopic DNA fragments (p19-23).

“If race were a mere social construction based upon a few highly visible features, it would have no statistical correlation with the DNA markers that indicate relatedness” (p23).[1]

Indeed, in criminal investigations, Sarich and Miele observe in a later chapter, racial identification can be a literal matter of life and death.

Thus, they refer to the Baton Rouge serial killer investigation, where, in accordance with the popular, but wholly false, notion that serial killers are almost invariably white males, the police initially focussed solely on white suspects, but, after DNA analysis showed that the offender was of predominantly African descent, shifted the focus of their investigation and eventually successfully apprehended the killer, preventing further killings (p238).[2]

Another area where they observe that racial profiling can be literally a matter of life and death is the diagnosis of disease and prescribing of appropriate and effective treatment – since, not only do races differ in the prevalence, and presentation, of different medical conditions, but they also differ in their responsiveness and reactions to different forms of medication. 

However, while folk-taxonomic racial categories do indeed have a basis in real biological differences, they are surely also partly socially-constructed as well.

For example, in the USA, black racial identity, including eligibility for affirmative action programmes, is still largely determined by the same so-called ‘one-drop rule’ that also determined racial categorization during the era of segregation and Jim Crow.

This is the rule whereby a person with any detectable degree of black African ancestry, howsoever small (e.g. Barack Obama, Colin Powell), is classed as ‘African-American’ right alongside a recent immigrant from Africa of unadulterated sub-Saharan African ancestry.

This obviously has far more to do with social and political factors, and with America’s unique racial history, than it does with biology, and hence shows that folk-taxonomic racial categories are indeed partly ‘socially constructed’.[3]

Similarly, the racial category ‘Hispanic’ or ‘Latino’ obviously has only a distant and indirect relationship to race in the biological sense, including as it does persons of varying degrees of European, Native American and also black African ancestry.[4]

It is also unfortunate that, in their discussion of the recognition accorded the race concept by the legal system, Sarich and Miele restrict their discussion entirely to the contemporary US legal system.

In particular, it would be interesting to know how the race of citizens was determined under overtly racialist regimes, such as under the Apartheid regime in South Africa,[5] under the Nuremberg laws in National Socialist Germany,[6] or indeed under Jim Crow laws in the South in the USA itself in the early twentieth century,[7] where the stakes were, of course, so much higher.

Also, given that Sarich and Miele rely extensively in later chapters on an analogy between human races and dog breeds (what they call the “canine comparison”: p198-203; see discussion below), a discussion of the problems encountered in drafting and interpreting so-called breed-specific legislation to control so-called ‘dangerous dog breeds’ would also have been relevant and of interest.[8]

Such legislation, in force in many jurisdictions, restricts the breeding, sale and import of certain breeds (e.g. Pit Bulls, Tosas) and orders their registration, neutering and sometimes even their destruction. It represents, then, the rough canine equivalent of the Nuremberg laws.

A Race Recognition Module?

According to Sarich and Miele, the cross-cultural universality of racial classifications suggests that humans are innately predisposed to sort humans into races.

As evidence, they cite Lawrence Hirschfeld’s finding that, at age three, children already classify people by race, and recognise both the immutable and hereditary nature of racial characteristics, giving priority to race over characteristics such as clothing, uniform or body-type (p25-7; Hirschfeld 1996).[9]

Sarich and Miele go on to also claim:

“The emerging discipline of evolutionary psychology provides further evidence that there is a species-wide module in the human brain that predisposes us to sort the members of our species into groups based on appearance, and to distinguish between ‘us’ and ‘them’” (p31).

However, they cite no source for this claim, either in the main body of the text or in the associated notes for this chapter (p263-4).[10]

Certainly, Pierre van den Berghe and some other sociobiologists have argued that ethnocentrism is innate (see The Ethnic Phenomenon: reviewed here). However, van den Berghe is also emphatic and persuasive in arguing that the same is not true of racism, as such.

Indeed, since the different human races were, until recent technological advances in transportation (e.g. ships, aeroplanes), largely separated from one another by the very oceans, deserts and mountain-ranges that reproductively isolated them from one another and hence permitted their evolution into distinguishable races, it is doubtful human races have been in contact for sufficient time to have evolved a race-classification module.[11]

Moreover, if race differences are indeed real and obvious as Sarich and Miele contend, then there is no need to invoke – or indeed to evolve – a domain-specific module for the purposes of racial classification. Instead, people’s tendency to categorise others into racial groups could simply reflect domain-general mechanisms (i.e. general intelligence) responding to real and obvious differences.[12]

History of the Race Concept

After their opening chapter on ‘Race and the Law’, the authors move on to discussing the history of the race concept and of racial thought in their second chapter, which is titled ‘Race and History’.

Today, it is often claimed by race deniers that the race concept is a recent European invention, devised to provide a justification for such nefarious, but by no means uniquely European, practices as slavery, segregation and colonialism.[13]

In contrast, Sarich and Miele argue that humans have sorted themselves into racial categories ever since physically distinguishable people encountered one another, and that ancient peoples used roughly the same racial categories as nineteenth-century anthropologists and twenty-first century bigots.

Thus, Sarich and Miele assert in the title of one of their subheadings:

“[The concept of] race is as old as history or even prehistory” (p57).

Indeed, according to Sarich and Miele, even ancient African rock paintings distinguish between Pygmies and Capoid Bushmen (p56).

Similarly, they report, the ancient Egyptians showed a keen awareness of racial differences in their artwork.

This is perhaps unsurprising since the ancient Egyptians’ core territory was located in a region where Caucasoid North Africans came into contact with black Africans from south of the Sahara through the Nile Valley, unlike in most other parts of North Africa, where the Sahara Desert represented a largely insurmountable barrier to population movement.

While not directly addressing the controversial question of the racial affinities of the ancient Egyptians, Sarich and Miele report that, in their own artwork:

“The Egyptians were painted red; the Asiatics or Semites yellow; the Southerns or Negroes, black; and the Libyans, Westerners or Northerners, white, with blue eyes and fair beards” (p33).[14]

Indeed, rather than being purely artistic in intent, Sarich and Miele go further, even suggesting that at least some Egyptian artwork had an explicit taxonomic function:

“[Ancient] Egyptian monuments are not mere ‘portraits’ but an attempt at classification” (p33).

They even refer to what they call “history’s first [recorded] colour bar, forbidding blacks from entering Pharaoh’s domain”, namely an Egyptian stele (i.e. a stone slab functioning as a notice), which other sources describe as having been erected during the reign of Pharaoh Sesostris III (1887-1849 BCE) at Semna near the Second Cataract of the Nile, the inscription of which reads, in part:

“No Negro shall cross this boundary by water or by land, by ship or with his flocks, save for the purpose of trade or to make purchases in some post” (p35).[15]

Sarich and Miele also interpret the famous caste system of India as based ultimately in racial difference, the lighter complexioned invading Indo-Aryans establishing the system to maintain their dominant social position and their racial integrity vis à vis the darker-complexioned indigenous Dravidian populations whom they conquered and subjugated.

Thus, Sarich and Miele claim:

“The Hindi word for caste is varna. It means color (that is, skin color), and it is as old as Indian history itself” (p37).[16]

There is indeed evidence of racial prejudice and notions of racial supremacy in the earliest Hindu texts. For example, in the Rigveda, thought to be the earliest of ancient Hindu texts:

The god of the Aryas, Indra, is described as ‘blowing away with supernatural might from earth and from the heavens the black skin which Indra hates.’ The dark people are called ‘Anasahs’—noseless people—and the account proceeds to tell how Indra ‘slew the flat-nosed barbarians.’ Having conquered the land for the Aryas, Indra decreed that the foe was to be ‘flayed of his black skin’” (Race: The History of an Idea in America: p3-4).[17]

Indeed, higher caste groups have relatively lighter complexions than lower caste groups residing in the same region of India even today (Jazwal 1979; Mishra 2017).

However, most modern Indologists reject the notion that the term ‘varna’ was originally coined in reference to differences in skin colour and instead argue that colour was simply used as a method of classification, or perhaps in reference to clothing.[18]

According to Sarich and Miele, ancient peoples also believed races differed, not only in morphology, but also in psychology and behaviour.

In general, ancient civilizations regarded their own race’s characteristics more favourably than those of other groups. This, Sarich and Miele suggest, reflected, not only ethnocentrism, which is, in all probability, a universal human trait, but also the fact that great civilizations of the sort that leave behind artwork and literature sophisticated enough to permit moderns to ascertain their views on race did indeed tend to be surrounded by less advanced neighbours (p56).

“In the vast majority of cases, their opinions of other peoples, including the ancestors of the Western Europeans who supposedly ‘invented’ the idea of race, are far from flattering, at times matching modern society’s most derogatory stereotypes” (p31).

Thus, Thomas F Gossett, in his book Race: The History of an Idea in America, reports that:

“Historians of the Han Dynasty in the third century B.C. speak of a yellow-haired and green-eyed barbarian people in a distant province ‘who greatly resemble monkeys from whom they are descended’” (Race: The History of an Idea in America: p4).

Indeed, the views expressed by the ancients regarding racial differences, or at least those examples quoted by Sarich and Miele, are also often disturbingly redolent of modern racial stereotypes.

Thus, in ancient Roman and Greek art, Sarich and Miele report:

“Black males are depicted with penises larger than those of white figures” (p41).

Likewise, during the Islamic Golden Age, Sarich and Miele report that:

“Islamic writers… disparaged black Africans as being hypersexual yet also filled with simple piety, and with a natural sense of rhythm” (p53).

Similarly, the Arab polymath Al Masudi is reported to have quoted the Roman physician-philosopher Galen, as claiming blacks possess, among other attributes:

“A long penis and great merriment… [which] dominates the black man because of his defective brain whence also the weakness of his intelligence” (p50).

From these and similar observations, Sarich and Miele conclude:

“European colonizers did not construct race as a justification for slavery but picked up an earlier construction of Islam, which took it from the classical world, which in turn took it from ancient Egypt” (p50).

The only alternative, they suggest, is the obviously implausible suggestion that:

“Each of these civilisations independently ‘constructed’ the same worldview, and that the civilisations of China and India independently ‘constructed’ similar worldviews, even though they were looking at different groups of people” (p50).

There is, of course, another possibility the authors never directly raise, but only hint at – namely, perhaps racial stereotypes remained relatively constant because they reflect actual behavioural differences between races that themselves remained constant simply because they reflect innate biological dispositions that have not changed significantly over historical time.

Race, Religion, Science and Slavery

Sarich and Miele’s next chapter, ‘Anthropology as the Science of Race’, continues their history of racial thought from biblical times into the age of science – and of pseudo-science.

They begin, however, not with science, or even with pseudo-science, but rather with the Christian Bible, which long dominated western thinking on the subject of race, as on so many other subjects.

At the beginning of the chapter, they quote from John Hartung’s controversial essay, Love Thy Neighbour: The Evolution of In-Group Morality, which was first published in the science magazine, Skeptic (p60; Hartung 1995).

However, although the relevant passages appear in quotation marks, neither Hartung himself nor his essay is directly cited, and, were I not already familiar with this essay, I would be none the wiser as to where this series of quotations had actually been taken from.[19]

In the passage quoted, Hartung (who, in addition to being an anaesthesiologist, anthropologist and human sociobiologist known for his pioneering cross-cultural studies of human inheritance patterns, is also something of an amateur (atheist) biblical scholar) argues that Adam, in the Biblical account of creation, is properly to be interpreted, not as the first human, but rather only as the first Jew. The implication, and the source of the confusion, is that, in the genocidal weltanschauung of the Old Testament, non-Jews are, at least according to Hartung, not really to be considered human at all.[20]

This idea seems to have originated, or at least received its first full exposition, with theologian Isaac La Peyrère, whom Sarich and Miele describe only as a “Calvinist”, but who, perhaps not coincidentally, is also widely rumoured to be of Sephardi converso or even crypto-Jewish marrano ancestry.

Thus, Sarich and Miele conclude:

“The door has always been open—and often entered—by any individual or group wanting to confine ‘adam’ to ‘us’ and to exclude ‘them’” (p60).

This leads to the heretical notion of the pre-Adamites, which has also been taken up by such delightfully bonkers racialist religious groups as the Christian Identity movement.[21]

However, mainstream western Christianity always rejected this notion.

Thus, whereas today many leftists associate atheism, the Enlightenment and secularism with anti-racist views, historically there was no such association.

On the contrary, Sarich and Miele emphasize, it was actually polygenism – namely, the belief that the different human races had separate origins, a view that naturally lent itself to racialism – that was associated with religious heresy, free-thinking and the Enlightenment.

In contrast, mainstream Christianity, of virtually all denominations, has always favoured monogenism – namely, the belief that, for all their perceived differences, the various human races nevertheless shared a common origin – as this was perceived as congruent with (the orthodox interpretation of) the Old Testament of the Bible.

Thus, for example, both Voltaire and David Hume identified as polygenists – and, although their experience with and knowledge of black people was surely minimal and almost entirely second-hand, each also expressed distinctly racist views regarding the intellectual capacities of black Africans.

Moreover, although the emerging race science, and cranial measurements, of the nineteenth-century ‘American School’ of anthropology is sometimes credited with lending ideological support to the institution of slavery in the American South, or even as being cynically formulated precisely in order to defend this institution, in fact Southern slaveholders had little if any use for such ideas.

After all, the American South, as well as being a stronghold of slavery, racialism and white supremacist ideology, was also, then as now, the Bible Belt – i.e. a bastion of intense evangelical Protestant Christian fundamentalism.

But the leading American School anthropologists, such as Samuel Morton and Josiah Nott, were all heretical polygenists.

Thus, rather than challenge the orthodox interpretation of the Bible, Southern slaveholders, and their apologists, preferred to defend slavery by invoking, not the emerging secular science of anthropology, but rather Biblical doctrine.

In particular, they sought to justify slavery by reference to the so-called curse of Ham, an idea which derives from Genesis 9:22-25, a very odd passage of the Old Testament (odd even by the standards of the Old Testament), which was almost certainly not originally intended as a reference to black people.[22]

Thus, the authors quote historian William Stanton, who, in his book The Leopard’s Spots: Scientific Attitudes Toward Race in America 1815-59 concludes that, by rejecting polygenism and the craniology of the early American physical anthropologists:

“The South turned its back on [what was by the scientific standards of the time] the only intellectually respectable defense of slavery it could have taken up” (p77).

As for Darwinism, which some creationists also claim was used to buttress slavery, Darwin’s On the Origin of Species was only published in 1859, just a few years before the Emancipation Proclamation of 1863 and the final abolition of slavery in North America and the English-speaking world.[23]

Thus, if Darwinian theory was ever used to justify the institution of slavery, it clearly wasn’t very effective in achieving this end.

Into the ‘Age of Science’ – and of Pseudo-Science

The authors continue their history of racial thinking by tracing the history of the discipline of anthropology, from its beginnings as ‘the science of race’, to its current incarnation as the study of culture (and, to a lesser extent, of human evolution), most of whose practitioners vehemently deny the very biological reality of race, and some of whom deny even the possibility of anthropology being a science.

Giving a personal, human-interest focus to their history, Sarich and Miele in particular focus on three scientific controversies, and personal rivalries, each of which were, they report, at the same time scientific, personal and political (p59-60). These were the disputes between, respectively:

1) Ernst Haeckel and Rudolf Virchow;

2) Franz Boas and Madison Grant; and

3) Ashley Montagu and Carleton Coon.

The first of these rivalries, occurring as it did in Germany in the nineteenth century, is perhaps of least interest to contemporary North American audiences, being the most remote in both time and place.

However, the latter two disputes, occurring as they did in twentieth-century America, are of much greater importance: their outcomes gave rise to, and arguably continue to shape, the current political and scientific consensus on racial matters in America, and indeed the western world, to this day.

Interestingly, these two disputes were not only about race, they were also arguably themselves racial, or at least ethnic, in character.

Thus, perhaps not coincidentally, whereas both Grant and Coon were Old Stock American patrician WASPs, the latter proud to trace his ancestry back among the earliest British settlers of the Thirteen Colonies, both Boas and Montagu were Jewish immigrants, Boas from Germany and Montagu from England.[24]

Therefore, in addition to being personal, political and scientific, these two conflicts were also arguably racial, and ultimately indirectly concerned with the very definition of what it meant to be an ‘American’.

The victory of the Boasians was therefore coincident with, and arguably both heralded and reflected (and perhaps even contributed towards, or was at least retrospectively adopted as a justification for), the displacement of Anglo-Americans as the culturally, socially, economically and politically dominant ethnic group in the USA. It likewise coincided with the increasing opening up of the USA to immigrants of other races and ethnicities, and with the emergence of a new elite, no longer composed exclusively, or even predominantly, of people of any single ethnic background, but increasingly disproportionately Jewish.

Sarich and Miele, to their credit, do not entirely avoid addressing the ethnic dimension to these disputes. Thus, they suggest that Boas and Montagu’s perception of themselves as ethnic outsiders in Anglo-America may have shaped their theories (p89-90).[25]

However, this topic is explored more extensively by Kevin MacDonald in the second chapter of his controversial, anti-Semitic and theoretically flawed The Culture of Critique (which I have reviewed here).

Boas, and his student Montagu, were ultimately to emerge victorious, not so much on account of the strength of their arguments, as on the success of their academic politicking, in particular Boas’s success in training students, including Montagu himself, who would go on to take over the social science departments of universities across America.

Among these students were many figures who were to become even more famous, and arguably more directly influential, than Boas himself, including, not only Montagu, but also Ruth Benedict and, most famous of all, the anthropologically inept Margaret Mead.[26]

Nevertheless, Sarich and Miele trace the current consensus, and sacrosanct dogma, of race-denial ultimately to Boas, whom they credit with effectively inventing anew the modern discipline of anthropology as it exists in America:

It is no exaggeration to say that Franz Boas (1858-1942) remade American anthropology in his own image. Through the influence of his students, Margaret Mead (Coming of Age in Samoa and Sex and Temperament in Three [Primitive] Societies[sic]), Ruth Benedict (Patterns of Culture) and Ashley Montagu (innumerable titles, especially the countless editions of Man’s Most Dangerous Myth) Boas would have more influence on American intellectual thought than Darwin did. For generations hardly anyone graduated an American college without having read at least one of these books” (p86).

Thus, today, Boas is regarded as the father of American anthropology, whereas both Grant and Coon are mostly dismissed (in Coon’s case, unfairly) as pseudo-scientists and racists.

The Legacy of Boas

As to whether the impact of Boas and his disciples was, on balance, a net positive or a net negative, Sarich and Miele are ambivalent:

The cultural determinism of the Boasians served as a useful corrective to the genetic determinism of racial anthropology, emphasizing the variation within races, the overlap between them and the plasticity of human behavior. The price, however, was the divorcing of the science of man from the science of life in general. The evolutionary perspective was abandoned, and anthropology began its slide into the abyss of deconstructionism” (p91).

My own view is more controversial: I have come to believe that the influence of Boas on American anthropology has been almost entirely negative.

Admittedly, the Nordicism of his rival, Grant, was indeed a complete non-starter. After all, civilization actually came quite late to Northern Europe, originating in North Africa, the Middle East and South Asia, and arriving in Northern Europe much later, by way of the Mediterranean region.

However, Nordicism was arguably no more preposterous than the racial egalitarianism that currently prevails as a sacrosanct contemporary dogma, which holds that all races are exactly equal in all abilities, and which, quite apart from being contradicted by the evidence, represents a manifestly improbable outcome of human evolution.

Moreover, Nordicism may have been bad science, but it was at least science – or at least purported to be science – and hence was susceptible to falsification, and was indeed soon decisively falsified by the pre-war and post-war rise of Japan, among other events and scientific findings.

In contrast, as persuasively argued by Kevin MacDonald in The Culture of Critique (which I have reviewed here), Boasian anthropology was not so much a science as an anti-science (not a theory but an “anti-theory”, according to MacDonald: Culture of Critique: p24), because, in its radical cultural determinism and cultural relativism, it rejected any attempt to develop a general theory of societal evolution, or societal differences, as premature, if not inherently misguided.

Instead, the Boasians endlessly emphasized, and celebrated (and indeed sometimes exaggerated and fabricated), “the vast diversity and chaotic minutiae of human behavior”, arguing that such diversity precluded any general theory of social evolution as had formerly been favoured, let alone any purported ranking of societies and cultures (let alone races) as superior or inferior in relation to one another.

“The Boasians argued that general theories of cultural evolution must await a detailed cataloguing of cultural diversity, but in fact no general theories emerged from this body of research in the ensuing half century of its dominance of the profession… Because of its rejection of fundamental scientific activities such as generalization and classification, Boasian anthropology may thus be characterized more as an anti-theory than a theory of human culture” (Culture of Critique: p24).

The result was that behavioural variation between groups, to the extent there was any attempt to explain it at all, was attributed to culture. Yet, as evolutionary psychologist David Buss writes:

“[P]atterns of local within-group similarity and between-group differences are best regarded as phenomena that require explanation. Transforming these differences into an autonomous causal entity called ‘culture’ confuses the phenomena that require explanation with a proper explanation of those phenomena. Attributing such phenomena to culture provides no more explanatory power than attributing them to God, consciousness, learning, socialization, or even evolution, unless the causal processes that are subsumed by these labels are properly described. Labels for phenomena are not proper causal explanations for them” (Evolutionary Psychology: p411).

To attribute all cultural differences simply to culture and conclude that that is an adequate explanation is to imply that all cultural variation is simply random in nature. This amounts to effectively accepting the null hypothesis as true and ruling out a priori any attempt to generate a causal framework for explaining, or making predictions regarding, cultural differences. It therefore amounts, not to science, but to an outright rejection of science, or at least of applying science to human cultural differences, in favour of obscurantism.

Meanwhile, under the influence of postmodernism (i.e. “the abyss of deconstructionism” to which Sarich and Miele refer), much of cultural anthropology has ceased even pretending to be a science. It dismisses all knowledge, science included, as mere disguised ideology, no more or less valid than the religious cosmologies, eschatologies and creation myths of the scientifically and technologically primitive peoples whom anthropologists have traditionally studied, thereby precluding a priori the falsification of postmodernist claims, or indeed any other claims.

Moreover, contrary to popular opinion, the Nordicism of figures such as Grant seems to have been rather less dogmatically held to, both in the scientific community and society at large, than is the contemporary dogma of racial egalitarianism.

Indeed, quite apart from the fact that it was not without eminent critics even in its ostensible late-nineteenth, early-twentieth century heyday (not least Boas himself), the best evidence for this is the speed with which this belief system was abandoned, and subsequently demonized, in the coming decades.

In contrast, even with the findings of population genetics increasing apace, the dogmas of both race denial and racial egalitarianism, while increasingly scientifically indefensible, seemingly remain ever more entrenched in the universities.

Digressions: ‘Molecular Clocks’, Language and Human Evolution

Sarich and Miele’s next chapter, ‘Resolving the Primate Tree’, recounts how the molecular clock method of determining when species (and races) diverged was discovered.

To summarize: Geneticists discovered that they could estimate when two species separated from one another by measuring the extent to which the two species differ in selectively-neutral genetic variation – that is, in those parts of the genome that do not affect an organism’s phenotype in such a way as to affect its fitness. Such regions are not subject to selection pressures and hence accumulate mutations at a roughly uniform rate, thereby serving as a ‘clock’ by which to measure when the species separated from one another.
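The arithmetic behind this idea is simple, and can be sketched as follows (this is my own illustration – the function name, rate and divergence figure are assumptions, not taken from the book): if neutral substitutions accumulate at a constant rate in each lineage, the two lineages diverge from one another at twice that rate, so the observed neutral divergence between them implies an approximate date for their split.

```python
def divergence_time(neutral_divergence, rate_per_site_per_year):
    """Estimate years since two lineages split, given the fraction of
    neutral sites at which they differ and the per-lineage substitution
    rate. Both lineages accumulate changes independently, so observed
    divergence grows at twice the per-lineage rate."""
    return neutral_divergence / (2.0 * rate_per_site_per_year)

# e.g. 1.2% neutral divergence at an assumed rate of 1e-9 substitutions
# per site per year implies a split roughly 6 million years ago
split = divergence_time(0.012, 1e-9)
```

The real method requires calibrating the rate against fossil dates, but the underlying logic is no more complicated than this.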

The following chapter, ‘Homo Sapiens and Its Races’, charts the application of the ‘molecular clock’ method to human evolution, and in particular to the evolution of human races.

The molecular clock method of dating the divergence of species from one another is certainly relevant to the race question, since it allows us to estimate, not only when our ancestors split from those of the chimpanzee, but also when different human races separated from one another. This latter question is, however, somewhat more difficult to answer using this method, because it is complicated by the fact that races can continue to interbreed with one another even after their initial split, whereas species, once they have become separate species, by definition no longer interbreed, though there may be some interbreeding during the process of speciation itself (i.e. while the separate lineages were still only races or populations of the same species).

However, devoting a whole chapter to a narrative describing how the molecular clock methodology was developed seems excessive in a book ostensibly about human race differences, and is surely an unnecessary digression.

Thus, one suspects the attention devoted to this topic by the authors reflects the central role played by one of the book’s co-authors (Vincent Sarich) in the development of this scientific method. This chapter therefore permits Sarich to showcase his scientific credentials and hence lends authority to his later more controversial pronouncements in subsequent chapters.

The following chapter, ‘The Two Miracles that Made Mankind’, is also somewhat off-topic. Here, Sarich and Miele address the question of why it was our own African ancestors who outcompeted and ultimately displaced rival species of hominid.[27]

In answer, they propose, plausibly but not especially originally, that our ancestors outcompeted rival hominids on account of one key evolutionary development in particular – namely, our evolution of a capacity for spoken language.

Defining ‘Race’

At last, in Chapters Seven and Eight, after a hundred and sixty pages and over half of the entire book, the authors address the topic which the book’s title suggested would be its primary focus – namely, the biology of race differences.

The first of these is titled ‘Race and Physical Differences’, while the next is titled ‘Race and Behavior’.

Actually, however, both chapters begin by defending the race concept itself.

Whether the human species is divisible into races ultimately depends on how one defines ‘race’. Arguments as to whether human races exist therefore often degenerate into purely semantic disputes regarding the meaning of the word ‘race’.

For their purposes, Sarich and Miele themselves define ‘races’ as:

“Populations, or groups of populations, within a species, that are separated geographically from other such populations or groups of populations and distinguishable from them on the basis of heritable features” (p207).[28]

There is, of course, an obvious problem with this definition, at least when applied to contemporary human populations – namely, members of different human races are often no longer “separated geographically” from one another, largely due to recent migrations and population movements.

Thus, today, people of many different racial groups can be found in a single city, like, say, London.

However, the key factor is surely, not whether racial groups remain “separated geographically” today, but rather whether they were “separated geographically” during the period during which they evolved into separate races.

To answer this objection, Sarich and Miele’s definition of ‘races’ should be altered accordingly.

Races as Fuzzy Sets

Sarich and Miele protest that other authors have, in effect, defined races out of existence by semantic sophistry, namely by defining the word ‘race’ in such a way as to rule out the possibility of races a priori.

Thus, some proposed definitions demand that, in order to qualify as true ‘races’, populations must have discrete, non-overlapping boundaries, with no racially-mixed, clinal or hybrid populations to blur the boundaries.

However, Sarich and Miele point out, any populations satisfying this criterion would not be ‘races’ at all, but rather entirely separate species, since, as I have discussed previously, it is the question of interfertility and reproductive isolation that defines a species (p209).[29]

In short, as biologist John Baker, in his excellent Race (reviewed here), also pointed out, since ‘race’ is, by very definition, a sub-specific classification, it is inevitable that members of different races will sometimes interbreed with one another and produce mixed, hybrid or clinal populations at their borders, because, if they did not interbreed with one another, then they would not be members of different races but rather of entirely separate species.

Thus, the boundaries between subspecies are invariably blurred or clinal in nature, the phenomenon being so universal that there is even a biological term for it, namely intergradation.

Of course, this means that the dividing line where one race is deemed to begin and another to end will inevitably be blurred. However, Sarich and Miele reject the notion that this means races are purely artificial or a social construction.

“The simple answer to the objection that races are not discrete, blending into one another as they do is this: They’re supposed to blend into one another and categories need not be discrete. It is not for us to impose our cognitive difficulties upon the Nature.” (p211)

Thus, they characterize races as fuzzy sets – which they describe as a recently developed mathematical concept that has nevertheless been “revolutionarily productive” (p209).

By analogy, they discuss our colour perception when observing rainbows, noting:

“Red… shade[s] imperceptibly into orange and orange into yellow but we have no difficulties in agreeing as to where red becomes orange, and orange yellow” (p208-9).
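This rainbow analogy can itself be expressed in fuzzy-set terms. In the toy sketch below (my own illustration; the wavelength boundaries are arbitrary assumptions, not physically exact), membership in a category is a matter of degree between 0 and 1, so a borderline wavelength can belong partly to ‘red’ and partly to ‘orange’ at once:

```python
def triangular_membership(x, left, peak, right):
    """Degree of membership (0..1) in a fuzzy set with a triangular
    profile: zero outside [left, right], rising to 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Two overlapping fuzzy colour bands on the visible spectrum (nm).
def red(nm):    return triangular_membership(nm, 600, 660, 750)
def orange(nm): return triangular_membership(nm, 585, 610, 640)

# At 615 nm, both red(615) and orange(615) lie strictly between 0 and 1:
# the wavelength is partly in both sets, yet 700 nm is unambiguously red.
```

The point is that fuzzy category boundaries do not make the categories themselves arbitrary or useless – a claim Sarich and Miele apply to races, though, as discussed below, the colour analogy is not a perfect one.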

However, this is perhaps an unfortunate analogy. After all, physicists and psychologists are in agreement that different colours, as such, don’t really exist – at least not outside of the human minds that perceive and recognise them.[30]

Instead, the electromagnetic spectrum varies continuously. Colours are imposed on this continuous variation only by the human visual system, as a way of interpreting it.[31]

If racial differences were similarly continuous, then surely it would be inappropriate to divide peoples into racial groups, because wherever one drew the boundary would be entirely arbitrary.[32]

Yet a key point about human races is that, as Sarich and Miele put it:

“[Although] races necessarily grade into one another, but they clearly do not do so evenly” (p209).

In other words, although racial differences are indeed clinal and continuous in nature, the differentiation does not occur at a constant and uniform rate. Instead, there is some clustering and definite if fuzzy boundaries are nevertheless discernible.

As an illustration of such a fuzzy but discernible boundary, Sarich and Miele give the example of the Sahara Desert, which formerly represented, and to some extent still does represent, a relatively impassable obstacle (“a geographic filter”, in Sarich and Miele’s words: p210) that impeded population movement and hence gene flow for millennia.

“The human population densities north and south of the Sahara have long been, and still are, orders of magnitude greater than in the Sahara proper, causing the northern and southern units to have evolved in substantial genetic independence from one another” (p210).

The Sahara hence represented the “ancient boundary” between the racial groups once referred to by anthropologists as the Caucasoid and Negroid races, politically incorrect terms which, according to Sarich and Miele, although unfashionable, nevertheless remain useful (p209-10).

Analogously, anthropologist Stanley Garn reports:

“The high and uninviting mountains that mark the Tibetan-Indian border… have long restricted population exchange to a slow trickle” (Human Races: p15).

Thus, these mountains (the Himalayas and Tibetan Plateau), have traditionally marked the boundary between the Caucasoid and what was once termed the Mongoloid race.[33]

Meanwhile, other geographic barriers were probably even more impassable. For example, oceans almost completely prevented gene flow between the Americas and the Old World for millennia, save across the Bering Strait between sparsely populated Siberia and Alaska, such that Amerindians remained almost completely reproductively isolated from Eurasians and Africans.

Similarly, genetic studies suggest that Australian Aboriginals were genetically isolated from other populations, including neighbouring South-East Asians and Polynesians, for literally thousands of years.

Thus, anthropologist Stanley Garn concludes:

“The facts of geography, the mountain ranges, the deserts and the oceans, have made geographical races by fencing them in” (Human Races: p15).

However, with improved technologies of transportation – planes, ocean-going vessels, other vehicles – such geographic boundaries are becoming increasingly irrelevant.

Thus, increased geographic mobility, migration, miscegenation and intermarriage mean that the ‘fuzzy’ boundaries of these fuzzy sets are fast becoming even ‘fuzzier’.

Thus, if meaningful boundaries could once be drawn between races, and even if they still can, this may not be the case for very much longer.

However, it is important to emphasize that, even if races didn’t exist, race differences still would. They would just vary on a continuum (or a cline, to use the preferred biological term).

To argue that race differences do not exist simply because they are continuous and clinal in nature would, of course, be to commit a version of the continuum fallacy or sorites paradox, also sometimes called the fallacy of the heap or fallacy of the beard.

Moreover, just as populations differ in, for example, skin colour on a clinal basis, so they could also differ in psychological traits (such as average intelligence and personality) in just the same way.

Thus, paradoxically, the non-existence of human races, even if conceded for the sake of argument, is hardly a definitive, knock-down argument against the existence of innate race differences in intelligence, or indeed other racial differences, even though it is usually presented as such by those who espouse this view.

Whether ‘races’ exist is debatable and depends on precisely how one defines ‘races’—whether race differences exist, however, is surely beyond dispute.

Debunking Diamond

The brilliant and rightly celebrated scientific polymath and popular science writer Jared Diamond, in an influential article published in Discover magazine, formulated another even less persuasive objection to the race concept as applied to humans (Diamond 1994).

Here, Diamond insisted that racial classifications among humans are entirely arbitrary, because different populations can be grouped in different ways depending on which characteristics one uses to group them.

Thus, if we classified races, not by skin colour, but rather by the prevalence of the sickle cell gene or of lactase persistence, then we would, he argues, arrive at very different classifications. For example, he explains:

“Depending on whether we classified ourselves by antimalarial genes, lactase, fingerprints or skin color, we could place Swedes in the same race as (respectively) either Xhosas, Fulani, the Ainu of Japan or Italians” (p164).

Each of these classifications, Diamond insists, would be “equally reasonable and arbitrary” (p164).

To these claims, Sarich and Miele respond:

“Most of us, upon reading these passages, would immediately sense that something was very wrong with it, even though one might have difficulty specifying just what” (p164).

Unfortunately, however, Sarich and Miele are, in my view, not themselves very clear in explaining precisely what is wrong with Diamond’s argument.

Thus, one of Sarich and Miele’s grounds for rejecting this argument is that:

“The proportion of individuals carrying the sickle-cell allele can never go above about 40 percent in any population, nor does the proportion of lactose-competent adults in any population ever approach 100 percent. Thus, on the basis of the sickle-cell gene, there are two groups… of Fulani, one without the allele, the other with it. So those Fulani with the allele would group not with other Fulani, but with Italians with the allele” (p165).

Here their point seems to be that it is not very helpful to classify races by reference to a trait that is not shared by all members of any race, but rather differs only in relative prevalence.

Thus, they conclude:

“The concordance issue… applies within groups as well as between them. Diamond is dismissive of the reality of the Fulani–Xhosas African racial unit because there are characters discordant with it [e.g. lactase persistence]… Well then, one asks in response, what about the Fulani unit itself? After all, exactly the same argument could be made to cast the reality of the category ‘Fulani’ into doubt” (p165).

However, this conclusion seems to represent exactly what many race deniers do indeed argue – namely that all racial and ethnic groups are indeed pure social constructs with no basis in biology, including terms such as ‘Fulani’ and ‘Italian’, which are, they would argue, as biologically meaningless and socially constructed as terms such as ‘Negroid’ and ‘Caucasoid’.[34]

After all, if a legitimate system of racial classification indeed demands that some Fulani tribesmen be grouped in the same race as Italians while others are grouped in an entirely different racial taxon, then this does indeed seem to suggest racial classifications are arbitrary and unhelpful.

Moreover, the fact that there is much within-population variation in genes such as those coding for sickle-cell or lactase persistence surely only confirms Richard Lewontin’s famous argument (see below) that there is far more genetic variation within groups than between them.

Sarich and Miele’s other rejoinder to Diamond is, in my view, more apposite. Unfortunately, however, they do not, in my opinion, explain themselves very well.

They argue that:

“[The absence of the sickle-cell gene] is a meaningless association because the character involved (the lack of the sickle-cell allele) is an ancestral human condition. Associating Swedes and Xhosas thus says only that they are both human, not a particularly profound statement” (p165).

What I think Sarich and Miele are getting at here is that, whereas Diamond proposes to classify groups on the basis of a single characteristic, in this case the sickle-cell gene, most biologists favour a so-called cladistic taxonomy, where organisms are grouped together not on the basis of shared characteristics as such at all, but rather on the basis of shared ancestry.

In other words, organisms are grouped together because they are more closely related to one another (i.e. shared a common ancestor more recently) than they are to other organisms that are put into a different group.

From this perspective, shared characteristics are relevant only to the extent they are (interpreted as) homologous and hence as evidence of shared ancestry. Traits that evolved independently through convergent or parallel evolution (i.e. in response to analogous selection pressures in separate lineages) are irrelevant.

Yet the genes responsible for lactase persistence, one of the traits used by Diamond to classify populations, evolved independently in different populations through gene-culture co-evolution, in concert with the independent development of dairy farming in different parts of the world – an example of convergent evolution that does not suggest relatedness. Indeed, not only did lactase persistence evolve independently in different races, it also seems to have evolved via quite different mutations in different genes (Tishkoff et al 2007).[35]

Moreover, Diamond’s proposed classification is especially preposterous. Even pre-Darwinian systems of taxonomy, which did indeed classify species (and subspecies) on the basis of shared characteristics rather than shared ancestry, nevertheless did so on the basis of a whole suite of traits that clustered together.

In contrast, Diamond proposes to classify races on the basis of a single trait, apparently chosen arbitrarily – or, more likely, to illustrate the point he is attempting to make.

Genetic Differences

In an even more influential and widely-cited paper, Marxist biologist Richard Lewontin claimed that 85% of genetic variation occurred within populations, and that only 6% was accounted for by the differences between races (Lewontin 1972).[36]

The most familiar rejoinder to Lewontin’s argument is that of Edwards, who pointed out that, while Lewontin’s figures are correct when one looks at individual genetic loci, if one looks at multiple loci simultaneously, then one can identify an individual’s race with an accuracy that approaches 100% as more loci are used (Edwards 2003).
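Edwards’ point is purely statistical, and a small simulation can illustrate it (a sketch of my own; the allele frequencies, sample sizes and function names are arbitrary assumptions, not Edwards’ actual data): even when each locus differs only modestly in allele frequency between two populations, aggregating many loci lets a naive likelihood classifier assign individuals to the correct population almost perfectly.

```python
import math
import random

def simulate_accuracy(n_loci, n_individuals=1000, p_a=0.6, p_b=0.4, seed=0):
    """Fraction of simulated diploid individuals assigned to their true
    population by a log-likelihood classifier over n_loci independent
    biallelic loci, where the allele frequency is p_a in population A
    and p_b in population B."""
    rng = random.Random(seed)
    correct = 0
    for true_pop, p in (("A", p_a), ("B", p_b)):
        for _ in range(n_individuals):
            llr = 0.0  # log-likelihood ratio: population A vs population B
            for _ in range(n_loci):
                # diploid genotype: number of copies (0, 1 or 2) of the allele
                copies = (rng.random() < p) + (rng.random() < p)
                llr += copies * math.log(p_a / p_b)
                llr += (2 - copies) * math.log((1 - p_a) / (1 - p_b))
            guess = "A" if llr > 0 else "B"
            correct += (guess == true_pop)
    return correct / (2 * n_individuals)

# Each locus alone is weakly informative; many loci together are decisive.
acc_1 = simulate_accuracy(1)      # roughly 0.6: barely better than chance
acc_100 = simulate_accuracy(100)  # close to 1.0
```

This is why large within-group variation at each individual locus is compatible with near-perfect classification once loci are aggregated – the crux of what is sometimes called ‘Lewontin’s fallacy’.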

However, Edwards’ paper was only published in 2003, just a year before ‘Race: The Reality of Human Differences’ itself came off the presses, so Sarich and Miele may not have been aware of Edwards’ critique at the time they actually wrote the book.[37]

Perhaps for this reason, then, Sarich and Miele respond rather differently to Lewontin’s arguments.

First, they point out:

“[Lewontin’s] analysis omits a third level of variability–the within-individual one. The point is that we are diploid, getting one set of chromosomes from one parent and a second from the other” (p168-9).

Thus Sarich and Miele conclude:

“The… 85 percent will then split half and half (42.5%) between the intra- and inter-individual within-population comparisons. The increase in variability in between-population comparisons is thus 15 percent against the 42.5 percent that is between individual within-population. Thus, 15/46.2 = 32.5 percent, a much more impressive and, more important, more legitimate value than 15 percent.” (p169).

However, this seems to me to be just playing around with numbers in order to confuse and obfuscate.

After all, if, as Lewontin claims, most variation is within-group rather than between-group, then, even if individuals mate endogamously (i.e. with members of the same group as themselves), offspring will show substantial variation between the portions of genes they inherit from each parent.

But, even if some of the variation is therefore within-individual, this doesn’t change the fact that it is also within-group.

Thus, the claim of Lewontin that 85% of genetic variation is within-group remains valid.

Morphological Differences

Sarich and Miele then make what seems to me to be a more valid and important objection to Lewontin’s figures, or at least to the implication he and others have drawn from them, namely that racial differences are insignificant. Again, however, they do not express themselves very clearly.

Their argument seems to be that, if we are concerned with the extent of physiological and psychological differentiation between races, then it actually makes more sense to look directly at morphological differences, rather than genetic differences.

After all, a large proportion of our DNA may be of the nonfunctional, non-coding or ‘junk’ variety, some of which may have little or no effect on an organism’s phenotype.

Thus, in their chapter ‘Resolving the Primate Tree’, Sarich and Miele themselves claim that:

“Most variation and change at the level of DNA and proteins have no functional consequences” (p121; p126).

They conclude:

“Not only is the amount of between-population genetic variation very small by the standards of what we observe in other species… but also… most variation that does exist has no functional, adaptive significance” (p126).

Thus, humans and chimpanzees may share around 98% of each other’s DNA, but this does not necessarily mean that we are 98% identical to chimpanzees in either our morphology, or our psychology and behaviour. The important thing is what the genes in question do, and small numbers of genes can have great effects while others (e.g. non-coding DNA) may do little or nothing.[38]

Indeed, one theory has it that such otherwise nonfunctional biochemical variation may be retained within a population by negative frequency-dependent selection, because different variants, especially when recombined in each new generation by sexual reproduction, confer some degree of protection against infectious pathogens.

This is sometimes referred to as ‘rare allele advantage’, in the context of the ‘Red Queen’ theory of host-parasite co-evolutionary arms races.

Thus, evolutionary psychologists John Tooby and Leda Cosmides explain:

“The more alternative alleles exist at more loci—i.e., the more genetic polymorphism there is—the more sexual recombination produces genetically differentiated offspring, thereby complexifying the series of habitats faced by pathogens. Most pathogens will be adapted to proteins and protein combinations that are common in a population, making individuals with rare alleles less susceptible to parasitism, thereby promoting their fitness. If parasitism is a major selection pressure, then such frequency-dependent selection will be extremely widespread across loci, with incremental advantages accruing to each additional polymorphic locus that varies the host phenotype for a pathogen. This process will build up in populations immense reservoirs of genetic diversity coding for biochemical diversity” (Tooby & Cosmides 1990: p33).

Yet, other than conferring some resistance to fast-evolving pathogens, such “immense reservoirs of genetic diversity coding for biochemical diversity” may have little adaptive or functional significance and have little or no effect on other aspects of an organism’s phenotype.

Lewontin’s figures, though true, are therefore potentially misleading. To see why, behavioural geneticist Glayde Whitney suggested that we “might consider the extent to which humans and macaque monkeys share genes and alleles”. On this basis, he reported:

“If the total genetic diversity of humans plus macaques is given an index of 100 percent, more than half of that diversity will be found in a troop of macaques or in the [then quite racially homogenous] population of Belfast. This does not mean Irishmen differ more from their neighbors than they do from macaques — which is what the Lewontin approach slyly implies” (Whitney 1997).

Anthropologist Peter Frost, in an article for Aporia Magazine critiquing Lewontin’s analysis, or at least the conclusions he and others have drawn from it, cites several other examples where:

“Wild animals… show the same pattern of genes varying much more within than between populations, even when the populations are related species and, sometimes, related genera (a taxonomic category that ranks above species and below family)” (Frost 2023).

However, despite the minimal genetic differentiation between races, different human races do differ from one another morphologically to a significant degree. This much is evident simply from looking at the facial morphology, or bodily statures, of people of different races – and indirectly apparent by observing which races predominate in different athletic events at the Olympics.

Thus, Sarich and Miele point out, when one looks at morphological differences, it is clear that, at least for some traits, such as “skin color, hair form, stature, body build”, within-group variation does not always dwarf between-group variation (p167).

On the contrary, Sarich and Miele observe:

“Group differences can be much greater than the individual differences within them; in, for example, hair from Kenya and Japan, or body shape for the Nuer and Inuit” (p218).

Indeed, in respect of some traits, there may be almost no overlap between groups. For example, excepting sufferers of rare, abnormal and pathological conditions like albinism, even the lightest-complexioned Nigerian is still darker in skin colour than is the darkest indigenous Swede.

If humans differ enough genetically to cause the obvious (and not so obvious) morphological differences between races, differences which are equally obviously genetic in origin, then it necessarily follows that they also differ enough genetically to allow for a similar degree of biological variation in psychological traits, such as personality and intelligence.

That human populations are genetically quite similar to one another indicates, Sarich and Miele concede, that the different races separated and became reproductively isolated from one another only quite recently, such that variation in selectively-neutral DNA has not had sufficient time to accumulate through random mutation and genetic drift.

However, the fact that, within this short period, quite large morphological differences have nevertheless evolved suggests the presence of strong selective pressures selecting for such morphological differentiation.

They cite archaeologist Glynn Isaac as arguing:

“It is the Garden-of-Eden model [i.e. out of Africa theory], not the regional continuity model [i.e. multiregionalism], that makes racial differences more significant functionally… because the amount of time involved in the raciation process is much smaller, but the degree of racial differentiation is the same and, for human morphology, large. The shorter the period of time required to produce a given amount of morphological difference, the more selectively/adaptively/functionally important those differences become” (p212).

Thus, Sarich and Miele conclude:

“So much variation developing in so short a period of time implies, indeed almost requires, functionality; there is no good reason to think that behavior should somehow be exempt from this pattern of functional variability” (p173).

In other words, if different races have been subjected to divergent selection pressures that have led them to diverge morphologically, then these same selection pressures will almost certainly also have led them to psychologically diverge from one another.

Indeed, at least one well-established morphological difference seems to directly imply a corresponding psychological difference – namely, differences in brain size as between races would seem to suggest differences in intelligence, as I have discussed in greater detail both previously and below.

Measuring Morphological Differences

Continuing this theme, Sarich and Miele argue that human racial groups actually differ more from one another morphologically than do many non-human mammals that are regarded as entirely separate species.

Thus, Sarich quotes himself as claiming:

“Racial morphological distances within our species are, on the average, about equal to the distances among species within other genera of mammals. I am not aware of another mammalian species whose constituent races are as strongly marked as they are in ours… except, of course, for dogs” (p170).

I was initially somewhat skeptical of this claim. Certainly, it seems to us that, say, a black African looks very different from an East Asian or a white European. However, this may simply be because, being human, and in close day-to-day contact with humans, we are far more readily attuned to differences between humans than differences between, say, chimpanzees, or wolves, or sheep.[39]

Indeed, there is even evidence that we possess an innate, domain-specific ‘face recognition module’ that evolved to help us to distinguish between different individuals, and which seems to be localized in certain areas of the brain, including the so-called ‘fusiform face area’, which is located in the fusiform gyrus.

Indeed, as I have already noted in an earlier endnote, a commenter on an earlier version of this book review plausibly suggested that our tendency to group individuals by race could represent a by-product of our facial recognition faculty.

However, the claim that the morphological differences between human races are comparable in magnitude to those between separate species of nonhuman organisms is by no means original to Sarich and Miele.

For example, John R Baker makes a similar claim in his excellent book, Race (which I have reviewed here), where he asserts:

“Even typical Nordids and typical Alpinids, both regarded as subraces of a single race (subspecies), the Europid [i.e. Caucasoid], are very much more different from one another in morphological characters—for instance in the shape of the skull—than many species of animals that never interbreed with one another in nature, though their territories overlap” (Race: p97).

Thus, Baker claims:

“Even a trained anatomist would take some time to sort out correctly a mixed collection of the skulls of Asiatic jackals (Canis aureus) and European red foxes (Vulpes vulpes), unless he had made a special study of the osteology of the Canidae; whereas even a little child, without any instruction whatever, could instantly separate the skulls of Eskimids from those of Lappids” (Race: p427).

Indeed, Darwin himself made a not dissimilar claim in The Descent of Man, where he observed:

“If a naturalist, who had never before seen a Negro, Hottentot, Australian, or Mongolian, were to compare them, he would at once perceive that they differed in a multitude of characters, some of slight and some of considerable importance. On enquiry he would find that they were adapted to live under widely different climates, and that they differed somewhat in bodily constitution and mental disposition. If he were then told that hundreds of similar specimens could be brought from the same countries, he would assuredly declare that they were as good species as many to which he had been in the habit of affixing specific names” (The Descent of Man and Selection in Relation to Sex).

However, Sarich and Miele attempt to go one better than both Baker and Darwin – namely, by not merely claiming that human races differ morphologically from one another to a similar or greater extent than many separate species of non-human animal, but also purporting to prove this claim statistically as well.

Thus, relying on “cranial/facial measurements on 29 human populations, 2,500 individuals, 28 measurements… 17 measurements on 347 chimpanzees… and 25 measures on 590 gorillas” (p170), Sarich and Miele reach a dramatic conclusion. Reporting the “percent increases in distance going from within-group to between-group comparisons of individuals”, measured in terms of “the percent difference per size corrected measurement (expressed as standard deviation units)”, they find that a greater percentage of the total variation among humans is found between different human groups than is found between some separate species of non-human primate.

Thus, Sarich and Miele somewhat remarkably conclude:

“Racial morphological distances in our species [are] much greater than any seen among chimpanzees or gorillas, or, on the average, some tenfold greater than those between the sexes” (p172-3).

Interestingly, and consistent with the general rule that Steve Sailer has termed ‘Rushton’s Rule of Three’, whereby blacks and Asians respectively cluster at opposite ends of a racial spectrum for various traits, Sarich and Miele report:

The largest differences in Howells’s sample are found when comparing [black sub-Saharan] Africans with either Asians or Asian-derived (Amerindian) populations” (p172).

Thus, for example, measured in this way, the proportion of the total variation that separates East Asians from African blacks is more than twice that separating chimpanzees from bonobos.

This, however, is perhaps a misleading comparison, since chimpanzees and bonobos are known to be morphologically very similar to one another, to such an extent that, although now recognized as separate species, they were, until quite recently, considered as merely different subspecies of a single species.

Another problem with Sarich and Miele’s conclusion is that, as they themselves report, it relies entirely on “cranial/facial measurements”, and thus it is unclear whether the extent of these differences generalizes to other parts of the body.

Yet, despite this limitation, Sarich and Miele report their results as applying to “racial morphological distances” in general, not just facial and cranial differences.

Finally, Sarich and Miele’s analysis in this part of their book is rather technical.

I feel that the more appropriate place to publish such an important and provocative finding would have been a specialist journal in biological anthropology, where it would, of course, have included a full methodology section and been subject to full peer review before publication.

Domestic Dog Breeds and Human Races

Sarich and Miele argue that the only mammalian species with greater levels of morphological variation between subspecies than humans are domestic dogs.

Thus, psychologist Daniel Freedman, writing in 1979, claimed:

A breed of dog is a construct zoologically and genetically equivalent to a race of man” (Human Sociobiology: p144).

Of course, morphologically, dog breeds differ enormously, far more than human races.

However, the logistical problems of a Chihuahua mounting a mastiff notwithstanding, all breeds are thought to be capable of interbreeding with one another, and also with wild wolves; hence all dog breeds, together with wild wolves, are generally considered by biologists to represent a single species.

Moreover, Sarich and Miele report that genetic differences between dog breeds, and between dogs and wolves, were so slight that, at the time Sarich and Miele were writing, researchers had only just begun to be able to genetically distinguish some dog breeds from others (p185).

Of course, this was written in 2003, and genetic data in the years since then has accumulated at a rapid pace.

Moreover, even then, one suspects that the supposed inability of geneticists to distinguish one dog breed from another reflected, not so much the limited genetic differentiation between breeds, as the fact that, understandably, far fewer resources had been devoted to decoding the canine genome than were devoted to decoding that of humans.

Thus, today, far more data is available on the genetic differences between breeds and these differences have proven, unsurprisingly given the much greater morphological differences between dog breeds as compared to human races, to be much greater than those between human populations.

For example, as I have discussed above, Marxist biologist Richard Lewontin famously showed that, for humans, there is far greater genetic variation within races than between races (Lewontin 1972).

It is sometimes claimed that the same is true for dog breeds. For example, self-styled ‘race realist’ and ‘white advocate’ Jared Taylor, perhaps the closest thing contemporary America has to a white nationalist public intellectual, claims, in a review of Edward Dutton’s Making Sense of Race, that:

People who deny race point out that there is more genetic variation within members of the same race than between races — but that’s true for dog breeds, and not many people think the difference between a terrier and a pug is all in our minds” (Taylor 2021).

Actually, however, Taylor appears to be mistaken.

Admittedly, some early mitochondrial DNA studies did seemingly support this conclusion. Thus, Coppinger and Schneider reported in 1994 that:

Greater mtDNA differences appeared within the single breeds of Doberman pinscher or poodle than between dogs and wolves… To keep the results in perspective, it should be pointed out that there is less mtDNA difference between dogs, wolves and coyotes than there is between the various ethnic groups of human beings, which are recognized as belonging to a single species” (Coppinger & Schneider 1994).

However, while this may be true for mitochondrial DNA, it does not appear to generalize to the canine genome as a whole. Thus, in her article ‘Genetics and the Shape of Dogs’ geneticist Elaine Ostrander, an expert on the genetics of domestic dogs, reports:

Genetic variation between dog breeds is much greater than the variation within breeds. Between-breed variation is estimated at 27.5 percent. By comparison, genetic variation between human populations is only 5.4 percent” (Ostrander 2007).[40]
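For readers curious how such within- versus between-group apportionment figures are arrived at, the underlying logic can be sketched with a toy variance decomposition on a single trait. This is an illustrative sketch only, under simplifying assumptions of my own; it is not the actual method used by Lewontin or Ostrander, who worked with allele frequencies across many loci rather than a single measured trait:

```python
import statistics

def between_group_share(groups):
    """Fraction of total variance attributable to differences between
    group means (a crude, single-trait analogue of the apportionment
    statistics quoted above; toy illustration only)."""
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.fmean(all_values)
    # Between-group sum of squares: each group mean's deviation from
    # the grand mean, weighted by group size.
    ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2
                     for g in groups)
    # Total sum of squares across all individuals.
    ss_total = sum((x - grand_mean) ** 2 for x in all_values)
    return ss_between / ss_total

# Two hypothetical populations measured on one trait, with group means
# differing only modestly relative to the within-group spread:
pop_a = [10.0, 11.0, 12.0, 13.0, 14.0]
pop_b = [12.0, 13.0, 14.0, 15.0, 16.0]
share = between_group_share([pop_a, pop_b])
print(f"Between-group share of variance: {share:.3f}")  # about 0.333
```

On this toy data, only about a third of the total variance lies between the two groups, even though their means clearly differ; a figure like Ostrander’s 5.4 percent for human populations means the within-group spread dwarfs the between-group component to an even greater degree.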

However, the fact that both morphological and genetic differentiation between dog breeds far exceeds that between human races does not necessarily mean that an analogy between dog breeds and human races is entirely misplaced.

All analogies are imperfect, otherwise they would not be analogies, but rather identities (i.e. exactly the same thing).

Indeed, one might argue that dog breeds provide a useful analogy for human races precisely because the differences between dog breeds are so much greater, since this allows us to see the same principles operating but on a much more magnified scale and hence brings them into sharper focus.

Breed and Behaviour

As well as differing morphologically, dog breeds are also thought to differ behaviourally.

Anecdotally, some breeds are said to be affectionate and ‘good with children’, others standoffish, independent, territorial and prone to aggression, either with strangers or with other dogs.

For example, psychologist Daniel Freedman, whose studies of average differences in behaviour among dog breeds, conducted as part of his PhD, and whose later analogous studies of differences in the behaviour of neonates of different races, are both discussed by Sarich and Miele in their book (p203-7), observed:

I had worked with different breeds of dogs and I had been struck by how predictable was the behavior of each breed” (Human Sociobiology: p144).

Freedman’s scientifically rigorous studies of breed differences in behaviour confirmed that at least some such differences are indeed real and seem to have an innate basis.

Thus, studying the behaviours of newborn puppies to minimize the possibility of environmental effects affecting behaviour differences, just as he later studied differences in the behaviour of human neonates, Freedman reports:

The breeds already differed in behavior. Little beagles were irrepressibly friendly from the moment they could detect me, whereas Shetland sheepdogs were most sensitive to a loud voice or the slightest punishment; wire-haired terriers were so tough and aggressive, even as clumsy three-week olds, that I had to wear gloves in playing with them; and, finally, basenjis, barkless dogs originating in central Africa, were aloof and independent” (Human Sociobiology: p145).

Similarly, Hans Eysenck describes a study of differences in behaviour between dog breeds raised under different conditions and then left alone in a room with food they had been instructed not to eat. He reports:

Basenjis, who are natural psychopaths, ate as soon as the trainer had left, regardless of whether they had been brought up in the disciplined or the indulgent manner. Both groups of Shetland sheep dogs, loyal and true to death, refused the food, over the whole period of testing, i.e. eight days! Beagles and fox terriers responded differentially, according to the way they had been brought up; indulged animals were more easily conditioned, and refrained longer from eating. Thus, conditioning has no effect on one group, regardless of upbringing—has a strong effect on another group, regardless of upbringing—and affects two groups differentially, depending on their upbringing” (The IQ Argument: p170).

These differences often reflect the purpose for which the dogs were bred. For example, breeds historically bred for dog fighting (e.g. Staffordshire bull terriers) tend to be aggressive with other dogs, but not necessarily with people; those bred as guard dogs (e.g. mastiffs, Dobermanns) tend to be highly territorial; those bred as companions tend to be sociable and affectionate; while others have been bred to specialize in certain highly specific behaviours at which they excel (e.g. pointers, sheep dogs).

For example, the author of one recent study of behavioural differences among dog breeds interpreted her results thus:

Inhibitory control may be a valued trait in herding dogs, which are required to inhibit their predatory responses. The Border Collie and Australian Shepherd were among the highest-scoring breeds in the cylinder test, indicating high inhibitory control. In contrast, the Malinois and German Shepherd were some of the lowest-scoring breeds. These breeds are often used in working roles requiring high responsiveness, which is often associated with low inhibitory control and high impulsivity. Human-directed behaviour and socio-cognitive abilities may be highly valued in pet dogs and breeds required to work closely with people, such as herding dogs and retrievers. In line with this, the Kelpie, Golden Retriever, Australian Shepherd, and Border Collie spent the largest proportion of their time on human-directed behaviour during the unsolvable task. In contrast, the ability to work independently may be important for various working dogs, such as detection dogs. In our study, the two breeds which were most likely to be completely independent during the unsolvable task (spending 0% of their time on human-directed behaviour) were the German Shepherd and Malinois” (Juntilla et al 2022).

Indeed, the distinct behaviours of different dog breeds have even received statutory recognition, with controversial breed-specific legislation restricting the breeding, sale and import of certain so-called dangerous dog breeds and ordering their registration, neutering and, in some cases, destruction.

Of course, similar legislation restricting the import and breeding, let alone ordering the neutering or destruction, of ‘dangerous human races’ (perhaps defined by reference to differences in crime rates) is currently politically unthinkable.

Breed-specific legislation might therefore be described, as noted above, as the rough canine equivalent of the Nuremberg Laws.

Breed Differences in Intelligence

In addition, just as there are differences between human races in average IQ (see below; see also here, here and especially here), so some studies have suggested that dog breeds differ in average intelligence.

However, there are some difficulties, for these purposes, in measuring, and defining, what constitutes intelligence among domestic dogs.[41]

Since the subject of race differences in intelligence almost always lurks in the background of any discussion of the biology of race, and, since this topic is indeed discussed at some length by Sarich and Miele in a later chapter (and indeed in a later part of this review), it is perhaps worth discussing some of these difficulties and the extent to which they mirror similar controversies regarding how to define and measure human intelligence, especially differences between races.

Thus, research by Stanley Coren, reported in his book, The Intelligence of Dogs, and also widely reported upon in the popular press, purported to rank dog breeds by their intelligence.

However, the research in question, or at least the part reported upon in the media, actually seems to have relied exclusively on measurements of the ability of the different dogs to learn, and obey, new commands from their masters/owners with the minimum of instruction.[42]

Moreover, this ability also seems, in Coren’s own account, to have been assessed on the basis of the anecdotal impressions of dog contest judges, rather than direct quantitative measurement of behaviour.

Thus, the purportedly most intelligent dogs were those able to learn a new command in fewer than five exposures and obey it at least 95 percent of the time, while the purportedly least intelligent were those who required more than 100 repetitions and obeyed only around 30 percent of the time.

An ability to obey commands consistently with a minimum of instruction does indeed require a form and degree of social intelligence – namely the capacity to learn and understand the commands in question.

However, such a means of measurement not only captures a single, quite specific type of intelligence, it also measures another aspect of canine psychology that is not obviously related to intelligence at all – namely, obedience and submissiveness (or their absence, rebelliousness).

This is because complying with commands requires not only the capacity to understand commands, but also the willingness to actually obey them.

Some dogs might conceivably understand the commands of an owner, or at least have the capacity to understand if they put their mind to it, but nevertheless refuse to comply, or even refuse to learn, out of sheer rebelliousness and independent spirit. Most obviously, this might be true of wild wolves which have not been domesticated or even tamed, though it may also be true of dog breeds.[43]

Analogously, when a person engages in a criminal act, we do not generally assume that this is because s/he failed to understand that the conduct complained of was indeed a transgression of the law. Instead, we usually assume that s/he knew that the behaviour complained of was criminal, but, for whatever reason, decided to engage in the behaviour anyway.[44]

Thus, a person who habitually refuses to comply with rules of behaviour set down by those in authority (e.g. school authorities, law enforcement) is more likely to be diagnosed with, say, oppositional defiant disorder or psychopathy than with low intelligence as such. Much the same might be true of some dog breeds, and indeed some individual dogs (and indeed wild or tame wolves).[45]

Sarich and Miele, in their discussion of Daniel Freedman’s research on behavioural differences among breeds, provide a good illustration of these problems. Thus, they describe how one of the tests conducted by Freedman involved measuring how well the different breeds navigated “a series of increasingly difficult mazes”. This would appear to be a form of intelligence test measuring spatial intelligence. However, in fact, they report, perhaps surprisingly:

The major breed differences were not in the ability to master the mazes (a rough measure of canine IQ) but in what they would do when they were placed in a maze they couldn’t master. The beagles would howl, hoping perhaps that another member of their pack would howl back and lead them to the goal. The inhibited Shelties would simply lie down on the ground and wait. Pugnacious terriers would try to tear down the walls of the maze, but the basenjis saw no reason they had to play by a human’s rules and tried to jump over the walls of the maze” (p202).

Far from demonstrating low intelligence, the behaviour of the terriers, and especially of the basenjis, might even be characterized as an impressive form of lateral thinking, inventiveness and creativity – devising a different way to escape the maze than that intended by the experimenter.

However, it more likely reflects the independent and rebellious personality of basenjis, a breed which is, according to Sarich and Miele, more recently domesticated than most other breeds, related to semi-domesticated pariah dogs, and which, they report, “dislike taking orders and are born canine scofflaws” (p201-2).

You may also recall that psychologist Hans Eysenck, in a passage quoted in greater length in the preceding section of this review, described this same breed, perhaps only semi-jocularly, as “natural psychopaths” (The IQ Argument: p170).

Consistent with this, Stanley Coren reports that they are the second least trainable dog, behind only Afghan Hounds.

Natural, Artificial and Sexual Selection

Of course, domestic dog breeds are a product, not of natural selection, but rather of artificial selection, i.e. selective breeding by human breeders, often deliberately undertaken to produce strains with different traits, both morphological and behavioural.

This, one might argue, makes dog breeds quite different to human races, since, although many have argued that humans are ourselves, in some sense, a domesticated species, albeit a self-domesticated one (i.e. we have domesticated ourselves, or perhaps one another), nevertheless most traits that differentiate human races seem to be a product of natural selection, in particular adaptation to different geographic regions and their climates.[46]

However, the processes of natural and artificial selection are directly analogous to each other. Indeed, they are so similar that it was the selective breeding of domestic animals by agriculturalists that helped inspire Darwin’s theory of natural selection, and was also used by Darwin to explain and illustrate this theory in The Origin of Species.

Moreover, many eminent biologists have argued that at least some racial differences are the product, not of natural selection (in the narrow sense), but rather of sexual selection, in particular mate choice.

Yet mate choice is arguably even more closely analogous to artificial selection than is natural selection, since both mate choice and artificial selection involve a deliberate choice as to which individuals breed – a choice exercised, in the case of artificial selection, by the human breeder, and, in the case of mate choice, by the prospective mate.

As Sarich and Miele themselves observe:

Unlike for dog breeds, no one has deliberately exercised that level of selection on humans, unless we exercised it on ourselves, a thought that has led evolutionary thinkers from Charles Darwin to Jared Diamond to attribute human racial variation to a process termed ‘sexual’ rather than ‘natural’ selection” (p236).

Thus, Darwin himself went so far as to claim in The Descent of Man that “as far as we are enabled to judge… none of the differences between the races of man are of any direct or special service to him”, and instead proposed:

The differences between the races of man, as in colour, hairiness, form of features, etc., are of a kind which might have been expected to come under the influence of sexual selection” (The Descent of Man: p189-90).

Darwin’s claim that none of the physical differences between races have any survival value is now clearly untenable, as anthropologists and biologists have demonstrated that many observed race differences, for example, in skin colour, nose shape, and bodily dimensions, represent, at least in part, climatic adaptations.[47]

However, the view that sexual selection has also played some role in human racial differentiation remains plausible, and has been championed in recent years by scientific polymath and populariser Jared Diamond in chapter six of his book The Third Chimpanzee, which he titles ‘Sexual Selection and the Origin of Human Races’ (The Third Chimpanzee: pp95-105), and especially by anthropologist Peter Frost in a series of papers and blog posts (e.g. Frost 2008).

For example, as emphasized by Frost, differences in hair colour, eye colour and hair texture, having no obvious survival benefits, yet often being associated with perceptions of beauty, might well be attributed, at least in part, to sexual selection (Frost 2006; Frost 2014; Frost 2015).

The same may be true of racial and sexual differentiation in levels of muscularity and in the distribution of body fat, as discussed later in this review.

For example, John R Baker, in his monumental magnum opus, Race (reviewed here), argues that the large protruding buttocks evinced among some San women likely reflect sexual selection (Race: p318).[48]

Meanwhile, both Frost and Diamond argue that even differences in skin colour, although partly reflecting the level of exposure to ultraviolet radiation from the sun in different regions of the globe and at different latitudes, and affecting vitamin D synthesis and susceptibility to sunburn and melanoma, all of which were subject to natural selection to some degree, likely also reflect mate choice and sexual selection, given that skin tone does not perfectly correlate with levels of exposure to UV rays in different regions, yet a lighter-than-average complexion seems to be cross-culturally associated with female beauty (van den Berghe and Frost 1986; Frost 1994; Frost 2014).

Similarly, in his recent book A Troublesome Inheritance, science writer Nicholas Wade, citing a study suggesting that an allele carried by East Asian people is associated with both thicker hair and smaller breasts in mice, suggests that this gene may have spread among East Asians as a consequence of sexual selection, with males preferring females as mates who possess one or both of these traits (A Troublesome Inheritance: p89-90).

Similarly, Wade also proposes that the greater prevalence of dry earwax among Northeast Asians, and, to a lesser degree, among Southeast Asians, Native Americans and Northern Europeans may reflect sexual selection and mate choice, because this form of earwax is also associated with a less strong body odour, and, in colder regions, where people spend more of their time indoors, Wade surmises that this is likely to be more noticeable, as well as unpleasant in a sexual partner (A Troublesome Inheritance: p90-91).[49]

Finally, celebrated Italian geneticist Luigi Luca Cavalli-Sforza proposes, in his book Genes Peoples and Languages that, although the “fatty folds of skin” around the eyes characteristic of East Asian peoples likely evolved to protect against “the cold Siberian air” and represent “adaptions to the bitter cold of Siberia”, nevertheless, since “these eyes are often considered beautiful” they “probably diffused by sexual selection from northeastern Asia into Southeast Asia where it is not at all cold” (Genes Peoples and Languages: p11).

Curiously, in this context, however, Sarich and Miele, save for the passing mention of Darwin and Diamond quoted above, not only make no mention of sexual selection as a possible factor in human racial differentiation, but also make the odd claim in relation to sexual selection that:

There has been no convincing evidence of it [i.e. sexual selection] yet in humans” (p186).[50]

As noted, this is a rather odd, if not outright biologically ignorant, claim.

It is true that some of the more outlandish claims of evolutionary psychologists for sexual selection – for example, Geoffrey Miller’s intriguing theory that human intelligence evolved through sexual selection – remain unproven, as indeed does the claim that sexual selection played an important role in human racial differentiation.

However, there is surely little doubt, for example, that human body-size dimorphism is a product of sexual selection (more specifically, intrasexual selection), since levels of body-size dimorphism are consistently correlated with levels of polygyny across many mammalian species.

A strong claim can also be made that the permanent breasts unique to human females evolved as a product of intersexual selection (see discussion here).

Sexual selection has also surely acted on human psychology, resulting in, among other traits, the greater levels of violent aggression among males.

On the other hand, Sarich and Miele may be on firmer ground when, in a later chapter, while not denying that sexual selection may have played a role in other aspects of human evolution, they nevertheless insist:

No one has yet provided any hard evidence showing that process [i.e. sexual selection] has produced racial differences in our species” (p236).

However, while this may be true, the idea that sexual selection has played a key role in human racial differentiation certainly remains a plausible hypothesis.

Physical Differences and Athletic Performance

Although they emphasize that morphological differences between human races are greater than those among some separate species of nonhuman animal, and also that such morphological differences provide, for many purposes, a more useful measure of group differences than genetic differences, nevertheless, in the remainder of the chapter on ‘Physical Race Differences’, Sarich and Miele have surprisingly little to say about the actual physical differences that exist between races, or about how and why such differences evolved.

There is no discussion of, for example, Thomson’s nose rule, which seems to explain much of the variation in nose shape among races, nor of Bergmann’s rule and Allen’s rule, which seem to explain much of the variation among humans in body-size and relative bodily proportions.

Instead, Sarich and Miele focus on what is presumably an indirect effect of physiological race differences – namely, differences in athletic performance as between races.

Even this topic is not treated thoroughly. Indeed, the authors talk of “such African dominance as exists in the sporting world” (p182) almost as if this applied to all sports equally.

Yet, just as people of black African descent are conspicuously dominant in certain athletic events (basketball, the 100m sprint), so they are noticeably absent among elite athletes in certain other sports, not least swimming – and, just as the overrepresentation of people of West African descent among elite sprinters, and East Africans among elite distance runners, has been attributed to biological differences, so has their relative absence among elite swimmers, which is most often attributed to differences in bone density and fat distribution, each of which affect buoyancy.

Yet, not only does Sarich and Miele’s chapter on ‘Physical Race Differences’ focus almost exclusively on differences in athletic ability, but a large part of the chapter is devoted to differences in performance in one particular sport, namely the performance of East Africans, especially Kenyans (and especially members of a single tribe, the Kalenjin), in long-distance running.

Yet, even here, their analysis is almost exclusively statistical, demonstrating the improbability that this single tribe, who represent, of course, only a tiny proportion of the world’s population, would achieve such success by chance alone if they did not have some underlying innate biological advantage.

They say little of the physiological factors that actually make East Africans such as the Kalenjin such great distance runners, nor of the evolutionary pressures that selected for these physiological differences.

Others have attributed this advantage to their having evolved to survive at a relatively high altitude, in a mountainous region on the borders of Kenya and Uganda, to which region they are indigenous, as well as their so-called ‘elongate’ body-type, which seems to have evolved as an adaptation to climate.

Amusingly, however, behavioural geneticist Glayde Whitney proposes yet another factor that might explain why the Kalenjin are such excellent runners – namely, according to him, they long had a notorious reputation among their East African neighbours as cattle thieves.

However, unlike cattle thieves in the Old West, they lacked access to horses (which, in sub-Saharan Africa are afflicted with sleeping sickness spread by the tsetse fly) and having failed to domesticate any equivalent indigenous African animal such as the zebra, had instead to escape with their plunder on foot. The result, Whitney posits, was strong selection pressure for running ability in order to outrun and escape any pursuers:

Why are the Kalenjin such exceptional runners? There is some speculation that it may be because the tribe specialized in cattle thievery. Anyone who can run a great distance and get away with the stolen cattle will have enough wealth to meet the high bride price of a good spouse. Because the Kalenjin were polygamous, a really successful cattle thief could afford to buy many wives and make many little runners. This is a good story, anyway, and it might even be true” (Whitney 1999).

The closest Sarich and Miele themselves come to providing a physiological explanation for black sporting success is a single sentence where they write:

Body-fat levels seem to be at a minimum among African populations; the levels do not increase with age in them, and Africans in training can apparently achieve lower body-fat levels more readily than is the case for Europeans and Asians” (p182).

This claim seems anecdotally plausible, at least in respect of young African-American males, many of whom appear able to retain lean, muscular physiques, despite seemingly subsisting on a diet composed primarily of fried chicken with a regrettable lack of healthy alternatives such as watermelon.

However, as was widely discussed in relation to the higher mortality rates experienced among black people (and among fat people) during the recent coronavirus pandemic, there is also some evidence of higher rates of obesity among African-Americans.

Actually, however, this problem seems to be restricted to black women, who evince much higher rates of obesity than do women of most other races in the USA.[51]

African-American males, on the other hand, seem to have similar rates of obesity to white American males.

Thus, according to data cited by the US Department of Health and Human Services and Office of Minority Health, more than 80% of African American women are obese or overweight, as compared to only 65% of white women. However, among males the pattern is reversed, with a somewhat higher proportion of white men being overweight or obese than black men (75% of white men versus only about 71% of black men) (US Department of Health and Human Services and Office of Minority Health 2020).

This pattern is replicated in the UK, where black women have higher rates of obesity than white women, but, again, black men have rather lower rates of obesity than white men, with East Asians consistently having the lowest rates of obesity among both sexes.

That similar patterns are observed in both the UK and the USA suggests that the differences reflect an innate race difference – or rather an innate race difference in the magnitude of an innate sex difference, namely in body fat levels, which are higher among women than among men in all racial groups.[52]

This may perhaps be a product of sexual selection and mate choice.

Thus, if black men do indeed, as popular stereotype suggests, like big butts, then black women may well have evolved to have bigger butts through sexual selection.[53]

At least in the US, there is indeed some evidence that mating preferences differ between black and white men with regard to preferred body-types, with black men preferring somewhat heavier body-types (Allison et al 1993; Thompson et al 1996; Freedman et al 2004), though other research suggests little or no significant difference in preferences for body-weight as between black and white men (Singh 1994; Freedman et al 2006).[54]

Sexual selection or, more specifically, mate choice may similarly explain the evolution of fatty breasts among women of all races and the evolution of fatty protruding buttocks among Khoisan women of Southern Africa (which I have written about previously and alluded to above).

Conversely, if the greater fat levels observed among black women are a product of sexual selection and, in particular, of mate choice, then perhaps the greater levels of muscularity and athleticism apparently observed among black men may also be a product of intrasexual selection or male-male competition (e.g. fighting).

Thus, it is possible that levels of intrasexual selection operating on males may have been elevated in sub-Saharan Africa because of the greater prevalence of polygyny in this region, since polygyny intensifies reproductive competition by increasing the reproductive stakes (see Sanderson, Race and Evolution: p92-3; Draper 1989; Frost 2008).

At any rate, other physical differences between the races besides differences in body fat levels also surely play a role in explaining the success of African-descended athletes in many sports.

For example, African populations tend to have somewhat longer legs and arms relative to their torsos than do Europeans and Asians. This reflects Allen’s rule of thermal regulation, whereby organisms that evolved in colder climates evolve relatively shorter limbs and other appendages, both to minimize the ratio of surface area to volume, and hence the proportion of the body directly exposed to the elements, and because it is the extremities that are especially vulnerable to frostbite.

Thus, blacks, having evolved in the tropics, have relatively longer legs and arms than do Europeans and Asians.[55]

Greater relative leg length, sometimes measured by the ratio of sitting to standing height, is surely an advantage in running events, which might partially explain black success in track events and indeed many other sports that also involve running. It may also explain African-American performance in sports that involve jumping (e.g. basketball, the high jump and long jump), since leg length confers an advantage here as well.

Meanwhile, greater relative arm length, sometimes measured by armspan to height ratio, is likely an advantage in sports such as basketball, boxing and racquet sports, since it confers greater reach.

Yet, at least some of the factors that benefit East Africans in distance events are opposite to those that favour West Africans in sprinting (e.g. the relative proportions of fast- versus slow-twitch muscle fibres; a mesomorphic versus an ectomorphic body-build). This suggests that it is, at best, a simplification to talk about a generalized African advantage in running, let alone in athletics as a whole.

Neither do the authors discuss the apparent anomaly whereby racially-mixed African-Americans and West Indians outcompete indigenous West Africans, who, being unmixed, surely possess whatever qualities benefit African-Americans in even greater abundance than do their transatlantic cousins.[56]

Sarich and Miele also advance another explanation for the superior performance of blacks in running events, which strikes me as a very odd argument and not at all persuasive. Here, they argue that, since anatomically modern humans first evolved in Africa:

Our basic adaptations are African. Given that, it would seem that we would have had to make adaptive compromises, such as to cold weather, when populating other areas of the world, thus taking the edge off our ‘African-ness’” (p182)

As a result of our distinctive adaptations having first evolved in Africa, Sarich and Miele argue:

Africans are better than the rest of us at some of those things that most make us human, and they are better because their separate African histories have given them, in effect, better genes for recently developed tests of some basic human adaptations. The rest of us (or, more fairly, our ancestors) have had to compromise some of those African specializations in adapting to more temperate climates and more varied environments. Contemporary Africans, through their ancestors, are advantaged in not having had to make such adaptations, and their bodies, along with their resulting performances, show it” (p183).

Primary among these “basic adaptations”, “African specializations” and “things that most make us human” is, they argue, bipedalism (i.e. walking on two legs). This then, they seem to be arguing, explains African dominance in running events, which represent, if you like, the ultimate measure of bipedal ability.

This argument strikes me as completely unpersuasive, if not wholly nonsensical.

After all, another of our “basic adaptations”, even more integral to what “makes us human” than bipedalism is surely our high levels of intelligence and large brains (see discussion below) as compared to other primates.

Yet Africans notoriously do not appear to have “better genes” for this trait, at least as measured in yet another of those “recently developed tests of some basic human adaptations”, namely IQ tests.

Athletic and Cognitive Ability

This, of course, leads us directly to another race difference that is the subject of even greater controversy – namely race differences in intellectual ability.

The real reason we are reluctant to discuss athletic superiority is, Sarich and Miele contend, because it is perceived as also raising the spectre of intellectual inferiority.

In short, if races differ sufficiently genetically to cause differences in athletic performance, then it is surely possible they also differ sufficiently genetically to cause differences in academic performance and performance on IQ tests.

However, American high school movie stereotypes of ‘dumb jocks’ and ‘brainy nerds’ notwithstanding, there is no necessary inverse correlation between intellectual ability and ability at sports.

Indeed, Sarich and Miele argue that athletic ability is actually positively correlated with intellectual ability.

I can see no necessary, or even likely, negative correlation between the physical and the mental. On the contrary, the data show an obvious, strong, positive correlation among class, physical condition, and participation in regular exercise in the United States” (p182).

Thus, they report:

Professional football teams have, in recent years, been known to use the results of IQ tests as one indicator of potential in rookies. And a monumental study of intellectually gifted California schoolchildren begun by Lewis Terman in the 1920s that followed them through their lives showed clearly that they were also more gifted physically than the average” (p183).[57]

It is likely true that intelligence and athletic ability are positively correlated – if only because many of the same things that cause physical disabilities (e.g. physical trauma, developmental disorders) also often cause mental disability. Down syndrome, for example, causes both mental and physical disability; and, if you are crippled in a car crash, you may also suffer brain damage.

Admittedly, there may be some degree of trade-off between performance in different spheres, if only because the more time one devotes to playing sports, then, all else being equal, the less time one has left to devote to one’s studies, and, in both sports and academics, performance usually improves with practice.

On the other hand, however, it may be that doing regular exercise and working hard at one’s studies are positively correlated because both reflect the same underlying personality trait of conscientiousness.

On this view, the real trade-off may be, not so much between spending time, on the one hand, playing sports and exercising and, on the other, studying, as it is between, on the one hand, engaging in any or all of these productive endeavours and, on the other hand, engaging in wasteful and unproductive endeavours such as watching television, playing computer games and shooting up heroin.

As for the American high school movie stereotype of the ‘dumb jock’, this, I suspect, may partly reflect the peculiar American institution of athletic scholarships, whereby athletically gifted students are admitted to elite universities despite being academically underqualified.

On the other hand, I suspect that the ‘brainy nerd’ stereotype may have something to do with a mild subclinical presentation of the symptoms of high-functioning autism.

This is not to say that ‘nerdishness’ and autism are the same thing, but rather that ‘nerdishness’ represents a milder subclinical presentation of autism symptoms not sufficient to justify a full-blown diagnosis of autism. Autistic traits are, after all, a matter of degree.

Thus, it is notable that the symptoms of autism include many traits that are also popularly associated with the nerd stereotype, such as social awkwardness, obsessive ‘nerdy’ special interests, and perhaps even that other popular stereotype of ‘nerds’, namely having to wear glasses.

More relevant for our purposes, high functioning autism is also associated with poor physical coordination and motor skills, which might explain the stereotype of ‘nerds’ performing poorly at sports.

On the other hand, however, contrary to popular stereotype, autism is not associated with above average intelligence.[58]

In fact, although autistic people can present the whole range of intelligence, from highly gifted to intellectually disabled, autism is overall said to be associated with somewhat lower average intelligence than is observed in the general population.

This is consistent with the fact that autism is indeed, contrary to the claims of some neurodiversity advocates, a developmental disorder and disability.

However, I suspect autism may be underdiagnosed among those of higher intelligence, precisely because they are able to use their higher general intelligence to compensate for and hence ‘mask’ their social impairments such that they go undetected and often undiagnosed.

Moreover, autism has a complex and interesting relationship with intelligence, and autism seems to be associated with special abilities in specific areas (Crespi 2016).

There is also some evidence, albeit mixed, that autistic people score relatively higher in performance IQ and spatio-visual ability than in verbal IQ. Given there is some evidence of a link between spatio-visual intelligence and mathematical ability, this might plausibly explain the stereotype of nerds being especially proficient in mathematics (i.e. ‘maths nerds’).

Overall, then, there is little evidence of, or any theoretical reason to anticipate, any trade-off or inverse correlation between intellectual and athletic ability. On the contrary, there is probably some positive correlation between intelligence and athletic ability, if only because the same factors that cause intellectual disabilities – physical trauma, brain damage, birth defects, chromosomal abnormalities – also often cause physical disabilities.

On the other hand, however, Philippe Rushton, in the ‘Preface to the Third Edition’ of his book, Race Evolution and Behavior (which I have reviewed here), contends that some of the same physiological factors that cause blacks to excel in some athletic events are also indirectly associated with other racial differences that perhaps portray blacks in a less flattering light.

Thus, Rushton reports that the reason blacks tend, on average, to be faster runners is because:

Blacks have narrower hips [than whites and East Asians] which gives them a more efficient stride” (Race Evolution and Behavior: p11).

But, he continues, the reason why blacks are able to have narrower hips, and hence a more efficient stride, is that they give birth to smaller-brained, and hence smaller-headed, infants:

The reason why Whites and East Asians have wider hips than Blacks, and so make poorer runners, is because they give birth to larger brained babies” (Race Evolution and Behavior: p12).[59]

Yet, as discussed below, brain size is itself correlated with intelligence, both as between species, and as between individual humans.

Similarly, Rushton argues:

Blacks have from 3 to 19% more of the sex hormone testosterone than Whites or East Asians. These testosterone differences translate into more explosive energy, which gives Blacks the edge in sports like boxing, basketball, football, and sprinting” (Race Evolution and Behavior: p11).

However, higher levels of testosterone also have a downside, not least since:

The hormones that give Blacks an edge at sports makes them more masculine in general — physically active in school, and more likely to get into trouble” (Race Evolution and Behavior: p12).

In other words, if higher levels of testosterone give blacks an advantage in some sports, they perhaps also result in the much higher levels of violent crime and conduct disorders reported among people of black African descent (see Ellis 2017).[60]

Intelligence

Whereas their chapter on ‘Race and Physical Differences’ focussed mostly on differences in athletic ability, Sarich and Miele’s chapter on ‘Race and Behavior’, focuses, perhaps inevitably, almost exclusively on race differences in intelligence.

However, though it certainly has behavioural correlates, intelligence is not, strictly speaking, an element of behaviour as such. The chapter would therefore arguably be more accurately titled ‘Race and Psychology’ – or indeed ‘Race and Intelligence’, since this is the psychological difference upon which they focus almost to the exclusion of all others.[61]

Moreover, Sarich and Miele do not even provide a general, let alone comprehensive, review of the evidence on the subject of race differences in intelligence, their causes and consequences. Instead, they focus on two very specific issues and controversies:

  1. Race differences in brain size; and
  2. The average IQ of blacks in sub-Saharan Africa.

Yet, despite the title of the chapter, neither of these reflects a difference in behaviour as such.

Indeed, race differences in brain-size are actually a physical difference – albeit a physical difference presumed, not unreasonably, to be associated with a psychological difference – and therefore should, strictly speaking, have gone in their previous chapter on ‘Race and Physical Differences’.

Brain Size

Brain-size and its relation to both intelligence and race is a topic I have written about previously. As between individuals, there exists a well-established correlation between brain-size and IQ (Pietschnig et al 2015; Rushton and Ankney 2009).

Nicholas Mackintosh, himself by no means a doctrinaire hereditarian and a critic of hereditarian theories with respect to race differences in intelligence, nevertheless reports in the second edition of his undergraduate textbook on IQ and Human Intelligence, published in 2011:

Although the overall correlation between brain size and intelligence is not very high, there can be no doubt of its reliability” (IQ and Human Intelligence: p132).

Indeed, Sarich and Miele go further. In a critique of the work of infamous scientific charlatan Stephen Jay Gould, to whom they attribute the view that “brain size and intellectual performance have nothing to do with one another”, they retort:

Those large brains of ours could not have evolved unless having large brains increased fitness through what those large brains made possible-that is, through minds that could do more” (p213).

This is especially so given the metabolic expense of brain tissue and other costs of increased brain size, such that, to have evolved during the course of human evolution, our large brains must have conferred some compensating advantage.

Thus, dismissing Gould as a “behavioral creationist”, given his apparent belief that the general principles of natural selection somehow do not apply to behaviour, or at least not to human behaviour, the authors forthrightly conclude:

The evolutionary perspective demands that there be a relationship-in the form of a positive correlation-between brain size and intelligence… Indeed, it seems to me that a demonstration of no correlation between brain size and cognitive performance would be about the best possible refutation of the fact of human evolution” (p214).

Here, the authors go a little too far. Although, given the metabolic expense of brain tissue and other costs associated with increased brain size, larger brains must have conferred some selective advantage to offset these costs, it need not necessarily have been an advantage in intelligence, certainly not in general intelligence. Instead, increased brain-size could, at least in theory, have evolved in relation to some specific ability, or cognitive or neural process, other than intellectual ability.

Yet, despite this forthright claim, Sarich and Miele then go on to observe that one study conducted by one of Sarich’s graduate students, in collaboration with Sarich himself, actually found no association between brain size and IQ as between siblings from the same family (Schoenemann et al 2000).

This, Sarich and Miele explain, suggests the relationship between brain-size and IQ is not causal, but rather that some factor that differs as between families is responsible for causing both the larger brains and the higher IQs. However, they explain, “the obvious candidates” (e.g. socioeconomic status, nutrition) do not have anywhere near a large enough effect to account for this (p222).

However, they fail to note that other studies have found a correlation between brain size and IQ scores even within families, suggesting that brain size does indeed cause higher intelligence (e.g. Jensen & Johnson 1994; Lee et al 2019).

Indeed, according to Rushton and Ankney (2009: 695), even prior to the Lee et al study, four studies had already established a correlation between brain-size and IQ even within families, a couple of them published before Sarich and Miele’s book.

Of course, Sarich and Miele can hardly be faulted for failing to cite Lee et al (2019), since that study had not been published at the time their book was written. However, other studies (e.g. Jensen & Johnson 1994) had already been published at the time Sarich and Miele authored their book.

Brain-size is also thought to correlate with intelligence as between species, at least after controlling for body-size (see encephalization quotient).

However, comparing the intelligence of different species obviously represents a difficult endeavour.

Quite apart from the practical challenges (e.g. building a maze for a mouse to navigate in the laboratory is simple enough, building a comparable maze for elephants presents more difficulties), there is the fact that, whereas most variation in human intelligence, both between individuals and between groups, is captured by a single g factor, different species no doubt have many different specialist abilities.[62]

For example, migratory birds surely have special abilities in respect of navigation. However, these are not necessarily reflective of their overall general intelligence.

In other words, if you think a ‘culture-fair’ IQ test is an impossibility, then try designing a ‘species-fair’ test!

If brain-size correlates with intelligence both as between species and as between individual humans, it seems probable that race differences in brain-size also reflect differences in intelligence.

However, larger brains do not automatically, or directly, confer, or cause, higher levels of intelligence.

For example, most dwarves have IQs similar to those of non-dwarves, despite having smaller brains in absolute terms and, save in the case of ‘proportionate dwarves’, larger brains relative to their body-size. Neither is macrocephaly (i.e. abnormally and pathologically large head-size) associated with exceptional intelligence.

The reason that disproportionate dwarves and people afflicted with macrocephaly do not have especially high intelligence, despite larger brains relative to their body size, is probably because these are abnormal pathological conditions. The increased brain-size did not evolve through natural selection, but rather represents some kind of malfunction in development.

Therefore, whereas increases in brain size that evolved through natural selection must have conferred some advantage to offset the metabolic expense of brain tissue and other costs associated with increased brain size, these sort of pathological increases in brain-size need not have any compensating advantages, since they did not evolve through natural selection at all, and the increased relative brain size may indeed be wasted.

Likewise, although sex differences in brain-size are greater than those between races, at least before controlling for body-size, sex differences in IQ are either small or non-existent.[63]

Meanwhile, Neanderthals had larger brains than modern humans, despite a shorter, albeit more robust, stocky and more muscular, frame, and a somewhat heavier overall body weight.

As with so much discussion of the topic of race differences in intelligence, Sarich and Miele focus almost exclusively on the topic of differences between whites and blacks, the authors reporting:

With respect to the difference between American whites and blacks, the one good brain-size study that has been done indicates a difference between them of about 0.8 SD [i.e. 0.80 of a standard deviation]; this could correspond to an IQ difference of about 5 points, or about one-third of the actual differential [actually] found [between whites and blacks in America]” (p217)

The remainder of the differential presumably relates to internal differences in brain-structure as between the races in question, whether these differences are environmental or innate in origin.

Yet Sarich and Miele say little if anything to my recollection about the brain-size of other groups, for example Australian Aboriginals or East Asians.

Neither, most tellingly, do they discuss the brain-size of the race of mankind gifted with the largest average brain size – namely, Eskimos.

Yet the latter are not renowned for their contributions to science, the arts or civilization.

Moreover, according to Richard Lynn, their average IQ is only 91, as compared to an average IQ of 100 for white Europeans – high for a people who, until recently, subsisted largely as hunter-gatherers (other such groups – Australian Aborigines, San Bushmen, Native Americans – have low average IQs), but well below whites, East Asians and Ashkenazi Jews, each of whom possess, on average, smaller brains than Eskimos (see Race Differences in Intelligence: reviewed here).

A clear pattern emerges in respect of the relative brain-size of different human populations: in general, the greater the latitude of the region in which a given population evolved, the greater its brain-size. Hence the large brains of Eskimos (Beals et al 1984).

This then seems to be a climatic adaptation. Some racialists like Richard Lynn and Philippe Rushton have argued that this reflects the greater cognitive demands of surviving in a cold climate (e.g. building shelter, making fire, clothes, obtaining sufficient foods in regions where plant foods are rare throughout the winter).

In contrast, to the extent that race and population differences in average brain size are even acknowledged by mainstream anthropologists, they are usually attributed to Bergmann’s rule of temperature regulation. Thus, the authors of one recent undergraduate-level textbook on biological anthropology contend:

Larger and relatively broader skulls lose less heat and are adaptive in cold climates; small and relatively narrower skulls lose more heat and are adaptive in hot climates” (Human Biological Variation: p285).[64]

As noted, this seems to be an extrapolation of Bergmann’s rule of temperature regulation. Put simply, in a cold climate, it is adaptive to minimize the proportion of the body that is directly exposed to the elements, or, in other words, to minimize the ratio of surface-area-to-volume.

As the authors of another undergraduate level textbook on physical anthropology explain:

“The closer a structure approaches a spherical shape, the lower will be the surface-to-volume ratio. The reverse is true as elongation occurs—a greater surface area to volume is formed, which results in more surface to dissipate heat generated within a given volume. Since up to 80 percent of our body heat may be lost through our heads on cold days, one can appreciate the significance of shape” (Human Variation: Races, Types and Ethnic Groups, 5th Ed: p188).

However, it seems implausible that an increase in metabolically expensive brain tissue would have evolved solely for regulating temperature, when the same result could have been achieved at less metabolic cost by modifying only the external shape of the skull.

Moreover, perhaps tellingly, it seems that brain size correlates more strongly with latitude than do other measures of body-size. Thus, in their review of the data on population differences in cranial capacity, Beals et al report:

Braincase volume is more highly correlated with climate than any of the summative measures of body size. This suggests that cranial morphology may be more influenced by the thermodynamic environment than is the body as a whole” (Beals et al 1984: p305).

Given that, contrary to popular opinion, we do not in fact lose an especially large proportion of our body heat from our heads, certainly not the eighty percent claimed by Molnar in the anthropology textbook quoted above, this is not easy to explain in terms of temperature regulation alone.

At any rate, even if differences in brain size did indeed evolve solely for the purposes of temperature regulation, then it is still surely possible that differences in average intelligence evolved as a byproduct of such increases in brain-size.

Measured IQs in Sub-Saharan Africa

With regard to the second controversial topic upon which Sarich and Miele focus their discussion in their chapter on ‘Race and Behavior’, namely that of the average IQ in sub-Saharan Africa, the authors write:

Perhaps the most enigmatic and controversial results in the IQ realm pertain to sub-Saharan Africans and their descendants around the world. The most puzzling single finding is the apparent mean IQ of the former of about 70” (p225).

This figure applies, it ought to be emphasized, only to black Africans still resident within sub-Saharan Africa. Blacks resident in western economies (except Israel, oddly), whether due to racial admixture or environmental factors, or a combination of the two, generally score much higher, though still substantially below whites and Asians, with average IQs of about 85, compared, of course, to a white average of 100 (see discussion here).

The figure seems to come originally from the work of Richard Lynn on national IQs (reviewed here, for discussion of black IQs in particular: see here, here and here), and has inevitably provoked much criticism and controversy.[65]

While the precise figure has been questioned, it is nevertheless agreed that the average IQ of blacks in sub-Saharan Africa is indeed very low, and considerably lower than that of blacks resident in western economies, unsurprisingly given the much higher living standards of the latter.[66]

For their part, Sarich and Miele seem to accept Lynn’s conclusion, albeit somewhat ambiguously. Thus, they conclude:

One can perhaps accept this [figure] as a well-documented fact” (p225).

Yet including both the word “perhaps” and the phrase “well-documented” in a single sentence and in respect of the same ostensible “fact” strikes me as evidence of evasive fence-sitting.

An IQ of below 70 is, in Western countries, regarded as indicative of mental retardation, though not as conclusive evidence for it, since mental disability is not, in practice, diagnosed by IQ alone.[67]

However, Sarich and Miele report:

Interacting with [Africans] belies any thought that one is dealing with an IQ 70 people” (p226).[68]

Thus, Sarich and Miele point out that, unlike black Africans:

Whites with 70 IQ are obviously substantially handicapped over and above their IQ scores” (p225).

In this context, an important distinction must be recognised between, on the one hand, what celebrated educational psychologist Arthur Jensen calls “biologically normal mental retardation” (i.e. individuals who are simply at the tail-end of the normal distribution), and, on the other, victims of conditions such as chromosomal abnormalities like Down Syndrome or of brain damage, who tend to be impaired in other ways, both physical and psychological, besides intelligence (Straight Talk About Mental Tests: p9).

Thus, as he explains in his more recent and technical book, The g Factor: The Science of Mental Ability:

There are two distinguishable types of mental retardation, usually referred to as ‘endogenous’ and ‘exogenous’ or, more commonly, as ‘familial’ and ‘organic’… In familial retardation there are no detectable causes of retardation other than the normal polygenic and microenvironmental sources of IQ variation that account for IQ differences throughout the entire range of IQ… Organic retardation, on the other hand, comprises over 350 identified etiologies, including specific chromosomal and genetic anomalies and environmental prenatal, perinatal, and postnatal brain damage due to disease or trauma that affects brain development. Nearly all of these conditions, when severe enough to cause mental retardation, also have other, more general, neurological and physical manifestations of varying degree… The IQ of organically retarded children is scarcely correlated with the IQ of their first-order relatives, and they typically stand out as deviant in other ways as well” (The g Factor: p368-9).

Clearly, given that the entire normal distribution of IQ among blacks is shifted downwards, a proportionally greater number of blacks with IQs below any given threshold will simply be at the tail-end of the normal distribution for their race rather than suffering from, say, chromosomal abnormalities, as compared to whites or East Asians with the same low IQs.

Thus, as Sarich and Miele themselves observe:

Given the nature of the bell curve for intelligence and the difference in group means, there are proportionately fewer whites with IQs below 75, but most of these are the result of chromosomal or single-gene problems and are recognizable as such by their appearance as much as by their behavior” (p230).

This, then, is why low-IQ blacks appear relatively more competent and less stereotypically ‘retarded’ than whites or East Asians with comparably low IQs, since the latter are more likely to have deficits in other areas, both physical and psychological.

Thus, leading intelligence researcher Nicholas Mackintosh reports that low-IQ blacks perform much better than whites of similarly low IQ in respect of so-called adaptive behaviours – i.e. the ability to cope with day-to-day life (e.g. to feed, dress and clean oneself, and to interact with others in an apparently ‘normal’ manner).

Indeed, Mackintosh reports that, according to one sociological study first published in 1973:

If IQ alone was used as a criterion of disability, ten times as many blacks as whites would have been classified as disabled; if adaptive behaviour measures were added to IQ, this difference completely vanished” (IQ and Human Intelligence: p356-7).

This is indeed among the reasons that IQ alone is now no longer deemed a sufficient ground in and of itself for diagnosing a person as suffering from a mental disability.

Similarly, Jensen himself reports:

In social and outdoor play activities… black children with IQ below seventy seldom appeared as other than quite normal youngsters— energetic, sociable, active, motorically well coordinated, and generally indistinguishable from their age-mates in regular classes. But… many of the white children with IQ below seventy… appeared less competent in social interactions with their classmates and were motorically clumsy or awkward, or walked with a flatfooted gait” (The g Factor: p367).[69]

Indeed, in terms of physical abilities, some black people with what are, at least by white western standards, very low IQs, can even be talented athletes, a case-in-point being celebrated world heavyweight boxing champion, Muhammad Ali, who tested so low in an IQ test that was used by the armed services for recruitment purposes that he was initially rejected as unfit for military service.[70]

In contrast, I am unaware of any successful white or indeed Asian athletes with comparably low IQs.

In short, according to this view, most sub-Saharan Africans with an IQ of 70 or below are not really mentally handicapped at all. On the contrary, they are within the normal range for the subspecies to which they belong.
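The statistical point here is worth making concrete. On the conventional normal model of IQ scores (mean 100, standard deviation 15, in the white reference population) – a standard psychometric assumption, and my own illustration rather than a calculation taken from the book – the share of a population falling below the IQ-70 cutoff depends entirely on where that population’s own mean sits:

```python
import math

def share_below(cutoff: float, mean: float, sd: float = 15.0) -> float:
    """Fraction of a normally distributed population scoring below `cutoff`."""
    z = (cutoff - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Population mean 100 (the conventional white reference): IQ 70 sits two
# standard deviations below the mean, so only ~2.3% score lower.
print(round(share_below(70, 100), 3))

# Population mean 70 (Lynn's sub-Saharan African estimate): the same cutoff
# now sits at the median, so fully half the population scores below it.
print(round(share_below(70, 70), 3))
```

In a population whose own mean is 100, an IQ of 70 is a two-standard-deviation outlier, and hence plausibly pathological; in a population whose own mean is 70, it is simply the median score – which is precisely the sense in which such scores may fall within the normal range for the group concerned.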

Indeed, to adopt an admittedly provocative analogy or reductio ad absurdum, it would be no more meaningful than to say that the average chimpanzee is mentally handicapped simply because chimpanzees are much less intelligent than the average human.

Sarich and Miele adopt another, less provocative analogy, suggesting that, instead of comparing sub-Saharan Africans with mentally handicapped Westerners, we do better to compare them to Western eleven-year-old children, since 70 is also the average score for children around this age (p229-30).

Thus, they cite Lynn himself as observing:

Since the average white 12-year-old can do all manner of things, including driving cars and even fixing them, estimates of African IQ should not be taken to mean half the population is mentally retarded” (p230).

However, this analogy is, I suspect, just as misleading.

After all, just as people suffering from brain damage or chromosomal abnormalities such as Down Syndrome tend to differ from normal people in various ways besides intelligence, so children differ from adults in many ways other than intelligence.

Thus, even highly intelligent children often lack emotional maturity and worldly knowledge.[71]

Khoisan Intelligence

Interestingly, however, the authors suggest that one morphologically very distinct subgroup of sub-Saharan Africans – recognised as a separate race (Capoid, as opposed to Congoid, in Carleton Coon’s terminology and taxonomy) by many early twentieth-century anthropologists – may be an exception when it comes to sub-Saharan African IQs, namely the San Bushmen.

Thus, citing anecdotal evidence of a single individual Bushman who proved himself very technically adept and innovative in repairing a car motor, the authors quote population geneticist Henry Harpending, who has done fieldwork in Africa, as observing:

All of us have the impression that Bushmen are really quick and clever and are quite different from their neighbours” (p227).

They also quote Harpending as anticipating:

There will soon be real data available about the relative performance of Bushmen, Hottentot, and Bantu kids – or more likely, they will suppress it” (p227).

Some two decades later, the only data I am aware of is that reported by Richard Lynn.

Relying on just two very limited studies of Khoisan intelligence, Lynn nevertheless does not hesitate to estimate Bushmen’s average IQ at just 54 – the lowest that he reports for any ethnic group anywhere in the world (Race Differences in Intelligence: p76).

However, we should be reluctant to accept this conclusion prematurely. Not only does Lynn rely on just two studies of Khoisan intelligence, but both were very limited, with neither remotely resembling a full modern IQ test.

Agriculture, Foraging and Intelligence

As to why higher intelligence might have been selected for among the San Bushmen than among neighbouring tribes of black Bantu, the authors consider the possibility that there was “lessened selection for intelligence (or at least cleverness) with the coming of agriculture, versus hunting-gathering”, since, whereas the Bantu are agriculturalists, the San still subsist through hunting-gathering (p227).

On this view, hunter-gatherers must employ intelligence to track and capture prey and otherwise procure food, whereas farming, save for the occasional invention of a new agricultural technique, is little more than tedious, repetitious and mindless drudgery.

I am reminded of Jared Diamond’s provocative claim, in his celebrated book, Guns, Germs and Steel, that “in mental ability New Guineans are probably genetically superior to Westerners”, since the former must survive on their wits, avoid being murdered and procure prey to survive, whereas in densely populated agricultural and industrial societies most mortality comes from disease, which tends to strike randomly (Guns, Germs and Steel: p20-1).

Yet, however intuitively plausible this theory might appear – especially, perhaps, to those of us who have never, in our entire lives, either hunted or farmed, certainly not in the manner of the Bantu or San – it is not supported by the evidence.

According to data collected by Richard Lynn in his book, Race Differences in Intelligence (reviewed here), albeit quite limited data, both New Guineans and San Bushmen have very low average IQs, lower even than those of other sub-Saharan Africans.[72]

Thus, they again quote Henry Harpending as concluding:

Almost any hypothesis about all this can be falsified with one sentence. For example:

  1. Hunting-gathering selects for cleverness. But then why do Australian Aborigines do so badly in school and on tests?
  2. Dense labor-intensive agriculture selects for cleverness, explaining the high IQ scores in the Far East and in South India. But then why is there not a high-IQ pocket in the Nile Valley?

And so on. I don’t have any viable theory about it all.”[73]

Indeed, if we rely on Lynn’s data in his book, Race Differences in Intelligence (which I have reviewed here), then it would seem that groups that have, until recently, subsisted primarily through a hunter-gatherer lifestyle, tend to have low IQs.

Thus, Lynn attributes exceptionally low average IQs not only to San Bushmen, but also to African Pygmies and Australian Aboriginals, and, while his data for the Bushmen and Pygmies is very limited, his data on Australian Aboriginals from the Australian school system is actually surprisingly abundant, revealing an average IQ of just 62.

Interestingly, other groups who had already partly, but not wholly, transitioned to agriculture by the time of European contact, such as Pacific Islanders and Native Americans, tend to score rather higher, each with average IQs of around 85 – higher indeed than the average IQs of black Bantu agriculturalists in Africa.

Indeed, even cold-adapted Eskimos, also until recently hunter-gatherers, but with the largest brain size of any human population, score only around 90 in average IQ according to Lynn.

Interestingly, one study that I am aware of did find evidence that a genetic variant associated with intelligence, executive function and working memory was more prevalent among populations that had transitioned to agriculture than among hunter-gatherers (Piffer 2013).

‘Race Bombs’?

In their final chapter, ‘Learning to Live With Race’, Sarich and Miele turn to the vexed subject of the social and political implications of what they have reported and concluded regarding the biology of race and of race differences in previous chapters.

One interesting if somewhat sensationalist subject that they discuss is the prospect of what they call “ethnically targeted weapons” or “race bombs”. These are:

The ultimate in biological weapons… ethnically targeted weapons – biological weapons that selectively attack members of a certain race or races but, like the Death Angel in the Book of Exodus, ignore members of the attacker’s race” (p250).

This might sound more like dystopian science fiction than science, but Sarich and Miele cite evidence that some regimes have indeed attempted to develop such weapons.

Perhaps predictably, the regimes in question are the ‘usual suspects’, those perennial pariah states of liberal democratic western modernity – namely, apartheid-era South Africa and Israel. Each was or is, nevertheless, very much a western state, which is, of course, the very reason for its pariah status, since it is for this reason held to higher standards than other African and Middle Eastern polities.

The evidence the authors cite goes beyond mere sensationalist rumours and media reports.

Thus, they report that one scientist who had been employed in a chemical and biological warfare plant in South Africa testified before the post-apartheid Truth and Reconciliation Commission that he had indeed led a research team tasked with developing a ‘pigmentation weapon’ that would ‘target only black people’ and that could be spread ‘through beer, maize, or even vaccinations’ (p252).

Meanwhile, according to media reports and government leaks cited by the authors, Israel has taken up the gauntlet of developing a ‘race bomb’, building on the research begun by its former ally South Africa (p252).

Unfortunately, however, (or perhaps fortunately, especially for the Palestinians) Sarich and Miele report that, as compared to developing a ‘race bomb’ for use in apartheid-era South Africa:

Developing a weapon that would target Arabs but spare Jews would be much harder because the two groups are exceedingly alike genetically” (p253).[74]

Indeed, this problem is not restricted to the Middle East. On the contrary, Sarich and Miele report, listing almost every ethnic conflict that had recently been in the headlines at the time they authored their book:

The same would hold for the Serbs, Croats, and Bosnians in the former Yugoslavia; the Irish Catholics and Ulster Protestants in Northern Ireland; North and South Korea; and Pakistan and India” (p254)

This is, of course, because warring ethnic groups tend to be neighbours, often with competing claims to the same territory; yet, for the same reason, they also often share common origins, as well as the inevitable history of mating, miscegenation and intermarriage that invariably occurs wherever different groups come into contact with one another, howsoever discouraged and illicit such relationships may be.

Thus, paradoxically, warring ethnic groups are almost always genetically quite closely related to one another.

The only exceptions to this general rule are found where there have been recent large-scale movements of populations from distant regions of the globe, and the various populations have yet to interbreed for long enough to dilute their genetic differences (e.g. blacks and whites in the USA or South Africa).

Thus, Sarich and Miele identify only Sudan in Northeast Africa as, at the time they were writing, a “likely prospect for this risk” (namely, the development of a ‘race bomb’), as war was then raging between what they describe as the “racially mixed Islamic north and the black African Christian and traditional-religion south” (p255).

Yet, here, even assuming that the genetic differences between the two then-warring groups were indeed sufficiently substantial as to make such a weapon a theoretical possibility, it is highly doubtful that either side would have the technological wherewithal, capacity, resources and expertise to develop such a weapon.

After all, Israel is a wealthy country with a highly developed high-tech economy with an advanced armaments industry and is a world leader in scientific and technology research, not to mention receiving billions of dollars in military aid annually from the USA alone.

South Africa was also regarded as a developed economy during the heyday of apartheid, when this research was supposedly conducted, though it is today usually classed as ‘developing’.[75]

Sudan, on the other hand, is a technologically backward Third World economy. The prospect of either side in the conflict developing a novel form of biological weapon is therefore exceedingly remote.

A similar objection applies to the authors’ suggestion that, even in multiracial America, supposedly comparatively “immune to attack from race bombs from an outside source” on account of its “large racially diverse population”, there may still be a degree of threat from “terrorist groups within our country” (p255).

It is true that there may well be terrorist groups in the USA that do indeed harbour genocidal intent. Indeed, black nationalist groups like the Nation of Islam and the Black Hebrew Israelites have engaged in openly genocidal mass murders of white Americans, while white nationalist groups, though politically very marginal, have also been linked to terror attacks and racially motivated murders, albeit isolated, sporadic and on a very small scale, at least in recent decades.

However, it is extremely unlikely that these marginal extremists, whose membership is largely drawn from the most uneducated and deprived strata of society, would have the technical knowledge and resources to build a ‘race bomb’ of the sort envisaged by Sarich and Miele, especially since such weapons remain only a theoretical possibility and are not known to have been successfully developed anywhere in the world, even in South Africa or Israel.

At any rate, even among relatively genetically distinct and unmixed populations, any ‘race bomb’ would, Sarich and Miele rightly report, inevitably lack “pinpoint accuracy” given the only very minimal genetic differentiation observed among human races, a key point that they discussed at length earlier in their book (p253).

Therefore, Sarich and Miele conclude:

“[Only those] extremists crazy enough to view large numbers of dead among their own nation, race or ethnic group as ‘acceptable losses’ in some unholy holy war to save their own group would risk employing such a device” (p253-4).

Unfortunately, some “extremists” are indeed just that “crazy” and extreme, and these “extremists” include not only terrorist groups, but also sometimes governments as well.

Indeed, every major war in recent history has, by definition, involved the main combatant regimes being all too willing to accept “large numbers of dead among their own nation, race, or ethnic group as ‘acceptable losses’” – otherwise, of course, they would be unlikely to qualify as ‘major’ wars.

Thus, Sarich and Miele conclude:

Even if race bombs do not have the pinpoint accuracy desired, they have the potential to do great harm to people of all races and ethnic groups” (p253).
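The arithmetic behind this conclusion is easy to sketch. Suppose – purely for illustration; the marker frequencies and population sizes below are invented by me, not taken from the book – that such a weapon could be keyed to a single genetic marker present in 80% of the target group but also carried by 10% of the attacker’s own group:

```python
# Hypothetical marker frequencies and population sizes (illustrative only).
target_freq, attacker_freq = 0.80, 0.10
target_pop = attacker_pop = 10_000_000

spared_targets = target_pop * (1 - target_freq)  # targets lacking the marker
own_casualties = attacker_pop * attacker_freq    # attacker's own side carrying it

print(f"Targets spared: {spared_targets:,.0f}")
print(f"Attacker's own casualties: {own_casualties:,.0f}")
```

On these invented numbers, a fifth of the target population would be untouched while a million of the attacker’s own people would be struck – a crude illustration of why, given only minimal genetic differentiation between human races, such a weapon could never achieve ‘pinpoint accuracy’ and would harm people of all groups.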

Political Implications?

Aside from their somewhat sensationalist discussion of the prospect of ‘race bombs’, Sarich and Miele, in their final chapter, also discuss perhaps more realistic scenarios of how an understanding (or misunderstanding) of the nature and biology of race differences might affect the future of race relations in America, the west and beyond.

In particular, they identify three possible future ‘scenarios’, namely:

  1. Meritocracy;
  2. Affirmative Action, Race Norming and Quotas; and
  3. Resegregation.

A fourth possibility that they do not discuss is freedom of association, as championed by libertarians.

Under this principle, which largely prevailed in the USA prior to the 1964 Civil Rights Act (and in the UK prior to the 1968 Race Relations Act), any private individual or corporation (but not the government) would be free to discriminate against any person or group he or she wished on any grounds whatsoever, howsoever racist or downright irrational.

Arguably, such a system would, in practice, result in something very close to meritocracy. Any employer that discriminated irrationally against a certain demographic would be outcompeted, and hence driven out of business, by competing employers that instead chose the best candidates for the job. Competitors might even preferentially employ members of the disfavoured group, precisely because, being refused work by other employers, the latter would be willing to work for lower wages, thereby cutting costs and enabling these employers to undercut and outcompete their more prejudiced rivals.

In practice, however, some degree of discrimination would likely remain, especially in the service industry, not least because, not just employers, but consumers themselves might discriminate against service providers of certain demographics.[76]

The authors, for their part, deplore the effects of affirmative action in higher education.

Relying on Sarich’s own direct personal experience as a professor at the University of California at Berkeley, where affirmative action was openly practiced from 1984 until 1996, when it was, at least in theory,[77] discontinued following an amendment to the California state constitution prohibiting the practice in public education, government employment and contracting, they report that it resulted in:

An Apartheid-like situation – two student bodies separated by race/ethnicity and performance who wound up, in the main, in different courses, pursued different majors, and had minimal social interactions but maximum resentment” (p245)

Thus, they conclude:

It is, frankly, difficult to imagine policies that could have been more deliberately crafted or better calculated to exacerbate racial and ethnic tensions, discourage individual performance among all groups, and contribute to the decay of a magnificent educational institution” (p245)

The tone adopted here suggests that the authors also very much disapprove of the third possible scenario that they discuss, namely resegregation.

However, they also acknowledge that this process is already occurring in modern America, and seem pessimistic regarding the chances of halting or reversing it:

Despite or perhaps because of government-imposed quotas, society becomes increasingly polarized along racial lines… America increasingly resegregates itself. This trend can already be seen in housing, enrollment in private schools, racial composition of public schools, and political affiliation” (p246).

On the other hand, their own preference seems to be very much for what they call ‘meritocracy’.[78]

After all, they report:

Society… cannot level up – only down – and any such leveling is necessarily at the expense of individual freedom and, ultimately, the total level of accomplishment” (p246).

However, they acknowledge that a return to meritocracy, or at least the abolition of race preferences, would not be without its problems, not least of which is the inevitable degree of resentment on the part of those groups which perceive themselves as losing out in competition with other, better performing groups.

Thus, they conclude:

When we assess group representations with respect to the high-visibility pluses (e.g., high-paying jobs) and minuses (e.g., criminality) in any society, it is virtually guaranteed that they are not going to be equal – and that the differences will not be trivial” (p246)

On the other hand, race relations were not especially benign even in modern ‘affirmative action’-era America – what we might aptly term ‘post-post-racial America’ – when the utopian promises of the early Obama era went up in flames, along with much of America’s urban landscape, in the mostly peaceful BLM riots, which claimed at least nineteen lives and caused property damage estimated in the billions of dollars in 2020 alone.

Could things really get any worse were racial preferences abolished altogether? Is the urban black underclass really likely to riot because fewer upper-middle-class blacks are given places at Harvard that they did not really deserve?

In mitigation of any resentments that arise as a consequence of disparities in achievement between groups, Sarich and Miele envisage that, in the future:

Increasing societal complexity, by definition, means increasing the number of groups in that society to which a given individual can belong. This process tends to mitigate exclusive group identification and the associated resentment toward other groups” (p242).

In other words, Sarich and Miele seem to be saying that, instead of identifying only with their race or ethnicity, individuals might come to identify with other aspects of their identity, in respect of which their ‘own’ group would presumably perform rather better in competition with other groups.[79]

What aspects of their identity they have in mind, they do not say.

The problem with this is that, while individuals do indeed evince an in-group preference even in respect of quite trivial (or indeed wholly imaginary) differences, both historically and cross-culturally in the world today, ethnic identity has always been an especially important part of people’s identity, probably for basic biological reasons, rooted as it is in a perception of shared kinship.

In contrast, other aspects of a person’s identity (e.g. their occupation, which football team they support, their sex) tend to carry rather less emotional weight.[80]

In my view, a better approach to mitigating the resentments associated with the differing average performance of different groups is instead to emphasize the different spheres of attainment in which different groups excel.

After all, if it is indeed, as Sarich and Miele contend in the passage quoted above, “virtually guaranteed” that different groups have different levels of achievement in different activities, it is also “virtually guaranteed” that no group will perform either well or poorly at all these different endeavours.

Thus, blacks may indeed, on average, perform relatively poorly in academic and intellectual pursuits, at least as compared to whites and Asians. However, blacks seemingly perform much better in other spheres, not least in popular music and, as discussed above, in many athletic events.

Indeed, as discussed by blogger and journalist Steve Sailer in his fascinating essay for National Review, Great Black Hopes, African Americans actually consistently outperform whites in any number of spheres (Sailer 1996).

As amply demonstrated by Herrnstein and Murray in The Bell Curve (reviewed here), intellectual ability, as measured by IQ, indeed seems to be of particular importance in determining socioeconomic status and income in modern economically developed technologically advanced societies, such as the USA, and, in this respect, blacks perform relatively poorly.

However, popular entertainers and elite athletes, while not necessarily possessing high IQs, nevertheless enjoy enormous social and cultural prestige in modern western society, far beyond that enjoyed by lawyers, doctors, or even leading scientists, playwrights, artists and authors.

More children grow up wanting to be professional footballers or pop stars than grow up wanting to be college professors or research scientists, and, whereas almost everyone, howsoever estranged from popular culture (myself included), could nevertheless name any number of famous pop stars, actors and athletes, many of them black, the vast majority of leading intellectuals and scientists are all but unknown to the general public.

Indeed, even those working in other ostensibly high-IQ fields, like law and medicine, and perhaps science and academia too, are much more likely to follow sports, and watch popular movies and TV than they are to, say, recreationally read scientific journals or even popular science books and magazines.

In other words, although it is the only example the authors give in the passage quoted above, “high-paying jobs” are far from the only example of “high-visibility pluses” in which different ethnic groups perform differently, and nor are they the most “high-visibility” of such “pluses”.

Indeed, the sort of “high-paying jobs” that Sarich and Miele likely have in mind are not even the only type of “high-paying jobs”, though they may be the most numerous such jobs, since elite athletes and entertainers, in addition to enjoying enormously high social and cultural prestige, also tend to be very well-paid.

In short, the idea that intellectual ability is the sole, or even the most important, determinant of social worth and prestige, is an affectation largely restricted to those who, like Sarich and Miele, and also many of their most vocal critics like Gould, happen to work in science, academia and other spheres where intellectual ability is indeed at a premium.

Most women, in my experience, would rather be thought beautiful (or at least pretty) than intelligent; most men would rather be considered athletic, tough, strong, charismatic and manly than a ‘brainy nerd’ – and, when it comes to being considered tough, athletic, manly and charismatic, black males arguably perform rather better than whites or Asians!

Mating, Miscegenation, Race Riots and Rape

Finally, it is perhaps worth noting that Sarich and Miele also discuss, and, perhaps surprisingly, caution against, another widely touted supposed panacea to racial problems, namely mass miscegenation and intermarriage.

On this view, all racial animosities will disappear in just a few generations if we all just literally fornicate them out of existence by indiscriminately reproducing with one another, thereby ensuring that all future generations are of mixed race and hence indistinguishable from one another.

If this were the case then, in the distant future, race problems would not exist simply because distinguishable races would not exist, and there would only be one race – the human race – and we would all presumably live happily ever after in a benign and quite literally ‘post-racial’ utopia.

In other words, racial conflict would disappear in the future because the claim of the racial egalitarians and race deniers – namely, that there are no human races, but rather only one race, the human race – the very claim that Sarich and Miele have devoted their entire book to rejecting, would ultimately come to be true.

Of course, one might question whether this outcome, even if achievable, would indeed be desirable, not least since it would presumably result in the loss, or at least dilution, of the many unique traits and abilities of the different races, including those that Sarich and Miele have discussed in previous chapters.

At any rate, given the human tendency towards assortative mating, especially with respect to traits such as race and ethnicity, any such allegedly post-racial utopia would certainly be long in coming. A more likely medium-term outcome would be something akin to the pigmentocracies endemic throughout much of Latin America, where racial categories are indeed more fluid and continuous, but racial differences are certainly still apparent, still very much associated with income and status, and racial problems arguably not noticeably ameliorated.

Yet Sarich and Miele themselves raise a different, and perhaps less obvious, objection to racial miscegenation as a panacea for racial animosity and conflict.

Far from ending such animosity, they contend that, at least in the short term, miscegenation may actually exacerbate racial conflict:

Paradoxically, intermarriage, particularly of females of the majority group with males of a minority group, is the factor most likely to cause some extremist terrorist group to feel the need to launch such an attack” (p255).

Thus, they observe that:

All around the world, downwardly mobile males who perceive themselves as being deprived of wealth, status, and especially females by up-and-coming members of a different race are ticking time bombs” (p248).

Indeed, it is not just intermarriage that ignites racial animosity. Other forms of interracial sexual contact may be even more likely to provoke a violent response, especially where it is alleged, often falsely, that the sexual contact in question was coercive.

Thus, in the USA, allegations of interracial rape seem to have been the most frequent precursor to full-blown race riots. Early twentieth-century riots in Springfield, Illinois in 1908, in Omaha, Nebraska in 1919, in Tulsa, Oklahoma in 1921 and in Rosewood, Florida in 1923 all seem to have been ignited by rumours or allegations that a white woman had been raped by a black man.

Meanwhile, Britain’s first major modern race riot, the 1958 Notting Hill riot, began with a public argument between an interracial couple, when white passers-by joined in on the side of the white woman against her black Jamaican husband (and pimp) before then turning on them both.

More recently, the 2005 Birmingham riot, which, in a dramatic reflection of the demographic transformation of Britain, did not involve white people at all, was ignited by the allegation that a black girl had been gang-raped by South Asian males.

Meanwhile, in a dramatic proof that even ‘civilized’ white western Anglo-Celts (or at least semi-civilized Scousers and Aussies) are still not above taking to the streets when they perceive their womenfolk (and their reproductive interests) as under threat, both the 2005 Cronulla riots in Sydney, Australia, and the 2023 attack on a four-star hotel housing refugees in Kirkby, Merseyside were ignited by allegations that Middle Eastern men had been sexually harassing, or at least propositioning, local white girls.

Likewise, in Britain and beyond, the spectre of ‘Muslim grooming gangs’ sexually exploiting and pimping underage white girls in cities throughout the North of England has ignited anti-Muslim sentiment seemingly to a far greater degree than has an ongoing wave of terrorist attacks in the same country in which multiple people have been killed.

Likewise, the spectre of interracial rape also loomed large in the justifications offered on behalf of the Reconstruction-era Ku Klux Klan for its various atrocities, which were supposedly motivated by the perceived need to protect the ostensible virtue and honour of white women in the face of black predation.

More recently, in 2015, Dylann Roof allegedly shouted “You rape our women and you’re taking over our country” before opening fire on the predominantly black congregation at a church in South Carolina, killing nine people.

Why then is the spectre of interracial sexual contact, especially rape, so likely to provoke racist attacks?

For Sarich and Miele, the answer is obvious:

Viewed from the racial solidarist perspective, intermarriage is an act of race war. Every ovum that is impregnated by the sperm of a member of a different race is one less of that precious commodity to be impregnated by a member of its own race and thereby ensure its survival” (p256).

This so-called “racial solidarist perspective” also represents, of course, a crudely group-selectionist understanding of male reproductive competition – but one that, though biologically questionable at best, is, in simplified form, all but pervasive among racialists.

What applies, on this view, to intermarriage surely applies to an even greater degree to other forms of miscegenation, such as casual sex and rape, where the father takes no responsibility for raising any mixed-race offspring that result, this burden being left instead in the hands of the mother’s own ethnic community.

Thus, as sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here):

It is no accident that the most explosive aspect of interethnic relations is sexual contact across ethnic (or racial) lines” (The Ethnic Phenomenon: p75). 

Competition over reproductive access to fertile females is, after all, Darwinian conflict in its most direct and primordial form.

One is thus reminded of ‘Robert’, a character in Platform, a novel by the controversial but celebrated contemporary French author Michel Houellebecq, who asserts that:

“What is really at stake in racial struggles… is neither economic nor cultural, it is brutal and biological: It is competition for the cunts of young women” (Platform: p82). 

_____________________

Endnotes

[1] Of course, even if race differences were restricted to “a few highly visible features” (e.g. skin colour, nose shape, body size), it may still be possible for forensic scientists to identify the race of a subject from his DNA. They would simply have to look at those portions of the genome that code for these “few highly visible features”. However, there would then be no correlation with other segments of the genome, and genetic differences between races would be restricted to the few genes coding for these “few highly visible features”.
In fact, however, there is no reason to believe that races differ to a greater degree in externally visible traits (skin colour, nose shape, hair texture, stature etc.) than they do in any other traits, be they physiological or indeed psychological. It is just the externally visible traits with which we are most familiar and which are most difficult to dismiss as illusory, or explain away as purely cultural in origin, because we see them before us everyday whenever we are confronted with a person of a different race. In contrast, other traits are less obvious and apparent, and hence easier for race deniers to deny, or, in the case of behavioural differences, dismiss as purely cultural in origin.

[2] Here, the popular notion that serial killers are almost invariably white males was probably a factor in why the police were initially searching for a white suspect in this case. This stereotype was likely also a factor in the delay in apprehending another serial killer, the so-called ‘DC sniper’, whose crimes occurred around the same time, and who was also profiled as likely being a white man.
In fact, however, unlike many other stereotypes regarding race differences in crime rates, this particular stereotype is wholly false. While it is, of course, true that serial killers are disproportionately male, they are not disproportionately white. On the contrary, in the USA, blacks are actually overrepresented by a factor of two among convicted serial killers, as they are also overrepresented among perpetrators of other forms of violent crime (Walsh 2005).
Implicated in both cases were inaccurate offender profiles which, among other errors, had labelled the likely offender as a white male. Yet psychological profiling of this type is largely, if not wholly, a pseudoscience.
Thus, one meta-analysis found that criminal profilers often did not perform better, or performed only marginally better, at predicting the characteristics of offenders than did control groups composed of non-expert laypeople (Snook et al 2007).
As Steve Sailer has pointed out, offender profiling is, ironically, most unreliable where it is also most fashionable – psychological profiles of serial killers etc., which regularly feature in movies, TV and crime literature – but very unfashionable where it is most reliable – e.g. a young black male hanging around a certain area is very likely up to no good (Sailer 2019).
The latter, of course, involves so-called racial profiling, which is very politically unfashionable, though it also represents a much more efficient and effective use of police resources than ignoring factors such as race, age and sex. Of course, it also involves, if you like, ‘age profiling’ and ‘sex profiling’ as well, but these are much less controversial, though they rely on the exact same sort of statistical generalizations, which are again indeed accurate at the aggregate statistical level, though often unfair on individuals to whom the generalizations do not apply.

[3] The one-drop rule seems to have originated as a means of maintaining the racial purity of the white race. Thus, people of only slight African ancestry were classed as black (or, in those days, as ‘Negro’) precisely in order to prevent them passing and thereby infiltrating and adulterating the white gene pool, with interracial marriage, cohabitation and sexual relations, not only socially anathema, but also often explicitly prohibited by law.
Despite this white racialist origin, today the one-drop rule continues to operate in North America. It seems to be primarily maintained by the support of two interest groups, namely, first, mixed-race Americans, especially those of middle-class background, who want to benefit from discriminatory affirmative action programmes in college admissions and employment; and, second, self-styled ‘anti-racists’, who want to maintain the largest possible coalition of non-whites against the hated and resented white oppressor group.
Of course, some white racists may also still support the ‘one-drop rule’, albeit for very different reasons, and there are endless debates on some white nationalist websites as to who precisely qualifies as ‘white’ (e.g. Armenians, Southern Italians, people from the Balkans and Caucasus, but certainly not Jews). However, white racists are, today, of marginal political importance, save as bogeymen and folk devils, and hence have minimal influence on mainstream conceptions of race.

[4] An even more problematic term is the currently fashionable but detestable term people of colour, which (like its synonymous but now politically-incorrect precursor, coloured people) manages to arbitrarily group together every race except white Europeans – an obviously highly Eurocentric conception of racial identity, but one which ironically remains popular with leftists because of its perceived usefulness in fomenting a coalition of all non-white races against the demonized white oppressor group.
The term also actually makes very little sense, save in this social, political and historical context. After all, in reality, white people are just as much ‘people of colour’ as people of other races. They are just a different colour, and indeed, since many hair and eye colours are largely, if not wholly, restricted to people of white European descent, whites arguably have a stronger claim to being ‘people of colour’ than do people of most other races.

[5] Famously, and rather bizarrely, race in South Africa was said to be determined, at least on a practical day-to-day basis, by the so-called pencil test, whereby a pencil was placed in a person’s hair, and, if it fell to the ground, they were deemed white, whereas if it remained in their hair, held by the kinky hair characteristic of sub-Saharan Africans, then they were classed as black or coloured.

[6] Defining race under the Nuremberg Laws was especially problematic, since Jewish people, unlike, say, black Africans, are not obviously phenotypically distinguishable from other white Europeans, at least not in every case. Thus, the Nuremberg laws relied on paper evidence of ancestry rather than mere physical appearance, and distinguished degrees of ancestry, with mischlings of the first and second degrees having differing circumscribed political rights.

[7] Racial identity in the American South during the Jim Crow era, like in America as a whole today, was determined by the so-called one-drop rule. However, the incorporation of other ethnicities into this uniquely American biracial system was potentially problematic. Thus, in the famous case of US v Bhagat Singh Thind, Bhagat Singh Thind, an Indian Sikh, argued that, being both Caucasian according to the anthropological classification of the time and, as a North Indian of high-caste origin, Aryan too, he ought to be eligible for naturalization as an American citizen under the overtly racially discriminatory naturalization laws then in force. He was unsuccessful. Similarly, in Ozawa v United States, a person of Japanese ancestry was deemed not to be white under the same law.
Although I am not aware of any caselaw on the topic, presumably people of Middle Eastern ancestry, or partially of Middle Eastern ancestry, or North African ancestry, would have qualified as ‘white’. For example, I am not aware of any Jewish people, surely the largest such group in America at the time (albeit, in the vast majority of cases, of mixed European and Middle Eastern ancestry), being denied naturalization as citizens.
Indeed, today, such groups are still classed as ‘white’ in the US census, much to their apparent chagrin, but a new MENA category is scheduled to be added to the US census in 2030. This new category has been added at the behest of MENA people themselves, aghast at having had to identify as white in earlier censuses, and strangely all too ready to abandon their ostensible ‘white privilege’.
This earlier categorization of Middle-Eastern and North African people as white suggests a rather more inclusive definition of ‘white’ than is applied today, with more and more groups rushing to repudiate their whiteness, possibly in order to qualify as an oppressed group and hence benefit from affirmative action and other forms of racial preference, and certainly in order to avoid the stigma of whiteness. White privilege, it seems, is not all it’s cracked up to be.

[8] One of the main criticisms of the Dangerous Dogs Act 1991, rushed through Parliament in the UK amid a media-led moral panic over canine attacks on children, was the difficulty of distinguishing, or drawing the line between, one breed and another. Obviously, similar problems emerge in determining the race of humans.
Indeed, the problems may even be greater, since the morphological differences (and also seemingly the genetic differences: see above) between human races are much smaller in magnitude than those between some dog breeds.
On the other hand, however, the problems may be even greater for identifying dog breeds, because, except for a few pedigreed purebreds, most domestic dogs are mixed-breed ‘mongrels’ to some degree. In contrast, excepting a few recently formed hybrid populations (such as African-Americans and Cape Coloureds), and clinal populations at the boundaries of the continental races (such as populations from the Horn of Africa), most people around the world are of monoracial ancestry, largely because, until recent migrations facilitated by advances in transport technology (ships, aeroplanes etc.), people of different races rarely came into contact with one another, and, where they did, interracial relationships often tended to be stigmatized, if not strictly prohibited (though this certainly did not stop them happening altogether).
In addition, whereas human races were formed deep in prehistory, most dog breeds (excepting a few so-called ‘basal breeds’) seem to be of surprisingly recent origin.

[9] For example, when asked to identify the parent of a child from a range of pictures, children match the pictured children with a parent of the same race, rather than those of the same body-size/body-type or wearing similar clothing. Similarly, when asked to match pictures of children with the pictures of the adults whom they will grow up to become, children again match the pictures by race, not clothing or body-build (Hirschfeld 1996).

[10] In the notes for the previous chapter, they do, as I have already discussed, cite the work of Lawrence Hirschfeld as authority for the claim that even young children recognize the hereditary and immutable nature of race differences. It may be that Sarich and Miele have his studies in mind when they write of evidence for “a species-wide module in the human brain that predisposes us to sort the members of our species into groups based on appearance”.
However, as I understand it, Hirschfeld doesn’t actually argue that his postulated group classification necessarily sorts individuals into groups “based on appearance [emphasis added]” as such. Rather, he sees it as a module designed to classify people into ‘human kinds’, but not necessarily by race. It could also, as I understand it, apply to kinship groups and ethnicities.
Somewhat analogously, anthropologist Francisco Gil-White argues that we have a tendency to group individuals into different ethnicities as a by-product of a postulated ‘species-recognition module’. In other words, we mistakenly classify members of different ethnicities as members of different species (i.e. what some social scientists have referred to as pseudo-speciation) because different ethnicities resemble different species in so far as, just as species breed true, so membership of a given ethnicity is passed down in families, and, just as members of different species cannot interbreed, so individuals are generally encouraged to mate endogamously, i.e., within their own group (Gil-White 2001).
Although Gil-White’s argument is applied to ethnic groups in general, it is perhaps especially applicable to racial groups, since the latter have a further feature in common with different species, namely individuals of different races actually look different in terms of inherited physiological characters (e.g. skin colour, facial morphology, hair texture, stature), as, of course, do different species.
Races are indeed ‘incipient species’, and, until as recently as the early twentieth century, biologists and anthropologists seriously debated the question as to whether the different human races did indeed constitute different species.
For example, Darwin himself gave serious and respectful consideration to this matter in his chapter ‘On the Races of Men’ in The Descent of Man before ultimately concluding that the different races were better described as subspecies.
More recently, John R Baker also gave a fascinating and balanced account of the evidence bearing on this question in his excellent book Race, which I have reviewed here (see this section of my review in particular).

[11] On the other hand, in his paper, ‘An integrated evolutionary perspective on ethnicity’, controversial evolutionary psychologist Kevin Macdonald disagrees with this conclusion, citing personal communication from geneticist and anthropologist Henry Harpending for the argument that:

“Long distance migrations have easily occurred on foot and over several generations, bringing people who look different for genetic reasons into contact with each other. Examples include the Bantu in South Africa living close to the Khoisans, or the pygmies living close to non-pygmies. The various groups in Rwanda and Burundi look quite different and came into contact with each other on foot. Harpending notes that it is ‘very likely’ that such encounters between peoples who look different for genetic reasons have been common for the last 40,000 years of human history; the view that humans were mostly sessile and living at a static carrying capacity is contradicted by history and by archaeology. Harpending points instead to ‘starbursts of population expansion.’ For example, the Inuits settled in the arctic and exterminated the Dorsets within a few hundred years; the Bantu expansion into central and southern Africa happened in a millennium or less, prior to which Africa was mostly the yellow (i.e., Khoisan) continent, not the black continent. Other examples include the Han expansion in China, the Numic expansion in northern Africa [correction: actually in the Great Basin region of North America], the Zulu expansion in southern Africa during the last few centuries, and the present day expansion of the Yanomamo in South America. There has also been a long history of invasions of Europe from the east. ‘In the starburst world people would have had plenty of contact with very different looking people’” (Macdonald 2001: p70).

[12] A commenter on an earlier version of this article, Daniel, suggested that our tendency to group individuals by race could represent a by-product of a postulated facial recognition faculty, which some evidence suggests is a domain-specific module or adaptation, localized in a specific area of the brain, the fusiform gyrus or fusiform facial area, injury or damage to which sometimes results in an inability to recognize faces (or prosopagnosia). Thus, he writes:

“Any two human faces are about as similar in appearance as any two bricks. But humans are far more sensitive to differences in human faces than we are to differences in bricks. The evolutionary psychologist would infer that being very good at distinguishing faces mattered more to our ancestors’ survival than being very good at distinguishing bricks. Therefore we probably have a face-recognition module in our brains.”

On this view, race differences, while they may be real, are not so obvious, or rather would not be so obvious were we not so highly attuned to recognizing minor differences in facial morphology in order to identify individuals.
This idea strikes me as very plausible. Certainly, when we think of racial differences in physical appearance, we tend to think of facial characteristics (e.g. differences in the shapes of noses, lips, eyes etc.).
However, this probably also reflects, in part, the fact that, at least in western societies, in ordinary day-to-day life, other parts of our bodies are usually hidden from view by clothing. Thus, at least according to physiologist John Baker in his excellent book Race (which I have reviewed here), racial groups, especially the Khoisan of Southern Africa, also differ considerably in their external genitalia, but these differences would generally be hidden from view by clothing.
Baker also claims that races differ substantially in the shape of their skulls, claiming:

“Even a little child, without any instruction whatever, could instantly separate the skulls of [Eskimos] from those of [Lapps]” (Race: p427).

Of course, facial differences may partly be a reflection of differences in skull shape, but I doubt an ability to distinguish skulls would reflect a byproduct of a facial recognition module.
Likewise, Pygmies differ from other Africans primarily, not in facial morphology, but in stature.
Further evidence that we tend to focus on differences in facial morphology only because we are especially attuned to such differences, whether by practice or innate biology, is provided by the finding that artificial intelligence systems are able to identify the race of a subject through internal x-rays of their bodily organs, even where humans, including trained medical specialists, are incapable of detecting any difference (Gichoya et al 2022).
This also, incidentally, contradicts the popular notion that race differences are largely restricted to a few superficial external characteristics, such as skin-colour, hair texture and facial morphology. In reality, there is no reason in principle to expect that race differences in internal bodily traits (e.g. brain-size) would be of any lesser magnitude than those in external traits. It is simply that the latter are more readily observable on a day-to-day basis, and hence more difficult to deny.

[13] If racism was not a European invention, it may nevertheless have become particularly virulent and extreme among Europeans in the nineteenth century. One interesting argument is that it was, paradoxically, Europeans’ very commitment to such notions as universal rights and human equality that led them to develop and embrace an ideology of racial supremacism and inequality. This is because, whereas other peoples and civilizations simply took such institutions as slavery for granted, seeing them as entirely unproblematic, Europeans, due to their ostensible commitment to such lofty notions as universal rights and equality, felt a constant need to justify slavery to themselves. Thus, theories of racial supremacy were invented as just such a justification. As sociologist-turned-sociobiologist Pierre van den Berghe explains in his excellent The Ethnic Phenomenon (which I have reviewed here):

“In hundreds of societies where slavery existed over several thousand years, slavery was taken for granted and required no apology… The virulent form of racism that developed in much of the European colonial and slave world was in significant part born[e] out of a desire to justify slavery. If it was immoral to enslave people, but at the same time it was vastly profitable to do so, then a simple solution to the dilemma presented itself: slavery became acceptable if slaves could somehow be defined as somewhat less than fully human” (The Ethnic Phenomenon: p115).

[14] Although the differences portrayed undoubtedly reflected real racial differences between populations, the stereotyped depictions also suggest that they were used as a means of identifying and distinguishing between different peoples and ethnicities, and hence may have been exaggerated as a kind of marker for race or nationality. Thus, classicist Mary Lefkowitz writes:

“Wall paintings are not photographs, and to some extent the different colors may have been chosen as a means of marking nationality, like uniforms in a football game. The Egyptians depicted themselves with a russet color, Asiatics in a paler yellow. Southern peoples were darker, either chocolate brown or black” (History Lesson: A Race Odyssey: p39).

In reality, since North African Caucasoids and sub-Saharan Africans were in continual contact down the Nile Valley, this also almost certainly means that they interbred with one another, diluting and blurring the phenotypic differences between them. In short, if the Egyptians weren’t wholly Caucasoid, so also the Nubians weren’t entirely black.

[15] Other historical works referring to what seems to be the same stele translate the word that Sarich and Miele render as ‘Negro’ instead as ‘Nubian’, and this is probably the more accurate translation. The specific Egyptian word used seems to have been a variant of ‘nHsy’ or ‘Nehesy’, the precise meaning and connotations of which are apparently a matter of some controversy.
Incidentally, whether the Nubians are indeed accurately to be described as ‘Negro’ is perhaps debatable. Although certainly depicted by the Egyptians as dark in complexion, and sometimes as having other Negroid features, as indeed they surely did in comparison to the Egyptians themselves, they were also in long and continued contact with the Egyptians, with whom they surely interbred. It is therefore likely that they represented, like contemporary populations from the Horn of Africa, a clinal population, as did the Egyptians themselves, since this continual contact would inevitably have resulted in some gene flow in both directions between their respective populations.
Whereas the vast Sahara Desert represented, as Sarich and Miele themselves discuss, a formidable barrier to population movement and gene flow and hence a clear boundary between what were once called the Negroid and Caucasoid races, population movement, and hence gene flow, up and down the Nile valley in Northeast Africa was much more fluid and continuous.

[16] Actually, the English word ‘caste’, deriving from the Portuguese ‘casta’, conflates two distinct but related concepts in India, namely, on the one hand, ‘Varna’ and, on the other, ‘Jāti’. Whereas the former term, ‘Varna’, refers to the four hierarchically organized classes (plus the ‘untouchables’ or ‘dalits’, who strictly are considered so degraded and inferior that they do not qualify as a caste and exist outside the caste system), and may even be of ancient origin among the proto-Indo-Europeans, the latter term, ‘Jāti’, refers to the many thousands of theoretically strictly endogamous occupational groups within the separate Varna.
As for Sarich and Miele’s claim that Varna are “as old as Indian history itself”, history is usually distinguished from prehistory by the invention of writing. By this criterion, Indian history might be argued to begin with the ancient Indus Valley Civilization. However, their script has yet to be deciphered, and it is not clear whether it qualifies as a fully developed writing system.
By this measure, the Indian caste system is not “as old as Indian history itself”, since the caste system is thought to have been imposed by Aryan invaders, who arrived in the subcontinent only after the Indus Valley Civilization had fallen into decline, and may indeed have been instrumental in bringing to an end the remnants of this civilization. However, arguably, at this time, India was not really ‘India’, since the word ‘India’ is of Sanskrit origin and therefore arrived only with the Aryan invaders themselves.

[17] There is also some suggestion that the vanarāḥ, who feature in the Ramayana and are usually depicted as monkey-like creatures, may originally have been conceived as a racist depiction of the relatively darker-complexioned and wider-nosed Southern and indigenous Indians whom the Aryan invaders encountered in the course of their conquests, as may also be true of the demonic rākṣasāḥ and asurāḥ, including the demon king Ravana, who is described as ruling from his island fortress of Laṅkā, which is generally equated with the island of Sri Lanka, located about 35 miles off the coast of South India.
These ideas are, it almost goes without saying, extremely politically incorrect and unpopular in modern India, especially in South India, since South Indians today, despite different religious traditions, are not noticeably less devout Hindus than North Indians, and hence revere the Ramayana as a sacred text to a similar degree.

[18] One is tempted to reject this claim – namely that the use of the Sanskrit word for ‘colour’ to designate ‘caste’ has no connection to differences in skin colour as between the Indo-Aryan conquerors and the Dravidian populations whom they most likely subjugated – as mere politically correct apologetics. Indeed, despite its overwhelming support in linguistics, archaeology, genetics, and even in the histories provided in the ancient Hindu texts themselves, the very concept of an Indo-European conquest is very politically incorrect in modern India. The notion is perceived as redolent of the very worst excesses of both caste snobbery and the sort of notions of white racial superiority that were popular among Europeans during the colonial period. Moreover, as we have seen, to this day, castes differ not only genetically, and in a manner consistent with the Aryan invasion theory, but also in skin tone (Jazwal 1979; Mishra 2017).
On the other hand, however, some evidence suggests that the association of caste with colour actually predates the Indo-Aryan conquest of the Indian subcontinent and originates with the original proto-Indo-Europeans. Thus, in his recent book The Horse, the Wheel and Language, David W Anthony, discussing Georges Dumézil’s trifunctional hypothesis, reports that: 

“The most famous definition of the basic divisions within Indo-European society was the tripartite scheme of Georges Dumézil, who suggested there was a fundamental three-part division between the ritual specialist or priest, the warrior and the ordinary herder/cultivator. Colors may have been associated with these three roles: white for the priest, red for the warrior and black or blue for the herder/cultivator” (The Horse, the Wheel and Language: p92).

It is from this three-fold social hierarchy that the four-varna Indian caste system may have derived. Similarly, leading Indo-Europeanist JP Mallory observes that “both ancient India and Iran expressed the concept of caste with the word for colour” and that:

“Indo-Iranian, Hittite, Celtic and Latin ritual all assign white to priests and red to the warrior. The third function would appear to have been marked by a darker colour such as black or blue” (In Search of the Indo-Europeans: p133).

This would all tend to suggest that the association of caste (or at least occupation) with colour long predates the Indo-Aryan conquest of the subcontinent and hence cannot be a reference to differences in skin colour as between the Aryan invaders and indigenous Dravidians.
On the other hand, however, it is not at all clear that the Indian caste system has anything to do with, let alone derives from, the three social groups that supposedly existed among the ancient proto-Indo-Europeans. On the contrary, the Indian caste system is usually considered as originating much later, after the Indo-European arrival in South Asia, and then only in embryonic form. Certainly, there is little evidence that the proto-Indo-European social structure was anything like as rigid as the later Indian caste system.
However, it is interesting to note that, even under the trifunctional hypothesis, a relatively lighter colour (white) is considered as having been assigned to the priestly group, and a darker colour to the lower-status agricultural workers, paralleling the probable differences in skin tone as between the Aryan conquerors and the indigenous Dravidians whom they encountered and likely subjugated.

[19] Neither Hartung nor his essay is mentioned in the rather cursory endnote accompanying this chapter (p265-6). This reflects a recurrent problem throughout the entire book. Thus, in the preceding chapter, ‘Race and History’, many passages appear in quotation marks, but it is not always entirely clear where the quotations are taken from, as the book’s endnotes are rudimentary, just giving a list of sources for each chapter as a whole, without always linking these sources to the specific passages quoted in the text. Unfortunately, this sort of thing is a recurrent problem in popular science books, and, in Sarich and Miele‘s defence, I suspect that it is the publishers and editors, rather than the authors, who are usually to blame.

[20] Thus, Hartung writes:

The [Jewish] Sages were quite explicit about their view that non-Jews were not to be considered fully human. Whether referring to ‘gentiles’, ‘idolaters’, or ‘heathens’, the biblical passage which reads ‘And ye my flock, the flock of my pasture, are men, and I am your God’ (Ezekiel 34:31; KJV) is augmented to read… ‘And ye my flock, the flock of my pastures, are men; only ye are designated ‘men’ (Baba Mezia 114b)” (Hartung 1995).

Similarly, Hartung quotes the Talmud as teaching:

“In the case of heathens; are they not in the category of adam? – No, it is written: And ye my sheep, the sheep of my pasture, are adam (man). Ye are called adam but heathens are not called adam. [Footnote reads:]… The term adam does not denote ‘man’ but Israelite. The term adam is used to denote man made in the image of God and heathens by their idolatry and idolatrous conduct mar this divine image and forfeit the designation adam” (Kerithoth 6b).

However, as Sarich and Miele, and indeed Hartung, are at pains to emphasize, lest they otherwise be attacked as antisemitic, the tendency to view one’s own ethnic group as the only ‘true’ humans on earth, is by no means exclusive to the ancient Hebrews, but rather is a recurrent view among many cultures across the world. As I have written previously:

“Ethnocentrism is a pan-human universal. Thus, a tendency to prefer one’s own ethnic group over and above other ethnic groups is, ironically, one thing that all ethnic groups share in common.”

Thus, as Hartung himself writes in the very essay from which Sarich and Miele quote, citing the work of anthropologist Napoleon Chagnon:

“The Yanomamo Indians, who inhabit the headwaters of the Amazon, traditionally believe that… they are the only fully qualified people on earth. The word Yanomamo, in fact, means man, and non-Yanomamo are viewed as a form of degenerated Yanomamo.”

Similarly, Sarich and Miele themselves write of the San Bushmen of Southern Africa:

“Bushmen sort all mammals into three mutually exclusive groups: ‘!c’ (the exclamation point represents the ‘clicking’ sound for which their language is well known) denotes edible animals such as warthogs and giraffes; ‘!ioma’ designates an inedible animal such as a jackal, a hyena, a black African, or a European white; the term ‘zhu’ is reserved for humans, that is, the Bushmen themselves” (p57).

[21] According to John Hartung’s analysis, Adam in the Genesis account of creation is best understood as, not the first human, but rather only the first Jew – hence the first true human (Hartung 1995). However, Christian Identity theology turns this logic on its head: Adam was not the first Jew, but rather the first white man.
As evidence, they cite the fact that the Hebrew word ‘Adam’ (אדם) seems to derive from the word for the colour red, which they, rather doubtfully, interpret as evidence for his light skin, and hence ability to blush. (A more likely interpretation for this etymology is that the colour was a reference to the red clay, or “dust of the ground”, from which man was, in the creation narrative of Genesis, originally fashioned: Genesis 2:7. Thus, the Hebrew word ‘Adam’, אדם, is also seemingly cognate with Adamah, אדמה, translated as ‘ground’ or ‘earth’, and the creation of Man from clay is a recurrent motif in Near Eastern creation narratives and mythology.)
Christian Identity is itself a development from British Israelism, which claims, rather implausibly, that indigenous Britons are themselves (among the) true Israelites, representing the surviving descendants of the ten lost tribes of Israel. Other races, then, are equated with the pre-Adamites, with Jews themselves, or at least Ashkenazim, classed as either Khazar-descended imposters, or sometimes more imaginatively equated with the so-called serpent seedline, descended from the biblical Cain, himself ostensibly the progeny of Eve when she (supposedly) mated with the Serpent in the Garden of Eden.
Christian Identity theology is, as you may have noticed, completely bonkers – rather like, well… theology in general, and Christian theology in particular.

[22] The Old Testament passage in question, Genesis 9:22-25, recounts how Ham sees his drunken father Noah naked, and so, as a consequence, Ham’s own son Canaan is cursed by Noah. Since seeing one’s father naked hardly seems a sufficient transgression to justify the curse imposed, some biblical scholars have suggested that the original version was censored by puritanical biblical scribes offended by or attempting to cover up its original content, which, it has been suggested, may have included a castration scene or possibly a description of incestuous male rape (or even the rape of his own mother, which, it has been suggested, might explain the curse upon his son Canaan, who is, on this view, the product of this incestuous union).
In some interpretations, the curse of Ham was combined, or perhaps simply confused, with the mark of Cain, which was itself interpreted as a reference to black skin. In fact, these are entirely separate parts of the Old Testament with no obvious connection to one another, or indeed to black people.
The link between the curse of Ham and black people is, however, itself quite ancient, long predating the Atlantic slave trade, and seems to have originated in the Talmud, whose authorship, or at least compilation, is usually dated to the sixth century CE, historian Thomas Gossett reporting:

“In the Talmud there are several contradictory legends concerning Ham—one that God forbade anyone to have sexual relations while on the Ark and Ham disobeyed this command. Another story is that Ham was cursed with blackness because he resented the fact that his father desired to have a fourth son. To prevent the birth of a rival heir, Ham is said to have castrated his father” (Race: The History of an Idea in America: p5).

This association may have originated because Cush, another of the sons of Ham (and an entirely different person from Canaan, his brother), was said to be the progenitor of, and to have given his name to, the Kingdom of Kush, located in the Nile valley, south of Ancient Egypt, whose inhabitants, the Kushites, were indeed known for their dark skin colour (though they were, by modern standards, probably best classified as mixed-race, or as a clinal or hybrid population, being in long-standing contact with the predominantly Caucasoid population of Egypt).
Alternatively, the association of Ham with black people may reflect the fact that the Hebrew word ‘ham’ (‘חָם’) has the meaning of ‘hot’, which was taken as a reference to the heat of Africa.
As you have probably gathered, none of this makes much sense. But, then again, neither does much Christian theology, or indeed much of the Old Testament (or indeed the New Testament) or theology in general, let alone most attempts to provide a moral justification for slavery consistent with Christian slave morality.
In fact, it is thought most likely that the curse of Ham was originally intended in reference to, not black people, but rather the Canaanites, since it was Canaan, not his brother Cush, against whom the curse was originally issued. This interpretation also makes much more sense in terms of the political situation in Palestine at the time this part of the Old Testament was likely authored, with the Canaanites featuring as recurrent villains and adversaries of the Israelites throughout much of the Old Testament. On this view, the so-called curse of Ham was indeed intended as a justification for oppression, but not of black people. Rather, it sought to justify the conquest of Canaan and subjugation of her people, not the later enslavement of blacks.

[23] Slavery had already been abolished throughout the British Empire even earlier, in 1833, long before Darwin published The Origin of Species, so the idea of Darwinism being used to justify slavery in the British Empire is a complete non-starter. (Darwin himself, for what it’s worth, was also opposed to slavery.)
Slavery did, admittedly, continue to be practised in other, non-English-speaking parts of the world, especially the non-western world, for some time thereafter. However, it is unlikely that Darwin’s theory of evolution was a significant factor in the continued practice of slavery in, say, the Muslim world, since most of the Muslim world has never accepted the theory of evolution. In short, slavery was longest maintained in precisely those regions (Africa, the Middle East) where Darwinian theory, and indeed a modern scientific worldview, was least widely accepted.

[24] Montagu, who seems to have been something of a charlatan and is known to have lied in correspondence regarding his academic qualifications, had been born with the very Jewish-sounding, non-Anglo name of Israel Ehrenberg, but had adopted the hilariously pompous, faux-aristocratic name ‘Montague Francis Ashley-Montagu’ in early adulthood.

[25] Less persuasively, Sarich and Miele also suggest that the alleged lesbianism, or bisexuality, of both Margaret Mead and Ruth Benedict may similarly have influenced their similar culturally-determinist theories. This seems, to me, to be clutching at straws.
Neither Mead nor Benedict was Jewish, or in any way ethnically alien, yet arguably each had an even greater direct influence on American thinking about cultural differences than did Boas himself. Boas’s influence, in contrast, was largely indirect – namely, through his students, such as Montagu, Mead and Benedict. Therefore, Sarich and Miele have to point to some other respect in which Mead and Benedict were outsiders. Interestingly, Kevin MacDonald makes the same argument in Culture of Critique (endnote 61: reviewed here), and is similarly unpersuasive.
In fact, the actual evidence regarding Benedict and especially Mead’s sexuality is less than conclusive. It amounts to little more than salacious speculation. After all, in those days, if a person was homosexual, then, given prevailing attitudes and laws, they probably had every incentive to keep their private lives very much private.
Indeed, even today, speculation about people’s private lives tends to be unproductive, simply because people’s private lives tend, by their very nature, to be private.

[26] Curiously, though he is indeed widely credited as the father of American anthropology, Boas’s own influence on the field seems to have been largely indirect. His students, Mead, Benedict and Montagu, all seem to have gone on to become more famous than he was, at least among the general public, and each certainly published works that became more famous, and more widely cited, than anything authored by Boas himself.
Indeed, Boas’s own work seems to be relatively little known, and little cited, even by those whom we could regard as his contemporary disciples. His success lay in training students and disciples, and in academic politicking, rather than in research.
Perhaps the only work of his that remains widely cited and known today is his work on cranial plasticity among American immigrants and their descendants, which has now been largely discredited.

[27] In the years that have passed since the publication of Sarich and Miele’s ‘Race: The Reality of Human Differences’, this conclusion, namely the out of Africa theory of human evolution, has been modified somewhat by the discovery that our early African ancestors interbred with other species (or perhaps subspecies?) of hominid, including those inhabiting Eurasia, such as Neanderthals and Denisovans, such that, today, all non-sub-Saharan African populations have some Neanderthal DNA.

[28] I think another key criterion in any definition of ‘race’, but one which is omitted from most definitions, is whether the differences in “heritable features” breed true – in other words, whether two parents both bearing the trait in question will reliably transmit it to their offspring. For example, among ethnically British people, since two parents, both with dark hair, may nevertheless produce a blond-haired offspring, hair colour is a trait which does not breed true. Whether a certain phenotype breeds true is, at least in part, a measure of the frequency of interbreeding with people of a different phenotype in previous generations. It may therefore change over time, with increasing levels of miscegenation and intermarriage. This criterion may be implied by Sarich and Miele’s requirement that, in order to qualify as ‘races’, populations must be “separated geographically from other… populations”.

[29] Actually, the definition of ‘species’ is rather more complex – and rather less precise: see the discussion of this matter in the relevant section of my review of John Baker’s Race.

[30] Using colour differences as an analogy for race differences is also problematic, and potentially confusing, for another reason – namely, that colour is already often conflated with race. Thus, races are often referred to by their (ostensible) colours (e.g. sub-Saharan Africans as ‘black’, Europeans as ‘white’, East Asians as ‘yellow’, Hispanics and Middle-Eastern populations as ‘brown’, and Native Americans as ‘red’), and ‘colour’ is sometimes even used as a synonym (or perhaps a euphemism) for race. Perhaps as a consequence, it is often asserted, falsely, that races differ only in skin colour. Using the electromagnetic spectrum as an analogy for race differences is likely only to exacerbate this already considerable confusion.

[31] Interestingly, however, different languages in widely differing cultures tend to put the boundaries between their different colour terms in roughly the same places, suggesting an innate disposition to this effect. Attempts to teach alternative colour terms, which divide the electromagnetic spectrum at different points, to peoples whose languages lack certain colour terms have shown that humans learn such classifications less readily than the familiar ones recognized in other languages. Also, although different cultures and languages have different numbers of colour terms, the colours recognized follow a distinct order, beginning with just ‘light’ and ‘dark’, followed by ‘red’ (see Berlin & Kay, Basic Color Terms: Their Universality and Evolution).

[32] As I have commented previously, perhaps a better analogy to illustrate the clinal nature of race differences is, not colour, but rather social class – if only because it is certain to cause cognitive dissonance and doublethink among leftist sociologists. As pioneering biosocial criminologist Anthony Walsh demands:

“Is social class… a useless concept because of its cline-like tendency to merge smoothly from case to case across the distribution, or because its discrete categories are determined by researchers according to their research purposes and are definitely not ‘pure’” (Race and Crime: A Biosocial Analysis: p6).

But the same sociologists and leftist social scientists who, though typically very ignorant of biology, insist race is a ‘social construct’ with no basis in biology, nevertheless continue to employ the concept of social class, or socioeconomic status, as if it were entirely unproblematic.

[33] In addition to the mountains that mark the Tibetan-Indian border, the vast, but sparsely populated tundra and Steppe of Siberia also provides a part of the boundary between what were formerly called the Caucasoid and Mongoloid races. As Steve Sailer has observed, one can get a good idea of the boundaries between races by looking at maps of population density. Those regions that are sparsely populated today (e.g. mountain ranges, deserts, tundra and, of course, oceans) were also generally incapable of supporting large population densities in ancient times, and hence represented barriers to gene flow and racial admixture.

[34] Indeed, even some race realists might agree that terms like ‘Italian’ are indeed largely social constructions and not biologically meaningful, because Italians are not obviously physically distinguishable from the populations in neighbouring countries on the basis of biologically inherited traits, such as skin colour, nose shape or hair texture – though they do surely differ in gene frequencies, and, at the aggregate statistical level, surely in phenotypic traits too. Thus, John R Baker in his excellent ‘Race’ (reviewed here) warns against what he terms “political taxonomy”, which equates the international borders between states with meaningful divisions between racial groups (Race: p119). Thus, Baker declares:

“In the study of race, no attention should be paid to the political subdivisions of the surface of the earth” (Race: p111).

Baker even offers a reductio ad absurdum of this approach, writing:

“No one studying blood-groups in Australia ‘lumps’ the aborigines… with persons of European origin; clearly one would only confuse the results by so doing” (Race: p121).

Yet, actually, the international borders between states do indeed often coincide with genetic differences between populations. This is because the same geographic obstacles (e.g. mountain ranges, rivers and oceans) that are relatively impassable and hence have long represented barriers to gene flow also represent both:

  1. Language borders, and hence self-identified ‘nations’; and
  2. Militarily defensible borders.

Indeed, Italians, one of the groups cited by Diamond, and discussed by Sarich and Miele, provide a good illustration of this, because Italy has obvious natural borders, that are defensible against invaders, that represent language borders, and that long represented a barrier to gene flow, being a peninsula, surrounded on three sides by the Mediterranean Sea, and on the fourth, its only land border, by the Alps, which represent the border between Italian-speakers and speakers of French and German.

[35] Likewise, in the example cited by Sarich and Miele themselves, the absence of the sickle-cell gene was, as Sarich and Miele observe, the “ancestral human condition” shared by all early humans before some groups subsequently went on to evolve the sickle-cell gene. Therefore, that any two groups do not possess the sickle-cell gene does not show that they are any more related to one another than to any other human group, including those that have evolved sickle-cell, since all early humans initially lacked this gene.
Moreover, Diamond himself refers not to the sickle-cell gene specifically, but rather to “antimalarial genes” in general, and there are several different genetic variants that likely evolved because they provide some degree of resistance to malaria, for example the alleles responsible for the thalassemias and glucose-6-phosphate dehydrogenase (G6PD) deficiency, as well as certain other hemoglobin variants. These quite different adaptations evolved independently in different populations where malaria was common, and indeed have different levels of prevalence in different populations to this day.

[36] Writing in the early seventies, long before the sequencing of the human genome, Lewontin actually relied, not on the direct measurement of genetic differences between, and within, human populations, but rather indirect markers for genetic differences, such as blood group data. However, his findings have been broadly borne out by more recent research.

[37] However, in fact, similar points had been made soon after Lewontin’s original paper had been published (Mitton 1977; 1978).

[38] Actually, while many people profess to be surprised that, depending on precisely how measurements are made, we share about 98% of our DNA with chimpanzees, this has always struck me as, if anything, a surprisingly low figure. After all, if one takes into account all the possible ways an organism could be built, including those ways in which it could be built but never would be built, simply because, if it were, the organism in question would never survive and reproduce and hence evolve in the first place, then we are surely remarkably similar in morphology.
Just looking at our external, visible physiology, we and chimpanzees (and many other organisms besides) share four limbs, five digits on each, two eyes, two nostrils, and a mouth, all similarly positioned in relation to one another, to mention just a few of the more obvious similarities. Our internal anatomy is also very similar, as are many aspects of our basic cellular structure.

[39] This is analogous to the so-called other-race effect in face recognition, whereby people prove much less proficient at distinguishing individuals of races other than their own than they are at distinguishing members of their own race, especially if they have had little previous contact with members of other races. This effect, of course, is the basis for the familiar stereotype whereby it is said ‘they [i.e. members of another race] all look alike to me’.

[40] If any skeptical readers doubt this claim, it might be worth observing that Ostrander is not only a leading researcher in canine genetics, but also seemingly has no especial ideological or politically-correct axe to grind in relation to this topic. Although she is obviously alluding to Lewontin’s famous finding in the passage quoted, she does not mention race at all, referring only to variation among “human populations”, not human races. Indeed, human races are not mentioned at all in the article. Rather, it is exclusively concerned with genetic differences among dog breeds and their relationship to morphological differences (Ostrander 2007).

[41] In addition to problems with defining and measuring the intelligence of different dogs, and dog breeds, there are also, as already discussed above, difficulties in defining and identifying different dog breeds – problems that, despite the greater morphological and genetic differentiation among dog breeds as compared to human races, are probably greater than for human races, since, except for a few pedigreed purebreds, most dogs are mixed-breed ‘mongrels’. These problems, in turn, create difficulties when it comes to measuring the intelligence of different breeds, since one can hardly assess the intelligence of a given breed without first defining and identifying which dogs qualify as members of that breed.

[42] In fact, however, whereas the research reported in the mass media does indeed seem to have relied exclusively on the reported ability of different breeds to learn and obey new commands with minimal instruction, Stanley Coren himself, in the original work upon which this ranking of dog breeds by intelligence was based, namely his book The Intelligence of Dogs, seems to have employed a broader, more nuanced and sophisticated understanding of canine intelligence. Thus, Coren is reported as distinguishing three types of canine intelligence, namely:

  1. ‘Instinctive intelligence’, namely the dog’s ability to perform the task it was bred for (e.g. herding in the case of a herding dog);
  2. ‘Adaptive intelligence’, namely the ability and speed with which a dog can learn new skills, and solve novel problems, for itself; and
  3. ‘Obedience intelligence’, namely the ability and speed with which a dog can be trained and learn to follow commands from a human master.

[43] There is no obvious reason to believe that domestic animals are, on average, any more intelligent than their wild ancestors. On the contrary, the process of domestication is generally associated with a reduction in brain volume, itself a correlate of intelligence, perhaps as part of the process of neotenization that tends to accompany domestication.
It is, of course, true that domestic animals, and domestic dogs in particular, evince impressive abilities to communicate with humans (e.g. understanding cues such as pointing, and even intonation of voice) (see The Genius of Dogs). However, this reflects only a specific form of social intelligence rather than general intelligence.
In contrast, in respect of the forms of intelligence required among wild animals, domestic animals would surely fare much worse than their wild ancestors. Indeed, many domestic animals have been so modified by human selection that they are quite incapable of surviving in the wild without humans.

[44] Actually, criminality, or at least criminal conviction, is indeed inversely correlated with intelligence, with incarcerated offenders having average IQs of around 90 – i.e. considerably below the average within the population at large, but not so low as to qualify as a general learning disability. In other words, incarcerated offenders tend to be smart enough to have the wherewithal to commit a criminal act in the first place, but not smart enough to realize it probably isn’t a good idea in the long term.
However, this data mostly comes from incarcerated offenders, who are usually given a battery of psychological tests on admission into the prison system, including a test of cognitive ability. It is possible, indeed perhaps probable, that those criminals who evade detection, and hence never come to the attention of the authorities, have relatively higher IQs, since it may be their higher intelligence that enables them to evade detection.
At any rate, the association between crime and low IQ is not generally thought to result from a failure to understand the nature of the law in the first place. Rather, it probably reflects that intelligent people are more likely to recognise that, in the long-term, regularly committing serious crimes is probably a bad idea, because, sooner or later, you are likely to be caught, with attendant punishment and damage to your reputation and future earning capacity.
Indeed, the association between IQ and crime might partially explain the high crime rates observed among African-Americans, since the average IQ of incarcerated offenders is similar to that found among African Americans as a whole.

[45] One is reminded here of Eysenck’s description of the basenji breed as “natural psychopaths” quoted above (The IQ Argument: p170).

[46] For example, differences in skin colour reflect, at least in part, differences in exposure to the sun at different latitudes, while differences in bodily size and stature, and in relative bodily proportions, also seem to reflect adaptation to different climates, as do differences in nose shape. Just as lighter complexion facilitates the synthesis of vitamin D in latitudes where exposure to the sun is at a minimum, and dark skin protects against the potentially harmful effects of excessive exposure to the sun’s rays in tropical climates, so a long, thin nose is thought to allow the warming and moisturizing of air before it enters the lungs in cold and dry climates. Likewise, body size and proportions affect the proportion of the body that is directly exposed to the elements (i.e. the ratio of surface area to volume), a potentially critical factor in temperature regulation, with tall, thin bodies favoured in warm climates, and short, stocky frames, with flat faces and shorter arms and legs, favoured in colder regions.

[47] For example, as explained in the preceding endnote, the Bergmann and Allen rules neatly explain many observed differences in bodily stature and body form between different races as an adaptation to climate, while Thomson’s nose rule similarly explains differences in nose shape. Likewise, while researchers such as Peter Frost and Jared Diamond have argued that differences in skin tone cannot entirely be accounted for by climatic factors, such factors have nevertheless clearly played some role in the evolution of differences in skin tone.
This, of course, explains why, although the correlation is far from perfect, there is indeed an association between latitude and skin colour. This also explains why Australia, which has a generally much warmer climate and lies at a lower latitude than the British Isles, but was until very recently populated primarily by people of predominantly Anglo-Celtic ancestry, has the highest levels of melanoma in the world; and, conversely, why dark-skinned Afro-Caribbeans and South Asians resident in the UK experience higher rates of rickets, due to receiving insufficient sunlight for vitamin D synthesis.

[48] Alternatively, Carleton Coon attributed the large protruding buttocks of many Khoisan women to maintaining a storehouse of nutrients that can be drawn upon to meet the caloric demands of pregnancy (Racial Adaptations: p105). This is probably why women of all races have naturally greater fat deposits than do men. However, in the arid desert environment to which San people are today largely confined, namely the Kalahari Desert, where food is often hard to come by, maintaining sufficient calories to successfully gestate an offspring may be especially challenging, which might be posited as the ultimate evolutionary factor that led to the evolution of steatopygia among female Khoisan.
Of course, these two competing hypotheses for the evolution of the large buttocks of Khoisan women – namely, on the one hand, sexual selection or mate choice and, on the other, the caloric demands of pregnancy in a desert environment – are not mutually exclusive. On the contrary, if large fat reserves are indeed necessary to successfully gestate an offspring, then it would pay for males to be maximally attracted to females with sufficiently large fat reserves to do just this, so as to maximize their own reproductive success.

[49] This argument, I might note, does not strike me as entirely convincing. After all, it could be argued that strong body odour would actually be more noticeable in hot climates, simply because, in hot climates, people tend to sweat more, and therefore that dry earwax, which is associated with reduced body odour, should actually be more prevalent among people whose ancestors evolved in hot climates, the precise opposite of what is found.
On the other hand, Edward Dutton, discussing population differences in earwax type, suggests that “pungent ear wax (and scent in general) is a means of sexual advertisement” (J Philippe Rushton: A Life History Perspective: p86). This would suggest that a relatively stronger body odour (and hence the wet earwax with which strong body odour is associated) would have been positively selected for (rather than against) by mate choice and sexual selection, the precise opposite of what Wade assumes.

[50] In their defence, I suspect Sarich and Miele are guilty, not so much of biological ignorance, as of sloppy writing. After all, Vincent Sarich was an eminent and pioneering biological anthropologist, geneticist and biochemist, hardly likely to be guilty of such an error. What I suspect they really meant to say was, not that there was no evidence of sexual selection operating in humans, but rather that there was no conclusive evidence that sexual selection was responsible for racial differences among humans, as they also conclude later in their book (p236).

[51] Of all racial groups in the USA, only Pacific Islanders display even higher rates of obesity than that observed among black women, though among Pacific Islanders it is both sexes who are prone to obesity.

[52] Just to clarify and prevent any confusion, higher proportions of white men than white women are indeed overweight or obese, in both the USA and UK. However, this does not mean that men are fatter than women. Women of all races, including white people, have higher body-fat levels than men, whereas men have higher levels of musculature.
Obesity is measured by body mass index (BMI), which is calculated from a person’s weight and height, not their body fat percentage. Thus, some professional bodybuilders, for example, have quite high BMIs, and hence qualify as overweight by this criterion, despite having very low body fat levels. This is one limitation of using BMI to assess whether a person is overweight.
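To make this limitation concrete, the standard BMI formula is simply weight in kilograms divided by the square of height in metres; the figures below are purely illustrative, not drawn from Sarich and Miele:

```latex
\mathrm{BMI} \;=\; \frac{\text{weight (kg)}}{\text{height (m)}^2}
\qquad\text{e.g.}\qquad
\frac{100\,\text{kg}}{(1.80\,\text{m})^2} \;\approx\; 30.9
```

A lean but heavily muscled athlete of 100 kg and 1.80 m thus lands above the cut-off of 30 conventionally used for obesity, despite a low body fat percentage.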
Indeed, the criteria for qualifying as ‘obese’ or ‘overweight’ are different for men and women, partly to take account of this natural difference in body-fat percentages, as well as other natural sex differences in body size, shape and composition.

[53] Women of all races have, on average, higher levels of body fat than do men of the same race. This, it is suggested, is to provide the necessary storehouse of nutrients to successfully gestate a foetus for nine months. Possibly men may be attracted to women with fat deposits because this shows that they have sufficient excess energy stored so as to successfully carry a pregnancy to term and nurse the resulting offspring. This may also explain the evolution of breasts among human females, since other mammals develop breasts only during pregnancy and, save during pregnancy and lactation, human breasts are, unlike those of other mammals, composed predominantly of fat, not milk.

[54] Interestingly, in a further case agreeing with what Steve Sailer calls ‘Rushton’s Rule of Three’, whereby blacks and Asians respectively cluster at opposite ends of a racial spectrum for various traits, there is some evidence that, if black males prefer a somewhat heavier body-build in prospective mates than do white males, then Asian males prefer a somewhat thinner body-build (e.g. Swami et al 2006).

[55] Whereas most black Africans have long arms and legs, African Pygmies may represent an exception. In addition, of course, to a much smaller body size overall, one undergraduate textbook in biological anthropology reports that they “have long torsos but relatively small appendages” relative to their overall body size (Human Variation (Fifth Edition): p185). However, leading mid-twentieth-century American physical anthropologist Carleton Coon reports that “they have relatively short legs, particularly short in the thigh, and long arms, particularly long in the forearm” (The Living Races of Man: p106).

[56] Probably this is to be attributed to superior health, nutrition and living standards in North America, and even in the Caribbean, as compared to sub-Saharan Africa. Better training facilities, in which only richer countries (and people) have sufficient resources to invest, are also likely a factor. However, one interesting paper by William Aiken proposes that high rates of mortality during the ‘Middle Passage’ (i.e. the transport of slaves across the Atlantic) selected for increased levels of androgens (e.g. testosterone) among the survivors, which he suggests may explain both the superior athletic performance and the much higher rates of prostate cancer among African-Americans and Afro-Caribbeans as compared to whites (Aiken 2011). Of course, high androgen levels might also plausibly explain the high rates of violent crime among African-American and Afro-Caribbean populations.

[57] Of course, the degree of relationship, if any, between athletic and sporting ability and intellectual ability probably depends on the sport being performed. Most obviously, if chess is to be classified as a ‘sport’, then one would obviously expect chess ability to have a greater correlation with intelligence than, say, arm-wrestling. Intelligence is likely of particular importance in sports where strategy and tactics assume great importance.
Relatedly, in team sports, there are likely differences in the importance of intelligence among players playing in different positions. For example, in the sport discussed by Sarich and Miele themselves, namely American football, it is suggested that the position of quarterback requires greater intelligence than other positions, because the quarterback is responsible for making tactical decisions on the field. This, it is controversially suggested, is why African-Americans, though overrepresented in the NFL as a whole, are relatively less likely to play as quarterbacks.
Similarly, being a successful coach or manager also likely requires greater intelligence.
Interestingly, with regard to the question of sports and IQ, though regarded as one of the greatest ever heavyweights, Muhammad Ali scored as low as 78 on an IQ test (i.e. in the low normal range) when tested in an army entrance exam, and was initially turned down for military service in Vietnam as a consequence, though it is sometimes claimed that this was because of dyslexia rather than low general intelligence, meaning that the written test he was given underestimated his true intelligence level. Another celebrated heavyweight, Mike Tyson, is also said to have scored similarly in the low normal range when tested as a child.
Another reason that IQ might be predictive of ability in some sports is that IQ is known to correlate with reaction times in performing elementary cognitive tasks. This seems analogous to the need to react quickly and accurately to, say, the speed and trajectory of a ball in order to strike or catch it, as is required in many sports. I have discussed the paradox of African-Americans being overrepresented in elite sports despite having slower average reaction times here.

[58] People diagnosed with high-functioning autism, and Asperger’s syndrome in particular, do indeed have a higher average IQ than the population at large. However, this is only because one of the very criteria for diagnosing these conditions is that the person in question must have an IQ that is not so low as to indicate a mental disability. Otherwise, they would not qualify as ‘high-functioning’. This removes those with especially low IQs and hence leaves the remaining sample with an above-average IQ compared to the population at large.

[59] Rushton’s implication is that this advantage, namely narrower hips, applies to both sexes, and certainly blacks seem to predominate among medal winners in track events in international athletics at least as much in men’s as in women’s events. This suggests, presumably, that, although it is obviously only women who give birth and hence were required to have wider hips in order to birth larger-brained infants, male hip width was nevertheless also increased among larger-brained races as a byproduct of selection for increased hip size among females.
If black women do indeed have narrower hips than white women, and black babies smaller brains, then one might predict that black women would have difficulty birthing offspring fathered by white males, as the mixed-race infants would have brains somewhat larger than those of infants of wholly Negroid ancestry. Thus, Russian racialist Vladimir Avdeyev asserts:

“The form of the skull of a child is directly connected with the characteristics of the structure of the mother’s pelvis—they should correspond to each other in the goal of eliminating death in childbirth. The mixing of the races unavoidably leads to this, because the structure of the pelvis of a mother of a different race does not correspond to the shape of the head of [the] mixed infant; that leads to complications during childbirth” (Raciology: p157).

More specifically, Avdeyev claims:

“American Indian women… often die in childbirth from pregnancies with a child of mixed blood from a white father, whereas pure-blooded children within them are easily born. Many Indian women know well the dangers [associated with] a pregnancy from a white man, and therefore, they prefer a timely elimination of the consequence of cross-breeding by means of fetal expulsion, in avoidance of it” (Raciology: p157-8).

However, I find little evidence to support this claim from delivery room data. Rather, it seems to be racial differences in overall body size that are associated with birth complications.
Thus, East Asian women have relatively greater difficulty birthing offspring fathered by white males (specifically, a greater rate of caesarean deliveries) as compared to those fathered by Asian males (Nystrom et al 2008). Yet, according to Rushton himself, East Asians have brain sizes as large as or larger than those of Europeans.
However, East Asians also have substantially smaller average body-size as compared to Europeans. It seems, then, that Asian women, with their own smaller frames, simply have greater difficulty birthing relatively larger-framed mixed-race, half-white offspring.
Avdeyev also claims that, save in the case of mixed-race offspring fathered by larger-brained races, birth is a generally less physically traumatic experience among women from racial groups with smaller average brain-size, just as it is among nonhuman species, who also, of course, have smaller brains than humans. Thus, he writes:

“Women of lower races endure births very easily, sometimes even without any pain, and only in highly rare cases do they die from childbirth” (Raciology: p157).

Again, delivery room data provides little support for his claim. In fact, data from the USA actually seems to indicate a somewhat higher rate of caesarean delivery among African-American women as compared to American whites (Braveman et al 1995; Edmonds et al 2013; Getahun et al 2009; Valdes 2020; Okwandu et al 2021).

[60] Another disadvantage that may result from higher levels of testosterone in black men is the much higher incidence of prostate cancer observed among black men resident in the West, since prostate cancer seems to be associated with testosterone levels. In addition, the higher apparent susceptibility of blacks to prostate cancer, and perhaps also their higher rates of violent crime and certain forms of athletic ability, may reflect, not just levels of testosterone, but also how susceptible different races are to androgens such as testosterone, which, in turn, reflects the level and type of their androgen receptors (see Nelson and Witte 2002).

[61] In writing about so politically incorrect and controversial a topic, the authors are guilty of some rather sloppy errors, which, given the importance of the subject to their book and its political sensitivity, is difficult to excuse. For example, they claim that:

“Asians have a slightly higher average IQ than do whites” (p196).

Actually, however, this advantage is restricted to East Asians. It doesn’t extend even to Southeast Asians (e.g. Thais, Filipinos, Indonesians), who are also classed as ‘Mongoloid’ in traditional racial taxonomies, let alone to South Asians and West Asians, who, though usually classed as Caucasoid in early twentieth century racial taxonomies, also qualify as Asian in the sense that they trace their ancestry to the Asian continent, and are considered ‘Asian’ in British-English usage, if not American-English.

[62] Issues like this are not really a problem in assessing the intelligence of different human populations. It is true that some groups do perform relatively better on certain types of test item. For example, East Asians score relatively higher in spatio-visual intelligence than in verbal ability, whereas Ashkenazi Jews show the opposite pattern. Meanwhile, African-Americans score relatively higher in rote memory than in general intelligence, and Australian Aboriginals score relatively higher in spatial memory. However, this is not a major factor when assessing the relative intelligence of different human races, because most differences in intelligence between humans, whether between individuals or between groups, are captured by the g factor.

[63] Actually, whether the difference in brain size between the sexes disappears after controlling for differences in body-size depends on how one controls for body-size. Simply dividing brain-size by body-size (or vice versa) makes the difference virtually entirely disappear. In fact, it actually gives a slight advantage in brain size to women.
However, Ankney convincingly argues that this is an inappropriate way to control for differences in body-size between the sexes because, among both males and females, as individuals increase in body-size, the brain comes to take up a relatively smaller portion of overall body-size. Yet, despite this, individuals of greater stature have, on average, somewhat higher IQs. Ankney therefore proposes that the correct way to control for body-size is to compare the average brain size of men and women of the same body-size. Doing so reveals that men have larger brains even after controlling for body-size in this way (Ankney 1992).
However, zoologist Dolph Schluter points out that, if one does the opposite – i.e. instead of comparing the brain-sizes of men and women of equivalent body-size, compares the body-sizes of men and women with the same brain-size – then one finds a difference in the opposite direction. In other words, among men and women with the same brain-size as one another, women tend to have smaller bodies (Schluter 1992).
Thus, Schluter reports:

“White men are more than 10 cm taller on average than white women with the same brain weight” (Schluter 1992).

This paradoxical finding is, he argues, a consequence of the statistical effect known as regression to the mean, whereby, for any two imperfectly correlated variables, an extreme value of one tends to be paired with a less extreme value of the other, and the more extreme the value, the greater the degree of regression. Thus, an extremely tall woman, as tall as the average man, will not usually have a brain quite as unusually large as her exceptionally large body-size; whereas a very short man, as short as the average woman, will not usually have a brain quite as unusually small as his unusually small body-size.
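That both Ankney’s and Schluter’s seemingly contradictory findings can hold simultaneously is a general property of imperfectly correlated variables. The short simulation below illustrates the point. It uses purely hypothetical, illustrative group means, standard deviations and a within-group correlation (not real anthropometric data), and shows that, in the same simulated dataset, ‘men’ have heavier brains at matched height and, at the same time, are taller at matched brain weight:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, mean_height, mean_brain, sd_height=7.0, sd_brain=100.0, r=0.4):
    """Draw (height cm, brain weight g) pairs from a bivariate normal
    with within-group correlation r. All parameters are illustrative."""
    cov = [[sd_height**2, r * sd_height * sd_brain],
           [r * sd_height * sd_brain, sd_brain**2]]
    return rng.multivariate_normal([mean_height, mean_brain], cov, size=n)

# Hypothetical group means chosen only to illustrate the statistics.
men = simulate_group(100_000, mean_height=177.0, mean_brain=1400.0)
women = simulate_group(100_000, mean_height=164.0, mean_brain=1300.0)

def conditional_mean(data, cond_col, lo, hi, target_col):
    """Mean of target_col among rows whose cond_col falls in [lo, hi]."""
    sel = (data[:, cond_col] >= lo) & (data[:, cond_col] <= hi)
    return data[sel, target_col].mean()

# Ankney's direction: brain weight at matched height (168-172 cm).
brain_gap = (conditional_mean(men, 0, 168.0, 172.0, 1)
             - conditional_mean(women, 0, 168.0, 172.0, 1))

# Schluter's direction: height at matched brain weight (1330-1370 g).
height_gap = (conditional_mean(men, 1, 1330.0, 1370.0, 0)
              - conditional_mean(women, 1, 1330.0, 1370.0, 0))

print(f"brain-weight gap at matched height: {brain_gap:.1f} g")
print(f"height gap at matched brain weight: {height_gap:.1f} cm")
```

Because the correlation within each group is well below 1, conditioning on height leaves each sex’s brain weight regressed toward its own group mean, so men retain heavier brains at matched height; conditioning on brain weight likewise leaves men taller at matched brain weight. Both gaps are positive at once, which is precisely the paradox Schluter identifies.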
Ultimately, I am led to agree with infamous fraud, charlatan and bully Stephen Jay Gould that, given the differences in both body-shape and composition as between males and females (e.g. men have much greater muscle mass; women greater levels of fat), it is simply impossible to know how to adequately control for body-size as between the sexes.
Thus, Gould writes:

“[Even] men and women of the same height do not share the same body build. Weight is even worse than height, because most of its variation reflects nutrition rather than intrinsic size—and fat vs. skinny exerts little influence upon the brain” (The Mismeasure of Man: p106).

The only conclusion that can be reached definitively is that, after controlling for body-size, any remaining differences in brain-size as between the sexes are small in magnitude.

[64] Another less widely supported, but similarly politically correct, explanation for the correlation between latitude and brain size is that these differences reflect a visual adaptation to differing levels of ambient light in different regions of the globe. On this view, populations further from the equator, where there is less ambient light, evolved both larger eyes, so as to see better, and also larger brains, to better process this visual input (Pearce & Dunbar 2011).

[65] Lynn himself has altered his figure slightly in accordance with the availability of new datasets. In the original 2006 edition of his book, Race Differences in Intelligence, he gives a slightly lower figure of 67, before raising this to 71 in the 2015 edition of the same book, while, in The Intelligence of Nations, published in 2019, Lynn and his co-author report the average IQ of sub-Saharan Africans as 69.

[66] Thus, other researchers have, predictably, considered Lynn’s estimates altogether too low and provided what they claim are more realistic figures. The disagreement focuses primarily on which samples are to be regarded as representative, with Lynn disregarding studies using samples that he regards as elite and unrepresentative.
For example, Wicherts et al, in their systematic review of the available literature, give an average IQ of 82 for sub-Saharan Africans as a whole (Wicherts et al 2010). However, even this much higher figure is very low compared to average IQs in Europe and North America of around 100, and also considerably lower than the average IQ of blacks in the USA, which is around 85.
This difference has been attributed both to environmental factors, and to the fact that African-Americans, and Afro-Caribbeans, have substantial white European admixture (though this latter explanation fails to explain why African-Americans are outcompeted academically and economically by recent unmixed immigrants from Africa).
At any rate, even assuming that the differences are purely environmental in origin, an average IQ of 82 for sub-Saharan Africans, as reported by Wicherts et al (2010), seems oddly high when it is compared to the average IQ of 85 reported for African-Americans and 100 for whites, since the difference in environmental conditions as between blacks and whites in America is surely far less substantial than that between African-Americans and black Africans resident in sub-Saharan Africa.
As Noah Carl writes:

“It really doesn’t make sense for them to argue that the average IQ in Sub-Saharan Africa is as high as 80. We already have abundant evidence that black Americans score about 85 on IQ tests, as compared to 100 for whites. If the average IQ in Sub-Saharan Africa is 80, this would mean the massive difference in environment between Sub-Saharan Africa and the US reduces IQ by only 5 points, yet the comparatively small difference in environment between black and white Americans somehow reduces it by 15 points” (Carl 2025).

[67] In diagnosing mental disability, other factors besides raw IQ are also considered, such as adaptive behaviour (i.e. the ability to perform simple day-to-day activities, such as basic hygiene). Thus, Mackintosh reports:

“In practice, for a long time now an IQ score alone has not been a sufficient criterion [for the diagnosis of mental disability]… Many years ago the American Association on Mental Deficiency defined mental retardation as ‘significantly sub-average general intellectual functioning existing concurrently with deficits in adaptive behavior’” (IQ and Human Intelligence: p356).

[68] Of course, merely interacting with someone is not an especially accurate way of estimating their level of intelligence, unless perhaps one is discussing especially intellectually demanding subjects, which tends to be rare in everyday conversation. Moreover, Philippe Rushton proposes that we are led to overestimate the intelligence of black people when interacting with them because their low intelligence is often masked by a characteristic personality profile – “outgoing, talkative, sociable, warm, and friendly”, with high levels of social competence and extraversion – which personality profile itself likely reflects an innate racial endowment (Rushton 2004).

[69] Ironically, although he was later to have a reputation among some leftist sociologists as an incorrigible racist who used science (or rather what they invariably refer to as ‘pseudo-science’) to justify the existing racial order, Jensen was in fact first moved to study differences in IQ between races, and the issue of test bias, precisely because, given the different behaviours of low-IQ blacks and low-IQ whites, he initially suspected that IQ tests might indeed be underestimating the intelligence of black Americans and be somehow biased against them (The g Factor: p367). However, his careful, systematic and quantitative research ultimately showed this assumption to be false (see Jensen, Bias in Mental Testing).

[70] Mike Tyson, another celebrated African-American world heavyweight champion, was also recorded as having a similarly low IQ when tested in school. With regard to Ali’s test results, the conditions for admittance to the military were later lowered to increase recruitment levels, in a programme whose recruits became popularly known as McNamara’s Morons, after the US Defense Secretary responsible for implementing it. This is why Muhammad Ali, despite initially failing the IQ test that was a prerequisite for enlistment, was indeed later called up, and famously refused to serve.
Incidentally, the project to lower standards in order to increase recruitment levels is generally regarded as having been an unmitigated disaster and was later abandoned. Today, the US military no longer uses IQ testing to screen recruits, instead employing the Armed Services Vocational Aptitude Battery, though this, like virtually all tests of mental ability and aptitude, nevertheless taps into the general factor of intelligence, and hence is, in part, an indirect measure of IQ.

[71] My own analogy, in the text above, is between race and species. Thus, I write that it would be no more meaningful to describe a sub-Saharan African with an IQ below 70 as mentally handicapped than it would be to describe a chimpanzee as mentally handicapped simply because chimpanzees are much less intelligent than the average human. This analogy – between race/subspecies and species – is, in some respects, more apposite, since races or subspecies do indeed represent ‘incipient species’ and the first stage of speciation (i.e. the evolution of populations into distinct species). On the other hand, however, it is not only very provocative, but also misleading in a very different way, simply because the differences between chimpanzees and humans, in intelligence and many other traits, are obviously far greater than those between the different races of mankind, who all represent, of course, a single species.

[72] Richard Lynn, in Race Differences in Intelligence (which I have reviewed here), attributes a very low IQ of just 62 to New Guineans, and an even lower IQ, supposedly just 52, to San Bushmen. However, he draws this conclusion on the basis of very limited evidence, especially in respect of the San (see discussion here). In relation to New Guineans, it is worth noting that Lynn provides much more data (mostly from the Australian school system) in respect of the IQs of the Aboriginal population of Australia, to whom New Guineans are closely related, and to whom he ascribes a similarly low average IQ (as discussed here).

[73] I am not sure what evidence Harpending relies on to infer a high average IQ in South India. Richard Lynn, in his book, Race Differences in Intelligence (which I have reviewed here) reports a quite low IQ of 84 for Indians in general, whom he groups, perhaps problematically, with Middle Eastern and North African peoples as, supposedly, a single race.
However, a more recent study, also authored by Lynn in collaboration with an Indian researcher, does indeed report higher average intelligence in South India than in North India, and also in regions with a coastline (Lynn & Yadav 2015).
This, of course, rather contradicts Lynn’s own ‘cold winters theory’, which posits that the demands of surviving in a relatively colder climate during winter selects for higher intelligence, as North India is situated at a higher latitude than South India, and, especially in some mountainous regions of the North East, has relatively colder winters.
Incidentally, it also seemingly contradicts any theory of what we might term ‘Aryan supremacy’, since it is the lighter-complexioned North Indians who have greater levels of Indo-European ancestry and speak Indo-Aryan languages, whereas the darker-complexioned South Indians speak Dravidian languages and have much less Indo-European ancestry; hence it is North Indians, together with related groups such as Iranians, not German Nazis, who have the strongest claim to being ‘Aryans’.
South India also today enjoys much higher levels of economic development than does North India.

[74] Ashkenazi Jews, of course, have substantial European ancestry, as a consequence of their long sojourn as a diaspora minority in Europe. The same is true, to some extent, of Sephardi Jews, who trace their ancestry to the Jewish populations formerly resident in, and then expelled from, Spain and Portugal. However, although these are the groups whom westerners usually have in mind when thinking of Jews, the majority of Jews in Israel today are actually the Mizrahim, who remained resident in the Middle East, if not in Palestine itself, and hence have little or no European admixture.

[75] The fact that apartheid-era South Africa, despite international sanctions, was nevertheless a ‘developed economy’, whereas South Africa today is classed as a ‘developing economy’, of course, ironically suggests that, if South Africa is indeed ‘developing’, it is doing so in altogether the wrong direction.

[76] To take one obvious example, customers at strip clubs and brothels generally have a preference for younger, more physically attractive service providers of a particular sex, and often show a racial preference too.
The topic of the economics of discrimination was famously analysed by pioneering Nobel Prize winning economist Gary Becker.

[77] Some degree of discrimination in favour of blacks, and perhaps other underrepresented demographics, likely continued under the guise of a newly-adopted ‘holistic’ approach to university admissions. This involved deemphasizing quantifiable factors such as grades and SAT scores, which meant that any discrimination against certain demographics (i.e. whites, Asians and males) was less easily measured and hence proven in a court of law.

[78] It also ought to be noted in this context that the very term meritocracy is itself problematic, raising, as it does, the question of how we define ‘merit’, let alone how we objectively measure and quantify it for the purposes of determining, for example, who is appointed to a particular job or has his application to a particular university accepted or rejected. Determining the ‘merit’ of a given person is necessarily a ‘value judgement’ and hence inherently a subjective assessment.
Of course, in practice, when people talk of meritocracy in this sense, they usually mean that employers should select the ‘best person for the job’, not ‘merit’ in some abstract cosmic moral sense. In this sense, it is not really ‘merit’ that determines whether a person obtains a given job, but rather their market value in the job market (i.e. the extent to which they possess the marketable skills etc.).
Yet this is not the same thing as ‘merit’. After all, a Premiership footballer may command a far higher salary in the marketplace than, say, a construction worker. However, this is not to say that he is necessarily more meritorious off the football pitch. It is the player’s merits as a footballer that are at issue, not his merits as a person or moral agent. Construction workers surely contribute more to a functioning society.
Market value, unlike merit, is something that can be measured and quantified, and indeed the market itself, left to its own devices, automatically arrives at just such a valuation.
However, although a free market system may approximate meritocracy, albeit only in this narrow sense, a perfect meritocracy is unattainable, even in this narrow sense. After all, employers sometimes make the wrong decision. Moreover, humans have a natural tendency towards nepotism (i.e. promoting their own close kin at the expense of non-kin) and perhaps to ethnocentrism and racism too.
Thus, as I have written about previously, equal opportunity is, in practice, almost as utopian and unachievable as equality of outcome (i.e. communism).

[79] Sarich and Miele even cite settings where the salience of racial group identity is supposedly overcome, or at least mitigated:

“The examples of basic military training, sports teams, music groups, and successful businesses show that [animosities between racial, religious and ethnic groups] can indeed be overcome. But doing so requires in a sense creating a new identity by to some extent stripping away the old. Eventually, the individual is able to identify with several different groups” (p242).

Yet, even under these conditions, racial animosities are not entirely absent. For example, despite basic training, racial incidents are hardly unknown in the US military.
Moreover, the cooperation between ethnicities often ends with the cessation of the group activity in question. In other words, as soon as they finish playing for their multiracial sports team, the team members may go back to being racist towards everyone other than their teammates. After all, racists are not known for their intellectual consistency, and racism and hypocrisy tend to go together.
For example, members of different races may work, and fight, together in relative harmony and cohesion in the military. However, military veterans are not noticeably any less racist than non-veterans. If anything, in my limited experience, the pattern seems to be quite the opposite. Indeed, many leaders in the ‘white power’ movement in the USA (e.g. Louis Beam, Glenn Miller) were military veterans, and a recent book, Bring the War Home by Kathleen Belew, even argues that it was the experience of defeat in Vietnam, and, in particular, the return of many patriotic but disillusioned veterans, that birthed the modern ‘white power’ movement.
Similarly, John Allen Muhammad, the ‘DC sniper’, was likewise a military veteran. A serial killer and member of the black supremacist Nation of Islam cult, he was responsible for killing ten people, all of them white, and his accomplice admitted that the killings were motivated by a desire to kill white people.

[80] Despite the efforts of successive generations of feminists to stir up animosity between the sexes, even sex is not an especially salient aspect of a person’s identity, at least when it comes to group competition. After all, unlike in respect of race and ethnicity, almost everyone has relatives and loved ones of both biological sexes, usually in roughly equal number, and the two sexes are driven into one another’s arms by the biological imperative of the sex drive. As Henry Kissinger is, perhaps apocryphally, quoted as observing:

“No one will win the battle of the sexes. There is too much fraternizing with the enemy”.

Indeed, the very notion of a ‘battle of the sexes’ is a misleading metaphor, since people compete, in reproductive terms, primarily against people of the same sex as themselves in competition for mates.

References

Aiken (2011) Historical determinants of contemporary attributes of African descendants in the Americas: The androgen receptor holds the key, Medical Hypotheses 77(6): 1121-1124.
Allison et al (1993) Can ethnic differences in men’s preferences for women’s body shapes contribute to ethnic differences in female adiposity? Obesity Research 1(6):425-32.
Ankney (1992) Sex differences in relative brain size: The mismeasure of woman, too? Intelligence 16(3–4): 329-336.
Beals et al (1984) Brain Size, Cranial Morphology, Climate, and Time Machines, Current Anthropology 25(3): 301–330.
Braveman et al (1995) Racial/ethnic differences in the likelihood of cesarean delivery, California, American Journal of Public Health 85(5): 625–630.
Carl (2025) Are Richard Lynn’s national IQ estimates flawed? Aporia, January 1.
Coppinger & Schneider (1995) Evolution of working dogs. In: Serpell (ed.), The Domestic Dog: Its Evolution, Behaviour and Interactions with People (pp. 22-47). Cambridge: Cambridge University Press.
Crespi (2016) Autism As a Disorder of High Intelligence, Frontiers in Neuroscience 10: 300.
Draper (1989) African Marriage Systems: Perspectives from Evolutionary Ecology, Ethology and Sociobiology 10(1-3):145-169.
Edwards (2003) Human genetic diversity: Lewontin’s fallacy, BioEssays 25(8): 798–801.
Edmonds et al (2013) Racial and ethnic differences in primary, unscheduled cesarean deliveries among low-risk primiparous women at an academic medical center: a retrospective cohort study, BMC Pregnancy and Childbirth 13: 168.
Ellis (2017) Race/ethnicity and criminal behavior: Neurohormonal influences, Journal of Criminal Justice 51: 34-58.
Getahun et al (2009) Racial and ethnic disparities in the trends in primary cesarean delivery based on indications, American Journal of Obstetrics and Gynecology 201(4): 422.e1-7.
Freedman et al (2006) Ethnic differences in preferences for female weight and waist-to-hip ratio: a comparison of African-American and White American college and community samples, Eating Behaviors 5(3):191-8.
Frost (1994) Geographic distribution of human skin colour: A selective compromise between natural selection and sexual selection? Human Evolution 9(2):141-153.
Frost (2006) European hair and eye color: A case of frequency-dependent sexual selection? Evolution and Human Behavior 27:85-103.
Frost (2008) Sexual selection and human geographic variation, Journal of Social Evolutionary and Cultural Psychology 2(4):169-191.
Frost (2014) The Puzzle of European Hair, Eye, and Skin Color, Advances in Anthropology 4(02):78-88.
Frost (2015) Evolution of Long Head Hair in Humans. Advances in Anthropology 05(04):274-281.
Frost (2023) Do human races exist? Aporia Magazine July 11.
Gichoya et al (2022) AI recognition of patient race in medical imaging: a modelling study, Lancet 4(6): E406-E414.
Greenberg & LaPorte (1996) Racial differences in body type preferences of men for women International Journal of Eating Disorders 19:275–8.
Hartung (1995) Love Thy Neighbor: The Evolution of In-Group Morality. Skeptic 3(4):86–98.
Hirschfeld (1996) Race in the Making: Cognition, Culture, and the Child’s Construction of Human Kinds. Contemporary Sociology 26(6).
Jensen & Johnson (1994) Race and sex differences in head size and IQ, Intelligence 18(3): 309-333.
Jazwal (1979) Skin colour in north Indian populations, Journal of Human Evolution 8(3): 361-366.
Juntilla et al (2022) Breed differences in social cognition, inhibitory control, and spatial problem-solving ability in the domestic dog (Canis familiaris), Scientific Reports 12:1.
Lee et al (2019) The causal influence of brain size on human intelligence: Evidence from within-family phenotypic associations and GWAS modeling, Intelligence 75: 48-58.
Lewontin (1972). The Apportionment of Human Diversity.  In: Dobzhansky, T., Hecht, M.K., Steere, W.C. (eds) Evolutionary Biology (New York: Springer).
Lynn & Yadav (2015) Differences in cognitive ability, per capita income, infant mortality, fertility and latitude across the states of India, Intelligence 49: 179-185.
Mishra (2017) Genotype-Phenotype Study of the Middle Gangetic Plain in India Shows Association of rs2470102 with Skin Pigmentation, Journal of Investigative Dermatology 137(3): 670-677.
Mitton (1977) Genetic Differentiation of Races of Man as Judged by Single-Locus and Multilocus Analyses, The American Naturalist 111(978): 203–212.
Mitton (1978) Measurement of Differentiation: Reply to Lewontin, Powell, and Taylor, The American Naturalist 112(988): 1142–1144.
Macdonald (2001) An integrative evolutionary perspective on ethnicity, Politics & the Life Sciences 20(1): 67-8.
Nelson & Witte (2002) Androgen receptor CAG repeats and prostate cancer, American Journal of Epidemiology 15;155(10):883-90.
Norton et al (2019) Human races are not like dog breeds: refuting a racist analogy. Evolution: Education and Outreach 12: 17.
Nystrom et al (2008) Perinatal outcomes among Asian–white interracial couples, American Journal of Obstetrics and Gynecology 199(4): 382.e1-382.e6.
Okwandu et al (2021) Racial and Ethnic Disparities in Cesarean Delivery and Indications Among Nulliparous, Term, Singleton, Vertex Women, Journal of Racial and Ethnic Health Disparities 9(4): 1161–1171.
Ostrander (2007) Genetics and the Shape of Dogs, American Scientist 95(5): 406.
Pearce & Dunbar (2011) Latitudinal variation in light levels drives human visual system size, Biology Letters 8(1): 90–93.
Piffer (2013) Correlation of the COMT Val158Met polymorphism with latitude and a hunter-gather lifestyle suggests culture–gene coevolution and selective pressure on cognition genes due to climate, Anthropological Science 121(3):161-171.
Pietschnig et al (2015) Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews 57: 411-432.
Rushton (2004) Solving The African IQ Conundrum: “Winning Personality” Masks Low Scores, Vdare.com, August 12.
Rushton, J.P. & Ankney, C.D. (2009) Whole Brain Size and General Mental Ability: A Review. International Journal of Neuroscience, 119(5):692-732.
Sailer (1996) Great Black Hopes, National Review, August 12.
Sailer (2019) ‘Richard Jewell’: The Problem With Profiling, Takimag, December 18.
Schluter (1992) Brain size differences, Nature 359:181.
Schoenemann et al (2000) Brain-size does not predict general cognitive ability within families. Proceedings of the National Academy of Sciences, 97:4932–4937.
Singh (1994) Body fat distribution and perception of desirable female body shape by young black men and women, Eating Disorders 16(3): 289-294.
Snook et al (2007) Taking Stock of Criminal Profiling: A Narrative Review and Meta-Analysis, Criminal Justice and Behavior 34(4):437-453.
Swami et al (2006) Female physical attractiveness in Britain and Japan: a cross-cultural study, European Journal of Personality 20(1): 69-81.
Taylor (2021) Making sense of race, American Renaissance, May 14.
Tishkoff et al (2007) Convergent adaptation of human lactase persistence in Africa and Europe. Nature Genetics 39(1): 31-40.
Thompson et al (1996) Black and white adolescent males’ perceptions of ideal body size, Sex Roles 34(5-6): 391–406.
Tooby & Cosmides (1990) On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation, Journal of Personality 58(1):17-67.
Valdes (2020) Examining Cesarean Delivery Rates by Race: a Population-Based Analysis Using the Robson Ten-Group Classification System Journal of Racial and Ethnic Health Disparities.
Van den Berghe & Frost (1986) Skin color preference, sexual dimorphism, and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9: 87-113.
Whitney (1997) Diversity in the Human Genome, American Renaissance 8(3), March 1997.
Whitney (1999) The Biological Reality of Race, American Renaissance 10(10), October 1999.
Wicherts et al (2010) A systematic literature review of the average IQ of sub-Saharan Africans, Intelligence 38(1): 1-20.

Catherine Hakim’s ‘Erotic Capital’: Too Much Feminism; Not Enough Evolutionary Psychology

Catherine Hakim, Money Honey: The Power of Erotic Capital (London: Allen Lane 2011)

Catherine Hakim, a British sociologist – proudly displaying her own ‘erotic capital’ in a photograph on the dust jacket of the hardcover edition of her book – introduces her concept of ‘erotic capital’ in this work, variously titled either ‘Money Honey: The Power of Erotic Capital’ or ‘Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom’.[1]

Although Hakim insists that this concept of ‘erotic capital’ is original to her, in reality it appears to be little more than social science jargon for sex appeal – a new term invented for a familiar concept, seemingly to disguise its lack of originality.[2]

Certainly, Hakim may be right that economists and sociologists have often failed to recognize and give sufficient weight to the importance of sexual attractiveness in human relations. However, this reflects only the prejudices, puritanism and prudery of economists and sociologists, not the originality of the concept.

In fact, the importance of sexual attractiveness in human affairs has been recognized by intelligent laypersons, poets and peasants from time immemorial. It is also, of course, a central focus of much research in evolutionary psychology.

Hakim maintains that her concept of ‘erotic capital’ is broader than mere sex appeal by suggesting that even heterosexual people tend to admire and enjoy the company of individuals of the same sex with high levels of erotic capital:

“Even if they are not lesbian, women often admire other women who are exceptionally beautiful, or well-dressed, and charming. Even if they are not gay, men admire other men with exceptionally well-toned, ‘cut’ bodies, handsome faces and elegant social manners” (p153).

There is perhaps some truth to this.

For example, I recall hearing that the audiences at (male) bodybuilding contests are, perhaps oddly, composed predominantly of heterosexual men. Similarly, since action movies are a genre that appeals primarily to male audiences, it was presumably heterosexual men and boys who represented the main audiences for Arnold Schwarzenegger action movies during his 1980s heyday, and they were surely not attracted by his acting ability. Indeed, I am reminded of this meme.[3]

Likewise, heterosexual women seem, in many respects, even more obsessed with female beauty than are heterosexual men. Indeed, this is arguably not very surprising, since female beauty is of far more importance to women than to men, since their own marital prospects, and hence socioeconomic status, depend substantially upon it.

Thus, just as pornographic magazines, which, until eclipsed in the internet age, attracted an overwhelmingly male audience, were filled with pictures of beautiful, sexy women in various states of undress, so fashion magazines, which attracted an audience as overwhelmingly female as porn’s was male, were likewise filled with pictures of beautiful, sexy women, albeit somewhat less explicit and wearing more clothes.

However, if men do indeed sometimes admire muscular men, and women do sometimes admire beautiful women, I nevertheless suspect people are just as often envious of and hence hostile towards same-sex rivals whom they perceive as more attractive than themselves.

Indeed, there is even some evidence for this.

In her book, Survival of the Prettiest (which I have reviewed here), Nancy Etcoff reviews many of the advantages associated with good looks, as does Catherine Hakim in Money Honey. However, Etcoff, for her part, also identifies at least one area where beautiful women are apparently at a disadvantage – namely, they tend to have difficulties holding down friendships with other women, presumably on account of jealousy:

“Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (Survival of the Prettiest: p50; citing Krebs & Adinolfy 1975).[4]

Interestingly, sexually insightful French novelist Michel Houellebecq, in his novel, Whatever, suggests that the same may be true for exceptionally handsome men. Thus, he writes:

“Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63).

A Sex Difference in Sexiness?

Besides introducing her supposedly novel concept of ‘erotic capital’, Hakim’s book purports to make two original discoveries, namely that:

  1. Women have greater erotic capital than men do; and
  2. Because men have a greater sex drive than women, “there is a systematic and apparently universal male sex deficit: men generally want a lot more sex than they get” (p39).

However, once one recognizes that ‘erotic capital’ essentially amounts to sex appeal, it is doubtful whether these two claims are really conceptually separate.

Rather, it is the very fact that men are not getting as much sex as they want that explains why women have greater sex appeal than men, because men are always on the lookout for more sex. Or, to put the matter the other way around, it is women’s greater sex appeal (i.e. their ability to trigger the male sex drive) that explains why heterosexual men want more sex than they can get. After all, it is sex appeal that drives the desire for sex, just as it is one person’s desire for sex that invests the object of that desire with sex appeal.

Indeed, as Hakim herself acknowledges:

“It is impossible to separate women’s erotic capital, which provokes men’s desire… from male desire itself” (p97).

Evolutionary Psychology

Yet there is a curious omission in Hakim’s otherwise comprehensive review of the literature on this topic, one that largely deprives her exposition of its claims to originality.

Save for two passing references (at p88 and in an endnote at p320), she omits any mention of a theoretical approach in the human behavioural sciences which has, for at least thirty years prior to the publication of her book, not only focused on sexual attractiveness and recognized what Hakim refers to as the ‘universal male sex deficit’ (albeit not by this name), but also provided a compelling theoretical explanation for this phenomenon, something conspicuously absent from her own exposition – namely, evolutionary psychology and sociobiology.

According to evolutionary psychologists, men have evolved a greater desire for sex, especially commitment-free promiscuous sex, because it enabled them to increase their reproductive success at minimal cost, whereas the reproductive rate of women was more tightly constrained, burdened as they are with the costs of both pregnancy and lactation.

This insight, known as Bateman’s principle, dates from over sixty years ago (Bateman 1948); it was rediscovered, refined and formalized by Robert Trivers in the 1970s (Trivers 1972), and applied explicitly to humans from at least the late 1970s with the publication of Donald Symons’ seminal The Evolution of Human Sexuality (which I have reviewed here).

Therefore, Hakim is being disingenuous when she claims:

“Only one social science theory [namely, Hakim’s own] accords erotic capital any role at all” (p156).

Yet, despite her otherwise comprehensive review of the literature on sexual attractiveness and its correlates, including citations of some studies conducted by evolutionary psychologists themselves to test explicitly sociobiological theories, one searches the index of her book in vain for any entry for ‘evolutionary psychology’, ‘sociobiology’ or ‘behavioural ecology’.[5]

Yet Hakim’s book often merely retreads ground that evolutionary psychologists covered decades previously.

For instance, Hakim treats male homosexual promiscuity as a window onto the nature of male sexuality when it is freed from the constraints imposed by women (p68-71; p95-6).

Thus, as evidence that men have a stronger sex drive than women, Hakim writes:

“Paradoxically, the most compelling evidence of this comes from homosexuals, who are relatively impervious to the brainwashing and socialization of the heterosexual majority. Lesbian couples enjoy sex less frequently than any other group. Gay male couples enjoy sex more frequently than any other group—and their promiscuous lifestyle makes them the envy of many heterosexual men. Gay men in long-term partnerships who have become sexually bored with each other maintain an active sex life through casual sex, hookups, and promiscuity. Even among people who step outside the heterosexual hegemony to carve out their own independent sexual cultures, men are much more sexually active than women, on average” (p95-6).

Here, Hakim echoes, but conspicuously fails to cite or acknowledge, the work of evolutionary psychologist Donald Symons, who, in his seminal The Evolution of Human Sexuality (which I have reviewed here), first published in 1979, some three decades before Hakim’s own book, pioneered this exact same approach, in his ninth chapter, titled ‘Test Cases: Hormones and Homosexuals’. Thus, Symons writes:

“I have argued that male sexuality and female sexuality are fundamentally different, and that sexual relationships between men and women compromise these differences; if so, the sex lives of homosexual men and women—who need not compromise sexually with members of the opposite sex—should provide dramatic insight into male sexuality and female sexuality in their undiluted states. Homosexuals are the acid test for hypotheses about sex differences in sexuality” (The Evolution of Human Sexuality: p292).

To this end, Symons briefly surveys the rampant promiscuity of American gay culture in the pre-AIDS era when he was writing, including the then-prevalent practice of gay men meeting strangers for anonymous sex in public lavatories, gay bars and exclusively gay bathhouses (The Evolution of Human Sexuality: p293-4).

He then contrasts this hedonistic lifestyle with that of lesbians, whose romantic relationships typically mirror heterosexual relationships, being characterized by long-term pair bonds and monogamy.

This similarity between lesbian relationships and heterosexual coupling, and the stark contrast with rampant homosexual male promiscuity, suggests, Symons argues, that, contrary to feminist dogma, which asserts that it is men who both dictate and primarily benefit from the terms of heterosexual coupling, it is in fact women who dictate the terms of heterosexual coupling in accordance with their own interests and desires (The Evolution of Human Sexuality: p300).

Thus, as popular science writer Matt Ridley writes:

“Donald Symons… has argued that the reason male homosexuals on average have more sexual partners than male heterosexuals, and many more than female homosexuals, is that male homosexuals are acting out male tendencies or instincts unfettered by those of women” (The Red Queen: p176).

This is, of course, virtually exactly the same argument that Hakim is making, using exactly the same evidence, but Symons is nowhere cited in her book.

Hakim again echoes the work of Donald Symons in noting the absence of a market for pornography among women to mirror the extensive market for pornography produced for male consumers.

Thus, before the internet age, magazines featuring primarily nude pictures of women commanded sizable circulations despite the stigma attached to their purchase. In contrast, Hakim reports:

“The vast majority of male nude photography is produced by men for male viewers, often with a distinctly gay sensibility… Women should logically be the main audience for male nudes, but they display little interest. Most of the erotic magazines aimed at women in Europe have failed, and almost none of the photographers doing male nudes are women. The taste for erotica and pornography is typically a male interest, whether heterosexual or homosexual in character… The lack of female interest in male nudes (at least to the same level as men) demonstrates both lower female sexual interest and desire, and the higher erotic value of the female nude in almost all cultures – with a major exception being ancient Greece” (p71).

Yet here again Hakim directly echoes, but fails to cite, Donald Symons, who, in his seminal The Evolution of Human Sexuality, citing the Kinsey Reports, observed:

“Enormous numbers of photographs of nude females and magazines exhibiting nude or nearly nude females are produced for heterosexual men; photographs and magazines depicting nude males are produced for homosexual men, not for women” (The Evolution of Human Sexuality: p174).

This Symons calls “the natural experiment of commercial periodical publishing” (The Evolution of Human Sexuality: p182).

Similarly, just as Hakim notes that “the vast majority of male nude photography is produced by men for male viewers, often with a distinctly gay sensibility” (p71), so Symons three decades earlier concluded:

“That homosexual men are at least as likely as heterosexual men to be interested in pornography, cosmetic qualities and youth seems to me to imply that these interests are no more the result of advertising than adultery and alcohol consumption are the result of country and western music” (The Evolution of Human Sexuality: p304).

However, Symons’s pioneering book on the evolutionary psychology of human sexuality is not cited anywhere in Hakim’s book, and neither is it listed in her otherwise quite extensive bibliography.

Sex Surveys

Another odd omission concerns the numerous ‘sex surveys’ that Hakim extensively cites, which replicate the robust finding that men report more sexual partners over any given timespan than women do. Hakim never grapples with, and only once in passing alludes to, the obvious problem with this finding: since (homosexual encounters aside) every sexual encounter must involve both a male and a female, and since there are approximately equal numbers of males and females in the population as a whole (i.e. an equal sex ratio), men and women must in fact have roughly the same average number of sex partners over their lifetimes.[6]

Two explanations have been offered for this anomalous finding. Firstly, there may be a small number of highly promiscuous women – i.e. prostitutes – whom surveys generally fail to adequately sample (Brewer et al 2000).

Alternatively, it is suggested, not unreasonably, that respondents may be dishonest even in ostensibly anonymous surveys, especially when they deal with sensitive subjects such as a person’s sexual experience and behaviours.

Popular stereotype has it that it is men who lie in sex surveys in order to portray themselves as more promiscuous and hence ‘successful with women’ than they really are.

However, while this claim seems to be mostly conjecture, there is actual data showing that women are also dishonest in sex surveys, lying about their number of sex partners for precisely the opposite reason – namely to appear more innocent and chaste, or at least less rampantly slutty, than they really are, given the widespread demonization of promiscuity among women.

Thus, one interesting study found that women report relatively more sexual partners when they believe their answers are anonymous than when they believe their answers may be viewed by the experimenter, and more still when they believe that they are hooked up to a polygraph machine designed to detect any dishonest answers. Indeed, in the fake lie-detector condition, female respondents actually reported more sexual partners than did male respondents (Alexander and Fisher 2003).

A further factor may be that men and women define ‘sex’ differently, at least for the purposes of giving answers to sex surveys, perhaps exploiting the same sort of semantic ambiguities that Bill Clinton sought to exploit to evade perjury charges in relation to his claim not to have had ‘sexual relations’ with Monica Lewinsky.

Paternity Certainty, Mate Guarding and the Suppression of Female Sexuality

Hakim claims that men have suppressed women’s exploitation of their erotic capital because they are jealous of the fact that women have more of it and wish to prevent women from capitalizing on this advantage. Thus, she claims:

“Men have taken steps to prevent women exploiting their one major advantage over men, starting with the idea erotic capital is worthless anyway. Women who openly deploy their beauty or sex appeal are belittled as stupid, lacking in intellect and other ‘meaningful’ social attributes” (p75).

In particular, Hakim views so-called ‘sexual double-standards’ and the puritanical attitudes expressed by many religions (especially Christianity and Islam) as mechanisms by which men suppress female sexuality and thereby prevent women taking advantage of their greater levels of ‘erotic capital’ or sex appeal as compared to men.

Citing the work of female historian Gerda Lerner, Hakim claims that men established patriarchy and sought to control the sexuality of women so as to assure themselves of the paternity of their offspring:

“Patriarchal systems of control and authority were developed by men who wanted to be sure that their land and property, whatever they were, would be passed on to their own biological children” (p77).

However, she fails to explain the ultimate evolutionary reason why men would ever even be interested in, or care about, the paternity of the offspring who inherit their property.

Here, of course, evolutionary psychology provides a ready and compelling explanation.

Evolutionary psychologists contend that human males’ interest in the paternity of their putative offspring ultimately reflects the sociobiological imperative of maximizing their reproductive success by securing the passage of their genes into subsequent generations, and their concern that their parental investment not be maladaptively misdirected towards offspring fathered, not by themselves, but rather by a rival male.

Yet Hakim is evidently unaware of, or at least does not cite, the substantial scientific literature in evolutionary psychology on male sexual jealousy and mate guarding (e.g. Wilson & Daly 1992; Buss et al 1992).

Had Hakim familiarized herself with this literature, and the literature on mate guarding among non-human animals, she might have spared herself her next error. For on the very next page, citing another female historian, one Julia Stonehouse, Hakim purports to trace men’s efforts to control women’s sexuality back to the supposed discovery of the role of sex – and of men – in reproduction in 3000 BC (p78-9).

“At the beginning of civilization, from around 20000 BC to 8000 BC, there were no gods, only goddesses who had the magical power to give birth to new life quite independently… Men were seen to have no role at all in reproduction up to around 3000 BC… Theories of reproduction changed around 3000 BC – man was suddenly presented as sowing the ‘seed’ that was incubated by women to deliver the man’s child… Control of women’s sexuality started only when men believed they planted the unique seed that produces a baby” (p78-9).[7]

This would seem a very odd claim to anyone with a background in biology, especially in sociobiology, behavioural ecology and animal behaviour.

Hakim is apparently unaware that naturalists have long observed analogous patterns of what biologists call mate guarding among non-human species, who are, of course, surely not consciously (or even subconsciously) aware of the relationship between sexual intercourse and reproduction, but who have nevertheless been programmed by natural selection to engage in such mate-guarding behaviours simply because these behaviours maximized reproductive success, without any conscious awareness of their ultimate evolutionary function.

For example, analogous behaviours are observed among our closest extant nonhuman relatives, namely chimpanzees. Thus, Jane Goodall, in her seminal study of chimpanzee behaviour in the wild, describes how the dominant ‘alpha male’ within a troop of chimpanzees will attempt to prevent any males other than himself from mating with a fertile estrus female, though she acknowledges:

“The best that even a powerful alpha male can, realistically, hope to do is to ensure that most of the copulations around the time of ovulation are his” (The Chimpanzees of Gombe: p473).

In addition, she reports how even subordinate males sometimes successfully sequester fertile females into ‘consortships’, leading them, often forcibly, to a peripheral part of the group’s home range so as to monopolize sexual access to the female in question until her period of maximum fertility and sexual receptivity has passed (The Chimpanzees of Gombe: p453-465).

Such chimpanzee consortships sometimes involve force and coercion, but at other times seem to be largely consensual. We might therefore characterize them as the rough chimpanzee equivalent of something in between:

  1. Taking your wife or girlfriend away for a romantic weekend in Paris; or
  2. Kidnapping a teenage girl and keeping her locked in the basement as a sex slave.

Clearly, then, although chimpanzees are almost certainly unaware of the role of sexual intercourse, and of males, in reproduction, they nevertheless engage in mate-guarding behaviours simply because such behaviours tended to maximize their reproductive success in ancestral environments.

Indeed, more controversially, Goodall herself even tentatively proposes an analogy with human sexual jealousy, noting that:

“[Some] aggressive interventions [among chimpanzees] appear to be caused by feelings of sexual and social competitiveness which, if we were describing human behavior, we should label jealousy” (The Chimpanzees of Gombe: p326).

Thus, if our closest relatives among extant primates, along with humans themselves, evince something akin to sexual jealousy and male sexual proprietariness, then it is a fair bet that our common ancestor with chimpanzees did too, and hence that mate-guarding was also practised by our prehuman ancestors, and certainly predates 3000 BC, the oddly specific date posited by Hakim and Stonehouse.

Certainly, mate-guarding does not require, or presuppose, any conscious (or indeed subconscious) awareness of the role of sexual intercourse – or even of males – in reproduction.[8]

Who Is Responsible for the Stigmatization of Promiscuity?

As for Hakim’s claim that men have suppressed women’s exploitation of their erotic capital out of jealousy that women have more of it, this also seems very dubious.

Take, for example, the stigmatization of sex workers such as prostitutes, a topic to which Hakim herself devotes considerable attention. Hakim argues that this stigma results from men’s envy of women’s greater levels of erotic capital and their desire to prevent women from exploiting this advantage to the full.

Thus, she writes:

“The most powerful and effective weapon deployed by men to curtail women’s use of erotic capital is the stigmatization of women who sell sexual services” (p75).

Unfortunately, however, this theory is plainly contradicted by the observation that women are actually generally more censorious of promiscuity and prostitution than are men (Baumeister and Twenge 2002).

In contrast, men, for obvious reasons, rather enjoy the company of prostitutes and other promiscuous women – although it is true that, due to concerns regarding paternity certainty, they may not wish to marry them.

Hakim, for her part, acknowledges that:

“The stigma attached to selling sexual services in the Puritan Christian world… is so complete that women are just as likely as men to condemn prostitution and prostitutes. Sometimes women are even more hostile, and demand the eradication (or regulation) of the industry more fiercely than men, a pattern now encouraged by many feminists” (p76).

In an associated endnote, going further, she even concedes:

“In Sweden, the 1996 sex survey showed women objected to prostitutes twice as often as men: two fifths of women versus one fifth of men thought that both buyers and sellers should be treated as criminals” (p282).

Yet this pattern is by no means limited to Sweden, but rather appears to be universal. Thus, Baumeister and Twenge report:

“Women seem consistently more opposed than men to prostitution and pornography. Klassen, Williams, and Levitt (1989) reported the results of a survey asking whether prostitution is ‘always wrong’. A majority (69%) of women, but only a minority (45%) of men, were willing to condemn prostitution in such categorical terms. At the opposite extreme, about three times as many men (17%) as women (6%) responded that prostitution is not wrong at all” (Baumeister and Twenge 2002).

Indeed, men appear to be more liberal, permissive and tolerant, and women more censorious, in respect of virtually all aspects of sexual morality. Thus, women are much more likely than men to disapprove of pornography, promiscuity, prostitution, premarital sex, sex with robots and household appliances, and other such fun and healthy recreational activities (see Baumeister and Twenge 2002).[9]

Faced with this overwhelming evidence, Hakim is forced to acknowledge:

“If women in Northern Europe object to the commercial sex industry more strongly than men, this seems to destroy my argument that the stigmatization and criminalization of prostitution is promoted by patriarchal men” (p76).

However, Hakim has a ready, if not entirely convincing, response, maintaining that:

“Over time women have come to accept and actively support ideologies that constrain them” (p77).

And also that:

“Women have generally had the main responsibility for enforcing constraints but did not invent them” (p273, note 20).

However, this effectively reduces women to mindless puppets without agency of their own.

It also fails to explain why women are actually more puritanical than are men themselves.

Perhaps evil, devious, villainous, patriarchal men could somehow have manipulated women, against their own better interests, into being somewhat puritanical, or perhaps even as puritanical as men themselves. However, they are unlikely to have succeeded in manipulating women into becoming even more puritanical than the evil male geniuses supposedly doing the manipulating and persuading.

Hakim’s Mythical ‘Male Sex Right’

Hakim suggests that sexual morality reflects what she calls a “male sex right” (p82).

Thus, she argues that the moral opprobrium attaching to gold-diggers and prostitutes reflects the supposed patriarchal assumption that:

“Men should get what they want for free, especially sex” (p79).

“Men should not have to pay women for sexual favours or erotic entertainments [and] men should get what they want for free” (p98).

However, this theory is plainly contradicted by three incontestable facts.

First, promiscuous sex is stigmatized even where it does not involve payment. Thus, if prostitutes are indeed stigmatized, so are ‘sluts’ who engage in sex promiscuously but without any demand for payment.

Second, marriage is not condemned by moralists but rather held up as a moral ideal despite the fact that, as Hakim herself acknowledges, it usually involves a trade of sexual access in return for financial support – i.e. disguised (and overpriced) prostitution.

Third, far from advocating, as suggested by Hakim, that men should ‘get sex for free’, Christian moralists traditionally promoted abstinence and celibacy, especially before marriage, outside of marriage, and, for those held in highest regard by the church (i.e. nuns, monks and priests), permanently.[10]

In short, what is condemned by moralists seems to be the promiscuity itself, not the demand for payment.

After all, if there really were a “male sex right”, as contended by Hakim, then rape would presumably be, not a crime, but rather a basic, universal and inalienable human right!

Puritanism and Prudery as Price-fixing Among Prostitutes

A more plausible theory of the stigmatization of sex work might be sought, not in the absurd fallacies of feminism, but in the ‘dismal science’ of economics.

On this view, what is stigmatized is not the sale of sex itself, but rather its availability at too low a price.

Sex available at too low a price risks undercutting other women and driving down the prices that the latter can themselves hope to demand for sexual services.

On this view, if men can get bargain basement blowjobs outside of marriage or similar ‘committed’ relationships, then they will have no need to pursue such relationships and women will lose the economic security with which these relationships provide them.

Hakim claims that sexual morality reflects the assumption that:

“Men should get what they want for free, especially sex” (p79).

My own view is almost the opposite. Sexual morality reflects the assumption, not that men should be able to get sex for free, but rather that they should be obliged to pay a hefty price (e.g. the ultimate price – marriage), and certainly a lot more than is typically demanded by prostitutes.

Aside from myself, this view has been most comprehensively developed by psychologist Roy Baumeister and colleagues. Baumeister and Vohs (2006: p358) write:

“The so-called ‘cheap’ woman (the common use of this economic term does not strike us as accidental), who dispenses sexual favors more freely than the going rate, undermines the bargaining position of all other women in the community, and they become faced with the dilemma of either lowering their own expectations of what men will give them in exchange for sex or running the risk that their male suitors will abandon them in favor of other women who offer a better deal” (Baumeister and Vohs 2006: p358).

On this view, women’s efforts to prevent other women from capitalizing on their sex appeal are, as Baumeister and Vohs put it, analogous to:

“Other rational economic strategies, such as OPEC’s efforts to drive up the world price of oil by inducing member nations to restrict their production” (Baumeister and Vohs 2004: p357).

Interestingly, an identical analogy – between the supply of oil and of sex – had earlier been adopted by Warren Farrell in his excellent The Myth of Male Power (which I have reviewed here), where he wrote:

“In the Middle East, female sex and beauty are to Middle Eastern men what oil and gas are to Americans: the shorter the supply the higher the price. The more women ‘gave’ away sex for free, or for a small price, the more the value of every woman’s prize would be undermined, which is why anger toward prostitution, purdah violation (removing the veil), and pornography runs so deep, especially among women. It is also why parents told daughters, ‘Don’t be cheap.’ ‘Cheap’ sex floods the market” (The Myth of Male Power: p77).

This then explains why women are generally more puritanical and censorious of promiscuity, prostitution and pornography than are men.

It might also explain why feminism and puritanical anti-sex attitudes tend to go together.

Hakim herself insists that feminist campaigners against prostitution, pornography and other such fun and healthy recreational activities are the unwitting dupes of their patriarchal oppressors, having inadvertently internalized ‘patriarchal’ norms that demonize sex work and women’s legitimate exploitation of their erotic capital for financial gain.

In fact, however, the feminists are probably acting in their own selfish best interests by opposing such activities. As Donald Symons explains in his excellent The Evolution of Human Sexuality (which I have reviewed here):

“The gain in power to control heterosexual interaction that accompanies the reduction of sexual pleasure is probably one reason… that feminism and antisexuality often go together… As with more recent feminist movements the militant suffrage movement in England before World War I ‘never made sexual freedom a goal, and indeed the tone of its pronouncements was more likely to be puritanical and censorious on sexual matters than permissive: ‘Votes for women and chastity for men’ was one of Mrs Pankhurst’s slogans’… Much recent feminist writing about female sexuality… emphasize[s] masturbation and, not infrequently, lesbianism, which in some respects are politically equivalent to antisexuality” (The Evolution of Human Sexuality: p262).

However, if feminist prudery is rational in reflecting the interests of feminist prudes, it does not reflect the interests of women in general. Indeed, to represent the interests of women as a whole (as feminists typically purport to do) is almost impossible, because the interests of different women conflict, not least since women are in reproductive competition primarily with one another. Thus, Symons observes:

“Feminist prostitutes and many nonprostitute, heterosexual feminists are in direct competition, and it should be no surprise that they are often to be found at one another’s throats” (The Evolution of Human Sexuality: p260).

This, he explains, is because:

“To the extent that heterosexual men purchase the services of prostitutes and pornographic masturbation aids, the market for the sexual services of nonprostitute women is diminished and their bargaining position vis-à-vis men is weakened… The implicit belief of heterosexual feminists such as Brownmiller that, in the absence of prostitution and pornography, men will come to want the same kinds of heterosexual relationships that women want may be an attempt to underpin morally a political program whose primary goal is to improve the feminists’ own bargaining position” (The Evolution of Human Sexuality: p260).

Hakim does not really address this alternative and, in my view, far more plausible theory of the origins of, and rationale behind, sexual prudery and puritanism. Indeed, she does not even mention this alternative explanation for the stigmatization and criminalization of sex work anywhere in the main body of her text, instead only acknowledging its existence in two endnotes (p273 & p283).

In both endnotes, she gives the theory little real consideration, instead summarily and rather dismissively rejecting it. On the first occasion, she gives no real reason for rejecting the theory, merely commenting that, in her opinion, Baumeister and Twenge (2002), who champion it:

“Confuse distal and proximate causes, policy-making and policy implementation. Women generally have the main responsibility for enforcing constraints but do not invent them” (p273, note 20).

On the second occasion, she simply claims, in a single throwaway sentence:

“The trouble with this argument is of course that marital relationships are not comparable with casual relationships” (p283, note 8).

However, although this sentence includes the words “of course”, its conclusion is by no means self-evident, and Hakim provides no evidence in support of this conclusion in the endnote.

Admittedly, she does briefly expand upon the same idea at a different point in her text, where she similarly contends:

“The dividing line between the two markets [i.e. mating markets involving short-term relationships and long-term relationships] is sufficiently important for there to be little or no competition between the two markets” (p235).

This, however, seems doubtful. From a male perspective, both long-term and short-term relationships may serve identical ends – namely access to regular sex.[11]

Therefore, paying a prostitute may represent an (often cheaper) alternative to the time and expense of conventional courtship.

As Donald Symons puts it:

“The payment of money and the payment of commitment are not psychologically equivalent, but they may be economically equivalent in the heterosexual marketplace” (The Evolution of Human Sexuality: p260).

Indeed, conventional courtship often, indeed almost invariably, involves the payment of monies by the male partner (e.g. for dates).

Thus, as I have written previously:

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

Thus, according to Baumeister and Twenge:

“Just as any monopoly tends to oppose the appearance of low-priced substitutes that could undermine its market control, women will oppose various alternative outlets for male sexual gratification” (Baumeister and Twenge 2002: p172).

As explained by R. B. Tobias and Mary Marcy in their forgotten early twentieth century Marxist-masculist masterpiece, Women As Sex Vendors (which I have reviewed here and here), street prostitutes, especially those supporting a pimp, are stigmatized simply because:

“These women are selling below market or scabbing on the job” (Women As Sex Vendors: p29).

What’s that got to do with the Price of Prostitutes?

Particularly naïve, if not borderline economically illiterate, are Hakim’s conclusions regarding the likely effect of the decriminalization of prostitution on the prices prostitutes are able to demand for their services. Thus, she writes:

“The only realistic solution to the male sex deficit is the complete decriminalization of the sex industry. It should be allowed to flourish like other leisure industries. The imbalance in sexual interest would be resolved by the laws of supply and demand, as it is in other entertainments. Men would probably find they have to pay more than they are used to” (p98).

In fact, far from men “find[ing] they have to pay more than they are used to”, the usual consequence of the decriminalization of the sale of a commodity is a fall in the value of this commodity, not a rise.

This is because criminalization imposes additional costs on suppliers, not least the risk of prosecution. These added costs are almost invariably more than enough to offset both the absence of regulation and taxation and the reduced demand attendant to criminalization – the reduction in demand being relatively modest because consumers generally face a lesser risk of prosecution than do suppliers.[12]

Thus, when the Volstead Act came into force in 1920, banning the manufacture and sale of alcoholic beverages throughout the USA, the price of alcohol is said to have roughly tripled or even quadrupled.

Similarly, the legalization of marijuana in many US states seems to have been associated with a drop in its price, albeit not as great a fall as some opponents (and no few advocates!) of legalization apparently anticipated.

Indeed, later in her book, rather contradicting herself, Hakim admits:

“In countries where the [sex] trade is criminalized, such as the United States and Sweden, the local price of sexual services can be pushed higher, due to higher risks” (p165).

And also that:

“In countries where prostitution is criminalized, fees can sometimes be higher than in countries where it is legal, due to scarcity and higher risks” (p87).

In short, all the evidence suggests that, if prostitution were entirely decriminalized, or, better still, destigmatized as well, then, far from men “find[ing] they have to pay more than they are used to”, in fact the price of prostitutes would drop considerably.

Hakim writes:

“Women offering sexual services can earn anywhere between double and fifty times more than they could earn in ordinary jobs, especially jobs at a comparable level of education. This world of greater opportunity is something that men would prefer women not know about. This is the principal reason why providing sexual services is stigmatized… to ensure women never learn anything about it” (p229).

In reality, however, far from this being something that “men would prefer women not know about”, men would benefit if more women were aware of, and took advantage of, the high earnings available to them in the sex industry – because then more women would presumably enter this line of work and hence prices would be driven down by increased competition.

In addition, if more women worked in the sex industry, fewer would be competing for jobs with men in other industries.

In contrast, the main losers would be existing sex workers, who would find that they have to drop their prices in order to cope with increased competition from other service providers – and perhaps also women in pursuit of husbands, who would find that, with bargain basement blowjobs available from prossies, more and more men have little need to subject themselves to the inequities and indignities of marriage and conventional courtship – which, of course, offer huge economic benefits to women precisely because they are, compared to purchasing the services of prostitutes, such a bad deal for men.

Sexual Double-Standards Cut Both Ways

Arguing that the stigmatization of sex work is “the most powerful and effective weapon deployed by men to curtail women’s use of erotic capital”, Hakim points to the fact that this “stigma… never affects men who sell sex quite so much” as evidence that this stigma was invented by, and hence serves the interests of, evil male oppressors.

Thus, she contends:

“The patriarchal nature of… [negative] stereotypes [about sex workers] is exposed by quite different perceptions of men who sell sex: attitudes here are ambivalent, conflicted, unsure” (p76).

I would contend that there is a more convincing economic explanation as to why males providing sexual services are relatively less stigmatized – namely, gigolos and rent-boys, in offering their services to women and homosexual men, do not threaten to undercut the prices demanded by non-prostitute women on the hunt for husbands.

Indeed, the proof that there is nothing whatever patriarchal about these differing perceptions is provided by the fact that, in respect of long-term relationships, these ‘double-standards’ are reversed.

Thus, whereas ‘homemaker’ or ‘housewife’ is a respectable occupation for a woman, attitudes towards ‘househusbands’ who are financially dependent on their wives are – to adopt Hakim’s own phraseology – ‘ambivalent, conflicted, unsure’.

Meanwhile, men who are financially dependent on their partners and whose partners happen to work in the sex industry – i.e. pimps – are actually criminalized for their purportedly exploitative lifestyle.

However, the lifestyle of a pimp is actually directly analogous to that of a housewife/homemaker – both are economically dependent on their sexual partners and both are notorious for spending an exorbitant proportion of their sexual partner’s earnings on items such as clothing and jewellery.

Women’s Sexual Power – Innate or Justly Earned?

Hakim argues that exploitation of sex appeal for financial gain – e.g. working in the sex industry, marrying for money or flirting with the boss for promotions – ought to be regarded as a perfectly legitimate means of social, occupational and economic advancement.

In defending this proposition, she resorts to ad hominem, asserting (without citing data) that disapproval of the exploitation of erotic capital “almost invariably comes from people who are remarkably unattractive and socially clumsy” (p246).

I will not stoop to respond to this schoolyard-tier substitution of personal abuse for rational debate (roughly, ‘if you disagree with me it’s only because you’re ugly!’), save to comment that the important question is not whether such people are ugly – but rather whether they are right.

Defending women’s exploitation of the male sexual drive, Hakim protests:

“Apparently [it] is fine for men to exploit any advantage they have in wealth or status, but rules are invented to prevent women exploiting their advantage in erotic capital” (p149).

However, this ignores the fact that, whereas men’s greater earnings are a consequence of the fact that they work longer hours, for a greater proportion of their adult lives, in more dangerous and unpleasant working conditions, women’s greater level of sex appeal merely reflects their good fortune in being born female.

Yet Hakim denies erotic capital is “entirely inherited”, instead insisting:

“All aspects of erotic capital can be developed, just like intelligence”.[13]

However, no amount of make-up, howsoever skillfully applied, can disguise excessively irregular features and even expensive plastic surgery and silicone enhancements are recognized as inferior to the real thing.

Moreover, even Hakim would presumably be hard-pressed to deny that the huge advantages attendant on being born female are indeed “entirely inherited”. Indeed, even men who undergo costly gender reassignment surgery are rarely as attractive as even the average woman.

However, Hakim insists that:

“Women generally have higher erotic capital than men because they work harder at it” (p244).

Here, I suspect Hakim has her causation precisely backwards. In fact, women work harder at being attractive (e.g. applying makeup, spending copious amounts of money on clothes, jewelry etc.) precisely because they rightly realize that good looks have bigger pay-offs for women than for men.

Indeed, Hakim herself admits:

“Even if men and women had identical levels of erotic capital, the male sex deficit automatically gives women the upper-hand in private relationships” (p244).[14]

A Darwinian perspective suggests that both women’s greater erotic capital and the male sex deficit result ultimately from the fact that females biologically make a greater investment in offspring and therefore represent the limiting factor in mammalian reproduction.

In short, no amount of hard work will grant to men the sexual power conferred upon women simply by virtue of their fortune in being born as a member of the privileged sex.

Disadvantage, Discrimination and Double-Standards

Given that she believes erotic capital can be enhanced through the investment of time and effort, Hakim denies that the advantages accruing to attractive people are in any way unfair or discriminatory. Similarly, she does not regard the advantages accruing to women on account of their greater erotic capital – such as their greater ability to ‘marry up’ (‘hypergamy’) or earn lucrative salaries in the sex industry – as unfair.

However, oddly, Hakim is all too ready to invoke the malign spectre of ‘discrimination’ on those rare occasions where inequality of outcome seemingly benefits men over women.

Thus, Hakim gripes that:

“The entertainment industry… currently recognizes and rewards erotic capital more than any other industry. However, here too there is an unfair bias against women that leads to lower rewards for higher levels of erotic capital than are observed for men. In Hollywood, male stars earn more than female stars, even though female stars do the same work, but going ‘backwards and in high heels’” (p231).

Oddly, however, Hakim neglects to observe that in Hollywood’s next door neighbour, the pornographic industry, female performers earn more than men and the disparity is much greater and affects all performers, not just A-list stars.

This is despite the fact that, in the very same paragraph quoted above, she acknowledges in parentheses that the “entertainment industry… includes the commercial sex industry” (p231).

Neither does Hakim note that, as discussed by Warren Farrell in Why Men Earn More (reviewed here):

“Top women models earn about five times more, that is, about 400% more, than their male ‘equivalent’. Put another way, men models earn about 20% of the pay for the same work” (Why Men Earn More: p97-8).

Hakim rightly decries the fact that:

“The concept of discrimination is too readily applied in situations where there is differential treatment or outcomes. In many cases, there are simple explanations for such outcomes that do not involve unfair favoritism or intentional bias” (p131-2).

Yet, oddly, despite this wise counsel, Hakim fails to follow her own advice, being all too ready to invoke discrimination as an explanation, especially malign patriarchal discrimination, wherever she finds women at a seeming disadvantage.

For example, many studies find that more physically attractive people earn somewhat higher salaries, on average, than do relatively less attractive people (e.g. Scholz & Sicinski 2015).

However, perhaps surprisingly, the wage premium associated with good looks is generally found to be somewhat greater for males than for females (e.g. Frieze, Olson & Russell 1991).[15]

This is, for Hakim, a form of “hidden sex discrimination” (p194). Thus, she protests:

“Attractive men receive a larger beauty premium than do women. This is clear evidence of sex discrimination, especially as all studies show women score higher than men on attractiveness scales” (p246).

At first glance, it may indeed seem anomalous that the wage premium associated with physical attractiveness is rather greater for men than for women. However, rather than rushing to invoke the malign spectre of sexual discrimination, a simpler explanation is readily at hand.

Perhaps relatively more attractive women simply reduce their efforts in the workplace because other means of social advancement are opened up to them by virtue of their physical attractiveness – not least marriage.

After all, as Hakim herself emphasizes elsewhere in her book:

“The marriage market remains an avenue for upward social mobility long after the equal opportunities revolution opened up the labor market to women. All the evidence suggests that both routes can be equally important paths to social status and wealth for women in modern societies” (p142).

Therefore, rather than expending effort to advance herself through her career, a young woman, especially an attractive young woman, may instead focus her attention on marriage as a form of advancement. As the redoubtable HL Mencken put it in his book In Defense of Women:

“The time is too short and the incentive too feeble. Before the woman employee of twenty-one can master a tenth of the idiotic ‘knowledge’ in the head of the male clerk of thirty, or even convince herself that it is worth mastering, she has married the head of the establishment or maybe the clerk himself, and so abandons the business” (In Defense of Women: p70).

Or, as Matthew Fitzgerald puts it in his delightfully subtitled Sex-ploytation: How Women Use Their Bodies to Extort Money From Men:

“It takes far less effort to warm the bed of a millionaire than to earn a million dollars yourself” (Sex-ploytation: p10).

In short, why work for money when you have the easier option of marrying it instead?

Moreover, evidence suggests that relatively more physically attractive women are indeed able to marry men with higher levels of income and accumulated capital than are relatively less physically attractive women (Elder 1969; Hamermesh and Biddle 1994; Udry & Eckland 1984).

Indeed, some of the same studies that show the lesser benefits of attractiveness for women in terms of earnings and occupational advancement also show greater benefits for women in terms of marriage prospects (e.g Elder 1969; Udry & Eckland 1984).

Thus, psychologist Nancy Etcoff writes, in her book Survival of the Prettiest (which I have reviewed here):

“The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (Survival of the Prettiest: p65).

Yet, in stark contrast, as even Hakim herself acknowledges, ‘marrying up’ is not an option for even the handsomest of males simply because:

“Even highly educated women with good salaries seek affluent and successful partners and refuse to contemplate marrying down to a lower-income man (unlike men)… Even today, most women admit that their goal was always to marry a higher-earning man, and most achieve their goal” (p141).[16]

In short, it seems that Hakim regards any advantage accruing to women on account of their greater erotic capital as natural and legitimate, not to mention fair game for women to exploit to the full and at the expense of men.

However, in those rare instances where sexual attractiveness seemingly benefits men more than it does women, this advantage is then necessarily attributed by Hakim to a “hidden sex discrimination” and hence viewed as inherently malign.

Are Women Wealthier Than Men?

Hakim claims that the importance of what she calls erotic capital has been ignored or overlooked due to what she claims is “the patriarchal bias in social science” (p75).

As anyone remotely familiar with the current state of the social sciences will be all too aware, there is little evidence of any “patriarchal bias in social science”. On the contrary, for over half a century at least, the social sciences have been heavily infested with feminism.

My own view is almost the opposite of Hakim’s – namely, it is not “patriarchal bias”, but rather feminist bias that has led social scientists to ignore the importance of sexual attractiveness in social and economic relations – because feminists, in their efforts to portray women as a ‘disadvantaged and oppressed’ group, have felt the need to ignore or downplay women’s sexual power over men.

In fact, although Hakim accuses them of being unwitting agents of patriarchy, feminists have probably been wise to play down women’s sexual power over men – because once this power is admitted, the fundamental underlying premise of feminism, namely that women represent an oppressed group, is exposed as fallacious.

Indeed, much of the data reviewed by Hakim herself inadvertently proves precisely this.

For example, Hakim observes that:

“The marriage market remains an avenue for upward social mobility long after the equal opportunities revolution opened up the labour market to women. All the evidence suggests that both routes can be equally important paths to wealth for women in modern societies” (p142).

As a consequence, Hakim notes that:

“There are more female than male millionaires in a modern country such as Britain. Normally, men can only make their fortune through their jobs and businesses. Women achieve the same wealthy lifestyle and social advantages through marriage as well as through career success” (p24).

“There are more female than male millionaires in Britain. Some women get rich through their own efforts, while others are wealthy widows and divorcées who married well” (p142).

Here, though, I suspect Hakim actually downplays the extent of the gender differential. Certainly, she is right in observing that “normally, men can only make their fortune through their jobs and businesses” and hence that:

“Handsome men who marry into money are still rare compared to the numbers of beautiful women who do this” (p24).

However, while she is right that “some women get rich through their own efforts, while others are wealthy widows and divorcées who married well”, I suspect she is exaggerating when she claims “both routes can be equally important paths to wealth for women in modern societies”.

In fact, while many women become rich through marriage or inheritance, self-made millionaires seem to be overwhelmingly male.

Thus, most self-made millionaires make their fortunes through business and investment. However, as Warren Farrell observes in his excellent Why Men Earn More (reviewed here and here), whereas feminists blame the lower average earnings of women as compared to men on discrimination by employers, in fact, among the self-employed and business owners, where discrimination by employers is not a factor, the disparity in earnings between men and women is even greater than among employees.

Thus, Farrell reports:

“When there was no boss to ‘hold women back’, women who owned their own businesses netted, at the time (1970s through 1990s) between 29% and 35% of what men netted; today, women who own their own businesses net only 49% of their male counterparts’ net earnings” (Why Men Earn More: pxx).

On the other hand, focussing on the ultra-rich, in the latest 2023 Forbes 400 list of the richest Americans, there are only sixty women, just fifteen percent of the total, of whom only twelve (i.e. just twenty percent) are, Forbes magazine reports, ‘self-made’, in contrast to fully seventy percent of the men in the list.

None of the six richest women on the list seem to have played any part in accumulating their own wealth, each either inheriting it from a deceased father or husband, or expropriating it from their husbands in the divorce courts.[17]

As Ernest Belfort Bax wrote over a century ago, in collaboration with an anonymous Irish jurist, in The Legal Subjection of Men (which I have reviewed here):

“The bulk of women’s property, in 99 out of every 100 cases, is not earned by them at all. It arises from gift or inheritance from parents, relatives, or even the despised husband. Whenever there is any earning in the matter it is notoriously earning by some mere man or other. Nevertheless, under the operation of the law, property is steadily being concentrated into women’s hands” (The Legal Subjection of Men: p9).

This, of course, suggests that it is men rather than women who should be campaigning for ‘equal opportunity’, because, whereas most traditionally male careers are now open to both sexes, the opportunity to advance oneself through marriage remains almost the exclusive preserve of women, since, as Hakim herself acknowledges:

“Even highly educated women with good salaries seek affluent and successful partners and refuse to contemplate marrying down to a lower-income man (unlike men)” (p141).

Women also have other career opportunities available to them that are largely closed to men, or at least to heterosexual men – namely, careers in the sex industry.

Yet such careers can be highly lucrative. Thus, Hakim herself reports that:

“Women offering sexual services can earn anywhere between twice and fifty times what they could earn in ordinary jobs, especially jobs at a comparable level of education” (p229).

Yet men are not only denied these easy and lucrative means of financial enrichment but are also driven by what Hakim calls the ‘male sex deficit’ to spend a large portion of whatever wealth they can acquire attempting to buy the sexual services and affection of women, whether through paying sex workers or through conventional courtship.

Thus, as I have written previously:

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

As a consequence, despite working fewer hours, for a lesser proportion of their adult lives in safer and more pleasant working environments, women are estimated by researchers in the marketing industry to control around 80% of consumer spending.

Yet Hakim goes even further, arguing that both what she calls the ‘male sex deficit’ and the greater levels of erotic capital possessed by women place women at an advantage over men in all their interactions with one another, on account of what she refers to as ‘the principle of least interest’.

In other words, since men want sex with women more than women want sex with men, all else being equal, women almost always have the upper-hand in their relationships with men.[18]

Indeed, Hakim goes so far as to claim that men are condemned to a:

“Semi-permanent state of sexual desire and frustration… Suppressed and unfulfilled desires permeate all of men’s interactions with women” (p228).

Yet, here, Hakim surely exaggerates.

Indeed, to take Hakim’s words literally, one would almost be led to believe that men walk around with permanent erections.

I doubt any man is ever really consumed with overwhelming “suppressed and unfulfilled desires” when conversing with, say, the average fat middle-aged woman in the contemporary west. Indeed, even when engaging in polite pleasantries, routine conversation, or even mild flirtation with genuinely attractive young women, most men are capable of maintaining their composure without visibly salivating or contemplating rape.

Yet, for all her absurd exaggeration, Hakim does have a point. Indeed, she calls to mind Camille Paglia’s memorable and characteristically insightful description of men as:

“Sexual exiles… [who] wander the earth seeking satisfaction, craving and despising, never content. There is nothing in that anguished motion for women to envy” (Sexual Personae: p19).

Therefore, Hakim is right to claim that, by virtue of the ‘the principle of least interest’, women generally have the upper-hand in interactions with men.

Indeed, her conclusions are dramatic and, though she seemingly does not fully appreciate their implications, actually directly contradict and undercut the underlying premise of feminism – namely, that women are disadvantaged as compared to men.[19]

Thus, she observes that:

“At the national level, men may have more power than women as a group – they run governments, international organizations, the biggest corporations and trade unions. However, this does not automatically translate into men having more power at the personal level. At this level, erotic capital and sexuality are just as important as education, earnings and social networks… Fertility further enhances women’s power” (p245).

On the contrary, she concludes:

In societies where men retain power at the national level, it is entirely feasible for women to have greater power… for private relationships” (p245).

Yet women’s power over their husbands, and women’s sexual power over men in general, also confers upon women both huge economic power and even indirect political power, especially given that men, including powerful men, have a disposition to behave chivalrously and protectively towards women.

Thus, one is reminded of Arthur Schopenhauer’s observation, in his brilliant, celebrated and infinitely insightful essay On Women, of how:

Man strives in everything for a direct domination over things, either by comprehending or by subduing them. But woman is everywhere and always relegated to a merely indirect domination, which is achieved by means of man, who is consequently the only thing she has to dominate directly” (Schopenhauer, On Women).

Indeed, in this light, we might do no better than to contemplate, in relation to our own cultures, the question Aristotle posed of the ancient Spartans over two thousand years ago:

What difference does it make whether women rule, or the rulers are ruled by women?” (Aristotle, Politics II).

References

Alexander & Fisher (2003) Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality, Journal of Sex Research 40(1): 27-35.
Bateman (1948) Intra-sexual selection in Drosophila, Heredity 2(3): 349-368.
Baumeister & Vohs (2004) Sexual Economics: Sex as Female Resource for Social Exchange in Heterosexual Interactions, Personality and Social Psychology Review 8(4): 339-363.
Baumeister & Twenge (2002) Cultural Suppression of Female Sexuality, Review of General Psychology 6(2): 166-203.
Brewer, Garrett, Muth & Kasprzyk (2000) Prostitution and the sex discrepancy in reported number of sexual partners, Proceedings of the National Academy of Sciences USA 97(22): 12385-12388.
Buss (1989) Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures, Behavioral and Brain Sciences 12(1): 1-14.
Buss, Larsen, Westen & Semmelroth (1992) Sex Differences in Jealousy: Evolution, Physiology, and Psychology, Psychological Science 3(4): 251-255.
Elder (1969) Appearance and education in marriage mobility, American Sociological Review 34: 519-533.
Frieze, Olson & Russell (1991) Attractiveness and Income for Men and Women in Management, Journal of Applied Social Psychology 21(13): 1039-1057.
Hamermesh & Biddle (1994) Beauty and the labor market, American Economic Review 84: 1174-1194.
Kanazawa (2011) Intelligence and physical attractiveness, Intelligence 39(1): 7-14.
Kanazawa & Still (2018) Is there really a beauty premium or an ugliness penalty on earnings? Journal of Business and Psychology 33: 249-262.
Scholz & Sicinski (2015) Facial Attractiveness and Lifetime Earnings: Evidence from a Cohort Study, Review of Economics and Statistics 97(1): 14-28.
Trivers (1972) Parental investment and sexual selection. In B. Campbell (Ed.) Sexual Selection and the Descent of Man, 1871-1971 (pp 136-179). Chicago: Aldine.
Udry & Eckland (1984) Benefits of being attractive: Differential payoffs for men and women, Psychological Reports 54: 47-56.
Wilson & Daly (1992) The man who mistook his wife for a chattel. In: Barkow, Cosmides & Tooby (eds) The Adapted Mind, New York: Oxford University Press: 289-322.


[1] Both editions appear to be largely identical in their contents, though I do recall noticing a few minor differences. Page numbers cited in the current review refer to the former edition, namely Honey Money: The Power of Erotic Capital, published in 2011 by Allen Lane, which is the edition reviewed in this post.

[2] One is inevitably reminded here of Richard Dawkins’s ‘First Law of the Conservation of Difficulty’, whereby Dawkins not inaccurately observes that ‘obscurantism in an academic subject is said to expand to fill the vacuum of its intrinsic simplicity’.

[3] In this context, it is interesting to note that Arnold Schwarzenegger and other bodybuilders with extremely muscular physiques do not seem to be generally regarded as especially handsome and attractive by women. Anecdotally, women seem to prefer men of a more lean and athletic physique, in preference to the almost comically exaggerated musculature of most modern bodybuilders. As Nancy Etcoff puts it in Survival of the Prettiest (reviewed here), women seem to prefer:

Men [who] look masculine but not exaggeratedly masculine” (Survival of the Prettiest: p159).

In writing this, Etcoff seemed to have in mind primarily male facial attractiveness. However, it seems to apply equally to male musculature. For more detailed discussion on this topic, see here.

[4] Although I here attribute beautiful women’s unpopularity among other women to jealousy on the part of the latter, there are other possible explanations for this phenomenon. As I discuss in my review of Etcoff’s book (available here), another possibility is that beautiful women are indeed simply less likeable in terms of their personality. Perhaps, having grown accustomed to being fawned over and receiving special privileges on account of their looks, especially from men, they gradually become, over time, entitled and spoilt, something that is especially apparent to other women, who are immune to their physical charms.

[5] Hakim mentions evolutionary psychology as an approach, to my recollection, only once, in passing, in the main body of her text. Here, she associates the approach with ‘essentialism’, a scare-word, and straw man, employed by social scientists to refer to biological theories of sex and race differences, which Hakim herself defines as referring to “a specific outdated theory that there are important and unalterable biological differences between men and women”, as indeed there undoubtedly are (p88).
Evolutionary psychology as an approach is also mentioned, again in passing, in one of Hakim’s endnotes (p320, note 22). As mentioned above, Hakim also cites several studies conducted by evolutionary psychologists to test specifically evolutionary hypotheses (e.g. Kanazawa 2011; Buss 1989). Therefore, it cannot be that Hakim is simply unaware of this active research programme and theoretical approach.
Rather, it appears that she either does not understand how Bateman’s principle both anticipates, and provides a compelling explanation for, the phenomena she purports to uncover (namely, the ‘male sex deficit’ and the greater ‘erotic capital’ of women); or that she disingenuously decided not to discuss evolutionary psychology and sociobiology precisely because she recognizes the extent to which it deprives her own theory of its claims to originality.

[6] Actually, due to greater male mortality and the longer average lifespan of women, there are somewhat more women than men in the adult population. However, this is not sufficient to account for the disparity in the number of sex partners reported in sex surveys, especially since the disparity becomes more pronounced only in older cohorts, who tend to be less sexually active. Indeed, since female fertility is more tightly constrained by age than is male fertility, the operational sex ratio may actually reveal a relative deficit of fertile females.

[7] Before the role of men in impregnating women was discovered, and in those premodern societies where “this idea never emerged”, there was, Hakim reports, ‘free love’ and rampant promiscuity, sexual jealousy presumably being unknown (p79). Of course, we have heard these sorts of ideas before, not least in the discredited Marxian concept of ‘primitive communism’ and in Margaret Mead’s famous study of adolescence in Samoa. Unfortunately, however, Mead’s claims have been thoroughly debunked, at least with regard to Samoan culture. Indeed, it is notable that, among the examples of premodern cultures supposedly practising ‘free love’ that are cited by Hakim, Samoa is conspicuously absent.

[8] This error is analogous to the so-called ‘Sahlins fallacy’, so christened by Richard Dawkins in his paper ‘Twelve misunderstandings of kin selection’, whereby celebrated cultural anthropologist (and left-wing political activist) Marshall Sahlins, in his book The Use and Abuse of Biology (reviewed here), assumed that, for humans, or other animals, to direct altruism towards biological relatives proportionate to their degree of relatedness as envisaged by kin selection and inclusive fitness theory, they must necessarily understand the mathematical concept of fractions.

[9] Only in respect of homosexuality, especially male homosexuality, are these attitudes oddly reversed. Here, women are more accepting and tolerant, whereas men are much more likely to disapprove of and indeed be repulsed by the idea of male homosexuality in particular (though heterosexual men often find the idea of lesbian sex arousing, at least until they witness for themselves what most real lesbian women actually look like).

[10] Thus, Hakim herself observes that, under Christian morality:

Celibacy was praised as admirable, then enforced on Catholic priests, monks, and nuns” (p80)

[11] If long-term and short-term sexual relationships both serve similar functions for men – namely, as a means of obtaining regular sexual intercourse – perhaps women do indeed conceive of such relationships as representing entirely separate marketplaces, since, unlike for heterosexual men, short-term commitment-free sex is much easier for women to obtain than is a long-term relationship. This might then explain Hakim’s assumption that the two markets are entirely separate, since, as a woman herself, this is how she has always perceived it.
However, I suspect that, even for women, the two spheres are not entirely conceptually separate. For example, women sometimes enter short-term commitment-free sexual relationships with men, especially high-status men, in the hope that such a relationship might later develop into a long-term romantic relationship.

[12] Besides the risk of criminal prosecution, the costs for suppliers associated with criminalization include their inability to resort to legal mechanisms either for protection or to enforce contracts. This is among the reasons that, in many jurisdictions where prostitution is criminalized, both prostitutes and their clients are at considerable risk of violence, including extortion, blackmail, rape and robbery. It is also why suppliers often turn instead to other means of protection, providing an opening for organized crime elements.

[13] In fact, it is a fallacy to suggest that because something can be enhanced or improved by “time and effort”, this means it is not “entirely inherited”, since the tendency to successfully devote “time and effort” to self-improvement is at least partly a heritable aspect of personality, associated with the personality factor identified by psychometricians as conscientiousness. Behavioural dispositions are, in principle, no less heritable than morphology.

[14] This, of course, implies that the greater female level of ‘erotic capital’ is separable from the ‘male sex deficit’, when, in reality, as I have already discussed, the ‘male sex deficit’ provides an obvious explanation for why women have greater sex appeal, since, as Hakim herself acknowledges:

It is impossible to separate women’s erotic capital, which provokes men’s desire… from male desire itself” (p97).

[15] Although there is a robust and well-established correlation between attractiveness and earnings, this does not necessarily prove that it is attractiveness itself that causes attractive people to earn more. In particular, Kanazawa and Still argue that more attractive people also tend to be more intelligent, and to have other personality traits which are themselves associated with higher earnings (Kanazawa and Still 2018).

[16] Indeed, more affluent women are actually even more selective regarding the socio-economic status that they demand in a prospective partner, preferring partners who are even higher in socioeconomic status than they are themselves (Wiederman & Allgeier 1992; Townsend 1989).
This, of course, contradicts the feminist claim that women only aspire to marry up because, due to supposed discrimination, ‘patriarchy’, male privilege and other feminist myths, women lack the means to advance in social status through occupational means.
In fact, the evidence implies that the feminists have their causation exactly backwards. Rather than women looking to marriage for social advancement because they lack the means to achieve wealth through their careers due to discrimination, instead the better view is that women do not expend great effort in seeking to advance themselves through their careers precisely because they have the easier option of achieving wealth and privilege by simply marrying into it.
Unfortunately, the fact that even women with high salaries and of high socioeconomic status insist on marrying men of similarly high, or preferably even higher, socioeconomic status than themselves means that feminist efforts to increase the number of women in high-status occupations, including by methods such as affirmative action and other forms of overt and covert discrimination against men, also have the secondary effect of reducing rates of marriage, and hence of fertility. After all, the higher the socioeconomic status and earnings of women, the fewer men there are of the same or higher socioeconomic status for them to marry, particularly as high-status, high-income occupations are themselves increasingly occupied by other women. This may be one major causal factor underlying one of the leading problems facing developed economies today, namely their failure to reproduce at replacement levels, and is one of many reasons we must stridently oppose such feminist policies.

[17] Of course, being ‘self-made’ is a matter of degree. Among the six richest women in America listed by Forbes, the only ambiguous case, who might have some claim (albeit a very weak one) to having herself earned some small part of her own fortune, rather than merely inherited it, is the sixth richest, Abigail Johnson, who is currently CEO of the company established by her grandfather and formerly run by her father. Although she certainly did not build her own fortune, but rather very much inherited it, she has nevertheless been involved in running the family business that she inherited. The five richest women in America, in contrast, have no claim whatsoever to having earned their own fortunes. On the contrary, all seemingly inherited their wealth from male relatives (e.g. husbands, fathers), except for the former wife of Jeff Bezos, who instead expropriated the monies of her husband through divorce. According to Forbes, the richest ‘self-made’ woman on the list is the seventh richest woman in America, and the thirty-eighth richest person overall, Diane Hendricks. However, since she founded the company upon which her fortune is built with her then-husband, it is reasonable to suppose, given the rarity of ‘self-made’ female millionaires, that he in fact played the decisive role in establishing the family’s wealth.

[18] Actually, however, the situation is more complex. While men certainly want sex more than women do, especially promiscuous sex outside a committed relationship, women surely have a greater desire for long-term, committed, romantic relationships than men do. This complicates the calculus with respect to who has the least interest in a given relationship.
On the other hand, however, the reason why women have a strong desire for long-term committed romantic relationships is, at least in part, the financial benefits and security with which such relationships typically provide them. These one-sided benefits are, of course, further evidence that women do indeed have the upper-hand in their relationships with men, even, perhaps especially, in long-term committed relationships.
Men can, admittedly, also obtain sex outside of committed relationships, not least through prostitutes. Yet the very fact that heterosexual prostitution almost invariably involves the man paying the woman for sex, rather than vice versa, is, of course, further proof that women do indeed have the upper-hand, on account of ‘the principle of least interest’.

[19] A full understanding of the extent to which women’s sexual power over men confers upon them an economically privileged position is provided by several works pre-dating Hakim’s own, namely Esther Vilar’s The Manipulated Man (which I have reviewed here), Matthew Fitzgerald’s delightfully subtitled Sex-Ploytation: How Women Use Their Bodies to Extort Money from Men, Tobias and Mary Marcy’s forgotten early twentieth-century Marxist-masculist masterpiece Women As Sex Vendors (which I have reviewed here) and Warren Farrell’s The Myth of Male Power (which I have reviewed here and here).

Mental Illness, Medicine, Malingering and Morality: The Myth of Mental Illness vs The Myth of Free Will

Thomas Szasz, Psychiatry: The Science of Lies (Syracuse, NY: Syracuse University Press, 2008)

The notion that psychiatric conditions, including schizophrenia, ADHD, depression, alcoholism and gambling addiction, are all illnesses ‘just like any other disease’ (i.e. just like smallpox, malaria or the flu) is obvious nonsense. 

Just as political pressure led to the reclassification of homosexuality as, not a mental illness, but a normal variation of human sexuality, so a similar campaign is currently underway in respect of gender dysphoria. Today, if someone is under the delusion that they are a member of the opposite sex, we pander to the delusion and provide them with hormone therapy, hormone blockers and sex reassignment surgery. It is as if, where a patient suffered from the delusion that he was Napoleon, instead of treating him for this delusion, we provided him with legions of troops with which to invade Prussia.

If indeed these conditions are to be called ‘diseases’, which, of course, depends on how we define ‘disease’, they are clearly diseases very much unlike the infections of pathogens with which we usually associate the word ‘disease’. 

For this reason, I had long meant to read the work of Thomas Szasz, a psychiatrist whose famous (or perhaps infamous) paper, The Myth of Mental Illness (Szasz 1960), and book of the same title, questioned the concept of mental illness and, in the process, rocked the very foundations of psychiatry when first published in the 1960s. I was moreover, as the preceding two paragraphs would suggest, in principle open, even sympathetic, to what I understood to be its central thesis. 

Eventually, I got around to reading instead Psychiatry: The Science of Lies, a more recent, and hence, I not unreasonably imagined, more up-to-date, work of Szasz’s on the same topic.[1]

I found that Szasz does indeed marshal many powerful arguments against what is sometimes called the ‘disease model’ of mental health.

Unfortunately, however, the paradigm with which he proposes to replace this model, namely a moralistic one based on the notion of ‘malingering’ and the concept of free will, is even more problematic, and less scientific, than the disease model that he proposes to do away with.  

Physiological Basis of Illness 

For Szasz, mental illness is simply a metaphor that has come to be taken altogether too literally. 

Mental illness is a metaphorical disease; that, in other words, bodily illness stands in the same relation to mental illness as a defective television stands to an objectionable television programme. To be sure, the word ‘sick’ is often used metaphorically… but only when we call minds ‘sick’ do we systematically mistake metaphor for fact; and send a doctor to ‘cure’ the ‘illness’. It’s as if a television viewer were to send for a TV repairman because he disapproves of the programme he is watching” (Myth of Mental Illness: p11). 

But what is a disease? What we habitually refer to as diseases are actually quite diverse in aetiology. 

Perhaps the paradigmatic disease is an infection. Thus, modern medicine began with, and much of modern medicine is still based on, the so-called ‘germ theory of disease’, which assumes that what we refer to as disease is caused by the effects of germs or ‘pathogens’ – i.e. microscopic parasites (e.g. bacteria, viruses), which inhabit and pass between human and animal hosts, causing the symptoms by which disease is diagnosed as part of their own life-cycle and evolutionary strategy.[2]

However, this model seemingly has little to offer psychiatry. 

Perhaps some mental illnesses are indeed caused by infections. 

Indeed, physicist-turned-anthropologist Gregory Cochran even controversially contends that homosexuality (which is not now considered by psychiatrists as a mental illness, despite its obviously biologically maladaptive effects – see below) may be caused by a virus

However, this is surely not true of the vast majority of what we term ‘mental illnesses’. 

However, not all physical diseases are caused by pathogens either. 

For example, developmental disorders and inherited conditions are also sometimes referred to as diseases, but these are caused by genes rather than germs

Likewise, cancer is sometimes called a disease, yet, while some cancers are indeed sometimes caused by an infection (for example, cervical cancer is usually caused by HPV, a sexually transmitted virus), many are not. 

What then do all these examples of ‘disease’ have in common and how, according to Szasz, do so-called mental illnesses differ from conventional, bodily ailments?

For Szasz, the key distinguishing factor is an identified underlying physiological cause for, or at least correlate of, the symptoms observed. Thus, Szasz writes: 

The traditional medical criterion for distinguishing the genuine from the facsimile – that is, real illness from malingering – was the presence of demonstrable change in bodily structure as revealed by means of clinical examination of the patient, laboratory tests on bodily fluids, or post-mortem study of the cadaver” (Myth of Mental Illness: p27).

Thus, in all cases of what Szasz regards as ‘real’ disease, a real physiological correlate of some sort has been discovered, whether a microbe, a gene or a cancerous growth. 

In contrast, so-called mental illnesses were first identified, and named, purely on the basis of their symptomology, without any understanding of their underlying physiological cause. 

Of course, many diseases, including physical diseases, are, in practice, diagnosed by the symptoms they produce. A GP, for example, will typically diagnose flu without actually observing and identifying the flu virus itself inside the patient under a microscope. 

However, the existence of the virus, and its causal role in producing the symptoms observed, has indeed been demonstrated scientifically in other individuals afflicted with the same or similar symptoms. We therefore recognise the underlying cause of these symptoms (i.e. the virus) independently from the symptoms they produce. 

This is not true, however, for mental illnesses. The latter were named, identified and diagnosed long before there was any understanding of their underlying physiological basis. 

Rather than diseases, we might then more accurately call them syndromes, a word deriving from the Greek ‘σύνδρομον’, meaning ‘concurrence’, which is usually employed in medicine to refer simply to a cluster of signs and symptoms that seem to correlate together, whether or not the underlying cause is or is not understood.[3]

Causes and Correlates 

The main problem for Szasz’s position is that our understanding of the underlying physiological causes of psychiatric conditions – neurological, genetic and hormonal – has progressed enormously since he first authored The Myth of Mental Illness, the paper and the book, at the beginning of the 1960s. 

Yet reading ‘Psychiatry: The Science of Lies’, published in 2008, it seems that Szasz’s own position has advanced but little.[4]

Psychiatry, and psychology more broadly, have come a long way in the intervening half-century.

Thus, in 1960, American psychiatry was still largely dominated by Freudian psychoanalysis, a pseudoscience roughly on a par with phrenology, of which Szasz is rightly dismissive.[5]

Of particular relevance to Szasz’s thesis, the study of the underlying physiological basis for psychiatric disorders has also progressed massively.  

Every month, in a wide array of scientific journals, studies are published identifying neurological, genetic, hormonal and other physiological correlates for psychiatric conditions. 

In contrast, Szasz, although he never spells this out, seems to subscribe to an implicit Cartesian dualism, whereby human emotions, psychological states and behaviour are a priori assumed, in principle, to be irreducible to mere physiological processes.[6]

Szasz claims in Psychiatry: The Science of Lies that, once an underlying neurological basis for a mental illness has been identified, it ceases to be classified as a mental illness, and is instead classed as a neurological disorder. His paradigmatic example of this is Alzheimer’s disease (p2).[7]

Yet, today, the neurological correlates of many mental illnesses are increasingly understood. 

Nevertheless, despite the progress that has been made in identifying physiological correlates for mental disorders, there remain at least two differences between these correlates (neurological, genetic, hormonal etc.) and the recognised causes of both physiological and neurological diseases.

First, in the case of mental illnesses, the neurological, genetic, hormonal and other physiological correlates remain just that, i.e. mere correlates

Here, I am not merely reiterating the familiar caution that correlation does not imply causation, but also emphasizing that the correlations in question tend to be far from perfect, and do not form the basis for a diagnosis, even in principle. 

In other words, as a rule, few such identified correlates are present in every single person diagnosed with the condition in question. The correlation is established only at the aggregate statistical level. 

Moreover, those persons who present the symptoms of a mental illness but who do not share the physiological correlate that has been shown to be associated with this mental illness are not henceforth identified as not truly suffering from the mental illness in question. 

In other words, not only is diagnosis determined, as a matter of convenience and practicality, by reference to symptoms (as is also often true for many physical illnesses), but mental illnesses remain, in the last instance, defined by the symptoms they produce, not any underlying physiological cause. 

Any physiological correlates for the condition are ultimately incidental and have not caused physicians to alter their basic definition of the condition itself. 

Second, the identified correlates are, again as a general rule, multiple, complex and cumulative in their effects. In other words, there is not one single identified physiological correlate of a given mental illness, but rather multiple identified correlates, often each having small cumulative effects on the probability of a person presenting symptoms.

This second point might be taken as vindicating Szasz’s position that mental illnesses are not really illnesses. 

Thus, recent research on the genetic correlates of mental illnesses, as recently summarized by Robert Plomin in his book Blueprint: How DNA Makes Us Who We Are, has found that the genetic variants that cause psychiatric disorders are the exact same genetic variants which, when present in lesser magnitude, also cause normal, non-pathological variation in personality and temperament. 

This suggests that, at least at the genetic level (and thus presumably at the phenotypic level too), what we call mental illness is just an extreme presentation of what is normal variation in personality and behaviour. 

In other words, so-called mental illness simply represents the extreme tail-end of the normal bell curve distribution in personality attributes. 

This is most obviously true of the so-called personality disorders. Thus, a person extremely low in empathy, or the factor of personality referred to by psychometricians as agreeableness, might be diagnosed with anti-social personality disorder (or psychopathy). 

However, it is also true for other so-called mental disorders. For example, ADHD (attention deficit hyperactivity disorder) seems to be mere medical jargon for someone who is very impulsive, with a short attention span, and lacking in self-discipline (i.e. low in the factor of personality that psychometricians call conscientiousness) – all traits which vary on a spectrum across the whole population.

On the other hand, clinical depression, unlike personality, is a temporary condition from which most people recover. Nevertheless, it is so strongly predicted by the factor of personality known to psychometricians as neuroticism that psychologist Daniel Nettle writes: 

Neuroticism is not just a risk factor for depression. It is so closely associated with it that it is hard to see them as completely distinct” (Personality: p114). 

Yet calling someone ‘ill’ because they are at the extreme of a given facet of personality or temperament is not very helpful. It is roughly equivalent to calling a basketballer ‘ill’ because he is exceptionally tall, a jockey ‘ill’ because he is exceptionally small, or Albert Einsteinill’ because he was exceptionally intelligent

Mental illness or Malingering?

While Szasz has therefore correctly identified problems with the conventional disease model of mental health, the model which he proposes in its place is, in my view, even more problematic, and less scientific, than the disease model that he rightly rejects as misleading.

Most unhelpful is the central place given in his theory to the notion of malingering, i.e. the deliberate faking of symptoms by the patient. 

This analysis may be a useful way to understand the nineteenth century outbreak of so-called hysteria, to which Szasz devotes considerable attention, or indeed the modern diagnosis of Munchausen syndrome, which again involves complaining of imagined or exaggerated physical symptoms. 

It may also be a useful way to understand the recently developed diagnosis of chronic fatigue syndrome (CFS, formerly ME), which, like hysteria, involves the patient complaining of physical symptoms for which no physical cause has yet been identified. 

Interestingly from a psychological perspective, all three of these conditions are overwhelmingly diagnosed among women and girls rather than men and boys. 

However, malingering may also be a useful way to understand another psychiatric condition that was primarily reported by men, albeit for obvious historical reasons – namely, so-called ‘shell shock’ (now classed as PTSD) among soldiers during World War One.[8]

Here, unlike with hysteria and CFS, the patient’s motive and rationale for faking the symptoms in question (if this is indeed what they were doing) is altogether more rational and comprehensible – namely, to avoid the horrors of trench warfare (from which women were, of course, exempt). 

However, this model of ‘malingering’ is clearly much less readily applicable to sufferers of, say, schizophrenia

Here, far from malingering or faking illness, those afflicted will often vehemently protest that they are not ill and that there is nothing wrong with them. However, their delusions are often such that, by any ordinary criteria, they are undoubtedly, in the colloquial if not the strict medical sense, completely fucking bonkers. 

The model of malingering can, therefore, only be taken so far. 

Defining Mental Illness? 

The fundamental fallacy at the heart of psychiatry is, according to Szasz, the mistaking of moral problems for medical ones. Thus, he opines: 

Psychiatrists cannot expect to solve moral problems by medical methods” (Myth of Mental Illness: p24). 

Szasz has a point. Despite employing the language of science, there is undoubtedly a moral dimension to defining what constitutes mental illness. 

Whether a given cluster of associated behaviours represents just a cluster of associated behaviours or a mental illness is not determined on the basis of objective scientific criteria. 

Rather, most American psychiatrists simply regard as a mental illness whatever the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association classifies as a mental disorder. 

This manual is treated as gospel by psychiatrists, yet there are no systematic or agreed criteria for inclusion within this supposedly authoritative work. 

Popular cliché has it that mental illnesses are caused by a ‘chemical imbalance’ in the brain.  

Certainly, if we are materialists, we must accept that it is ultimately the chemical composition of the brain that causes behaviour, pathological or otherwise. 

But on what criteria are we to say that a certain chemical composition of the brain is an ‘imbalance’ and another is ‘balanced’, one behaviour ‘pathological’ and one ‘normal’? 

The criterion on which we make this judgement is, as I see it, primarily a moral one.[9]

More specifically, mental illnesses are defined as such, at least in part, because the behavioral symptoms that they produce tend to cause suffering or distress either to the person defined as suffering from the illness, or to those around them. 

Thus, a person diagnosed with depression is themselves the victim of suffering or distress resulting from the condition; a person diagnosed with psychopathy, on the other hand, is likely to cause psychological distress to those around them with whom they come into contact. 

This is a moral, not a scientific, criterion, depending as it does on the notion of suffering or harm. 

Indeed, it is not only a moral question, but it is also one that has, in recent years, been heavily politicized. 

Thus, gay rights activists actively and aggressively campaigned for many years to have homosexuality withdrawn from the DSM and reclassified as non-pathological, and, in 1974, they were successful.[10]

This campaign may have had laudable motives, namely to reduce the stigma associated with homosexuality and prejudice against homosexuals. Yet it clearly had nothing to do with science and everything to do with politics and morality. 

Indeed, homosexuality satisfies many criteria for illness.[11]

First, it is, despite some ingenious and some not so ingenious attempts to show otherwise, obviously biologically maladaptive. 

Whereas the politically correct view is that homosexuality is an entirely natural, normal and non-pathological variation of normal sexuality, from a Darwinian perspective this view is obviously untenable. 

Homosexual sex cannot produce offspring. Homosexuality therefore involves a maladaptive misdirection of mating effort, which would surely be strongly selected against by natural selection.[12]

Homosexuality is therefore best viewed as a malfunctioning of normal sexuality, just as cancer is a kind of malfunctioning of cell growth and division. In this sense, then, homosexuality is indeed best viewed as something akin to an illness. 

Second, homosexuality shows some degree of comorbidity with other forms of mental illness, such as depression.[13]

Finally, homosexuality is associated with other undesirable life-outcomes, such as reduced longevity and, at least for male homosexuals, a greater lifetime susceptibility to various STDs.[14]

Yet, just as homosexuals successfully campaigned for the removal of homosexuality from the DSM, so ‘trans rights’ campaigners are currently embarking on a similar campaign in respect of gender dysphoria. 

The politically correct consensus today holds that an adult or child who claims to identify as the opposite ‘gender’ to their biological sex should be encouraged and supported in their ‘transition’, and provided with hormone therapy, puberty blockers and sex reassignment surgery, as requested. 

This is roughly equivalent to responding to a mentally ill person who believes he is Napoleon, not by telling him that he is not Napoleon, but by providing him with legions of troops with which to invade Prussia. 

Moving beyond the sphere of sexuality, some self-styled ‘neurodiversity’ activists have sought to reclassify autism as a normal variation of mental functioning, a claim that may appear superficially plausible in respect of certain forms of so-called ‘high functioning autism’, but is clearly untenable in respect of ‘low functioning autism’.[15]

Yet, on the other hand, there is oddly no similar, high-profile campaign to reclassify, say, anti-social personality disorder (ASPD) or psychopathy as a normal, non-pathological variant of human psychology. 

This is despite the fact that psychopathy may indeed be biologically adaptive, at least under some conditions (Mealey 1995). 

In other words, no one proposes treating psychopathy as a normal or natural variation in personality, even though it is likely just that. 

The reason that there is no campaign to remove psychopathy from the DSM is, of course, because, unlike homosexuals, transsexuals and autistic people, psychopaths are hugely disproportionately likely to cause harm to innocent non-consenting third-parties. 

This is indeed a good reason to treat psychopathy and anti-social personality disorder as a problem for society at large. However, this is a moral, not a scientific, reason for regarding it as problematic. 

To return to the question of disorders of sexuality, another useful point of comparison is provided by paedophilia. 

From a purely biological perspective, paedophilia is analogous to homosexuality. Both are biologically maladaptive because they involve sexual attraction to a partner with whom reproduction is, for biological reasons, impossible.[16]

Yet, unlike in the case of homosexuality, there has been no mainstream political push for paedophilia to be reclassified as non-pathological or removed from the Diagnostic and Statistical Manual of Mental Disorders of the APA.[17]

The reason for this is again, of course, obvious and entirely reasonable, yet it equally obviously has nothing to do with science and everything to do with morality – namely, whereas homosexual behaviour as between consenting adults is largely harmless, the same cannot be said for child sexual abuse.[18]

Perhaps an even better analogy would be between homosexuality and, say, necrophilia. 

Necrophilic sexual activity, like homosexual sexual activity, but quite unlike paedophilic sexual activity, represents something of a victimless crime. A corpse, by virtue of being dead, cannot suffer from being violated.[19]

Yet no one would argue that necrophilia is a healthy and natural variation on normal human sexuality. 

Of course, although numbers are hard to come by due to the attendant stigma, necrophilia is presumably much less common, and hence much less ‘normal’, than is homosexuality. However, if this is a legitimate reason for regarding homosexuality as more ‘normal’ than is necrophilia, then it is also a legitimate reason for regarding homosexuality itself as ‘abnormal’, because homosexuality is, of course, much less common than heterosexuality.

Necrophile rights is, therefore, the reductio ad absurdum of gay rights.[20]

Medicine or Morality? 

The encroachment of medicine upon morality continues apace, as part of what Szasz calls the medicalization of everyday life. Thus, there is seemingly no moral failing or character defect that is not capable of being redefined as a mental disorder. 

Selfish people are now psychopaths; people lacking in willpower and with short attention spans now have ADHD. 

But if these are simply variations of personality, does it make much sense to call them diseases? 

Yet the distinction between ‘mad’ and ‘bad’ also has practical application in the operation of the criminal justice system. 

The assumption is that mentally ill offenders should not be punished for their wrongdoing, but rather treated for their illness, because they are not responsible for their actions. 

But, if we accept a materialist conception of mind, then all behaviour must have a basis in the brain. On what basis, then, do we determine that one person is mentally ill while another is in control of his faculties?

As Robert Wright observes: 

“[Since] in both British and American courts, women have used premenstrual syndrome to partly insulate themselves from criminal responsibility… can a ‘high-testosterone’ defense of male murderers be far behind?… If defense lawyers get their way and we persist in removing biochemically mediated actions from the realm of free will, then within decades [as science progresses] the realm will be infinitesimal” (The Moral Animal: p352-3).[21]

Yet a man claiming that, say, high testosterone caused his criminal behaviour is unlikely to be let off on this account, because, if high testosterone does indeed cause crime, then we have good reason to lock up high testosterone men precisely because they are likely to commit crimes.[22]

Szasz wants to resurrect the concept of free will and hold everyone, even those with mental illnesses, responsible for their actions. 

My view is the opposite: No one has free will. All behaviour, normal or pathological, is determined by the physical composition of the brain, which is, in turn, determined by some combination of heredity and environment. 

Indeed, determinism is not so much a finding of science as its basic underlying assumption and premise.[23]

In short, science rests on the assumption that all events have causes and that, by understanding the causes, we can predict behaviour. If this were not true, then there would be no point in doing science, and science would not be able to make any successful predictions. 

In short, criminal punishment must be based on consequentialist utilitarian considerations such as deterrence, incapacitation and rehabilitation rather than such unscientific moralistic notions as free will, just deserts and blame.[24]

A Moral Component to All Medicine? 

Szasz is right, then, to claim that there is a moral dimension to psychiatric diagnoses. 

This is why psychopathy is still regarded as a mental disorder even though it is likely an adaptive behavioural strategy and life history in certain circumstances (Mealey 1995). 

It is also why homosexuality is no longer regarded as a mental illness, despite its obviously biologically maladaptive consequences, yet there is no similar campaign to remove paedophilia from the DSM. 

Yet what Szasz fails to recognise is that there is also a moral element to the identification and diagnosis of physical illnesses too. 

Thus, physical illnesses, like psychiatric illnesses, are called illnesses, at least in part, because they cause pain, suffering and impairment in normal functioning to the person diagnosed as suffering from the illness. 

If, on the other hand, an infection did not produce any unpleasant symptoms, then the patient would surely never bother to seek medical treatment and thus the infection would probably never come to the attention of the medical profession in the first place. 

If it did come to their attention, would they still call it a disease? Would they expend time and resources attempting to ‘cure’ it? Hopefully not, as to do so would be a waste of time and resources. 

Extending this thought experiment, what if the infection in question not only caused no negative symptoms, but actually had positive effects on the person infected? 

What if the infection in question caused people to be fitter, smarter, happier, kinder and more successful at their jobs? 

Would doctors still call the infection a ‘disease’, and the microscopic organism underlying it a ‘germ’? 

Actually, this hypothetical thought experiment may not be entirely hypothetical. 

After all, there are indeed surely many microorganisms that infect humans which have few or negligible effects, positive or negative, and with which neither patients nor doctors are especially concerned. 

On the other hand, some infections may be positively beneficial to their hosts. 

Take, for example, gastrointestinal microbiota (also known as gut microbiota). 

These are microorganisms that inhabit our digestive tracts, and those of other organisms, and are thought to have a beneficial effect on the health and functioning of the host organism. They have even been marketed as ‘probiotics’ and ‘good bacteria’ in the advertising campaigns for certain yoghurt-like drinks. 

Another, less obvious, example is perhaps provided by mitochondrial DNA. 

In our ancient evolutionary history, this began as the DNA of a separate organism, a bacterium, that infected host cells, but ultimately formed a symbiotic and mutualistic relationship with its hosts, and it now plays a key role in the functioning of those organisms whose distant ancestors it first infected. 

In short, all medicine has a moral dimension.  

This is because medicine is an applied, not a pure, science. 

In other words, medicine aims not merely to understand disease in the abstract, but to treat it. 

We treat diseases to minimize human suffering, and the minimization of human suffering is ultimately a moral (or perhaps economic, since doctors are paid, and provide a service to their patients), rather than a purely scientific, endeavour. 

Endnotes

[1] Although this post is a review of Thomas Szasz’s Psychiatry: The Science of Lies, readers may note that many of the quotations from Szasz in the review are actually taken from his earlier, more famous book, The Myth of Mental Illness, published several decades previously. By way of explanation, while this essay is a review of Szasz’s Psychiatry: The Science of Lies, I listened to an audiobook version of this book, and do not have access to a print copy. It was therefore difficult to source quotations from this book. In contrast, I own a copy of The Myth of Mental Illness, but have yet to read it in full. I thought it more useful to read a more recent statement of Szasz’s views, so as to find out how he has dealt with recent findings in biological psychiatry and behavioural genetics. Unfortunately, as I discuss above, it seems that Szasz has reacted to recent findings in biological psychiatry and behavioural genetics hardly at all, and includes few if any references to such developments in his more recent book.

[2] Thus, proponents of Darwinian medicine contend that many infections produce symptoms such as coughing, sneezing and diarrhea precisely because these symptoms facilitate the spread of the disease through contact with the bodily fluids expelled, hence promoting the pathogens’ own Darwinian fitness or reproductive success.

[3] For example, the underlying physical cause of chronic fatigue syndrome (CFS) is not fully understood. On the other hand, the underlying cause of acquired immunodeficiency syndrome (AIDS) is now understood, namely HIV infection, but, presumably because it involves increased susceptibility to many different infections, it is still referred to as a syndrome rather than a disease in and of itself.

[4] Indeed, according to Szasz himself, in an autobiographical interlude in ‘Psychiatry: The Science of Lies’, he had arrived at his opinion regarding the scientific status of psychiatry even earlier, when first making the decision to train to become a psychiatrist. Indeed, he claims to have made the decision to study psychiatry and qualify as a psychiatrist precisely in order to attack the field from within, with the authority which this professional qualification would confer upon him. This, it hardly needs to be said, is a very odd reason for a career choice.

[5] Attacking modern psychiatry by a critique of Freud is a bit like attacking neuroscience by critiquing nineteenth century phrenology. It involves constructing a straw man version of modern psychiatry. I am reminded in particular of Arthur Jensen’s review of infamous charlatan Stephen Jay Gould’s discredited The Mismeasure of Man, which Jensen titled The debunking of scientific fossils and straw persons, where he described Gould’s method of trying to discredit the modern science of IQ testing and intelligence research by citing the errors of nineteenth-century craniologists as roughly akin to “trying to condemn the modern automobile by merely pointing out the faults of the Model T”.

[6] In The Myth of Mental Illness, Szasz, writes: 

“There remains a wide circle of physicians and allied scientists whose basic position concerning the problem of mental illness is essentially that expressed in Carl Wernicke’s famous dictum: ‘Mental diseases are brain diseases’. Because, in one sense, this is true of such conditions as paresis and the psychoses associated with systemic intoxications, it is argued that it is also true for all other things called mental diseases. It follows that it is only a matter of time until the correct physicochemical, including genetic, bases, or ‘causes’, of these disorders will be discovered. It is conceivable, of course, that significant physicochemical disturbances will be found in some ‘mental patients’ and in some ‘conditions’ now labeled ‘mental illnesses’. But this does not mean that all so-called mental diseases have biological ‘causes’, for the simple reason that it has become customary to use the term ‘mental illness’ to stigmatize, and thus control, those persons whose behavior offends society—or the psychiatrist making the ‘diagnosis’” (The Myth of Mental Illness: p103). 

Yet, if we accept a materialist conception of mind, then all behaviours, including those diagnostic of mental illness, must have a cause in the brain, though it is true that the same behaviours may result from quite different neuroanatomical causes.
It is certainly true that the concept of mental illness has been used to “stigmatize, and thus control, those persons whose behavior offends society”. So-called drapetomania provides an obvious example, albeit one that was never widely recognised by physicians, at least outside the American South. Another example would be the diagnosis of sluggish schizophrenia used to institutionalize political dissidents in the Soviet Union. Likewise, psychopathy (aka sociopathy or anti-social personality disorder) may, as I argue later in this post, have been classified as a mental disorder primarily because the behaviour of people diagnosed with this condition does indeed “offend society” and arguably demand the “control”, and sometimes detention, of such people.
However, this does not mean that the behaviours complained of (e.g. political dissidence, or anti-social behaviours) will not have neural or other physiological correlates. On the contrary, they undoubtedly do, and psychologists have also investigated the neural and other physiological correlates of all behaviours, not just those labelled as pathological and as ‘mental illnesses’.
However, Szasz does not quite go so far as to deny that behaviours have physical causes. On the contrary, in The Myth of Mental Illness, hedging his bets against future scientific advances, Szasz acknowledges:

“I do not contend that human relations, or mental events, take place in a neurophysiological vacuum. It is more than likely that if a person, say an Englishman, decides to study French, certain chemical (or other) changes will occur in his brain as he learns the language. Nevertheless, I think it would be a mistake to infer from this assumption that the most significant or useful statements about this learning process must be expressed in the language of physics. This, however, is exactly what the organicist claims” (The Myth of Mental Illness: p102-3). 

Here, Szasz makes a good point – but only up to a point. Whether we are what Szasz calls ‘organicists’ or not, I’m sure we can all agree that, for most purposes, it is not useful to explain the decision to learn French in terms of neurophysiology. To do so would be an example of what philosopher Daniel Dennett, in Darwin’s Dangerous Idea, calls ‘greedy reductionism’, which he distinguished from ‘good reductionism’, which is central to science.
However, it is not clear that the same is true of what we call mental illnesses. Often it may indeed be useful to understand mental illnesses in terms of their underlying physiological causes, including for therapeutic reasons, since understanding the physiological basis for behaviour that we deem undesirable may provide a means of changing these behaviours by altering the physical composition of the brain. For example, if the neurotransmitter serotonin is involved in regulating mood, then manipulating levels of serotonin in the brain, or its reabsorption, may be a way of treating depression, anxiety and other mood disorders. Thus, SSRIs and SNRIs, which are thought to do just this, have indeed been found to be effective in treating these conditions.
However, for other purposes, it may be useful to look at a different level of causation. For example, as I discuss in a later endnote, although it may be scientifically a nonsense, it may nevertheless be useful to inculcate a belief in free will among some psychiatric patients, since it may encourage them to overcome their problems rather than adopting the fatalistic view that they are ill and there is hence nothing they can do to improve their predicament. Szasz sometimes seems to be arguing for something along these lines.

[7] In The Myth of Mental Illness, as quoted in the preceding endnote, Szasz also gives as examples of behavioural conditions with well-established physiological causes “paresis and the psychoses associated with systemic intoxications” (The Myth of Mental Illness: p103).

[8] I hasten to emphasize in this context, lest I be misunderstood, that I am not saying that Szasz’s model of ‘malingering’ is indeed the appropriate way to understand conditions such as hysteria, Munchausen syndrome, chronic fatigue syndrome or shell shock – only that a reasonable case can be made to this effect. Personally, I do not regard myself as having sufficient expertise on the topic to be willing to venture an opinion either way.

[9] Of course, we could determine whether a certain composition and structure of the brain is ‘balanced’ or ‘imbalanced’ on non-moralistic, Darwinian criteria. In other words, if a certain composition/structure and the behaviour it produces is adaptive (i.e. contributes to the reproductive success or fitness of the organism) then we could call it ‘balanced’; if, on the other hand, it produces maladaptive behaviour we could call it ‘imbalanced’. However, this would produce a quite different inventory and classification of mental illnesses than that provided by the DSM of the APA and other similar publications, since, as we will see, homosexuality, being obviously biologically maladaptive, would presumably be classified as an ‘imbalance’ and hence a mental illness, whereas psychopathy, since it may well, under certain conditions, be adaptive, would be classed as non-pathological and hence ‘balanced’. This analysis, however, has little to do with mental illness as the concept is currently conceived.

[10] Oddly, Szasz himself is sometimes lauded by some politically correct-types as being among the first psychiatrists to deny that homosexuality was a mental illness. Yet, since he also denied that schizophrenia was a mental illness, and indeed rejected the whole concept of ‘mental illness’ as it is currently conceived, this is not necessarily as ‘progressive’ and ‘enlightened’ a view as it is sometimes credited as having been.

[11] Here, a few caveats are in order. Describing homosexuality as a mental illness no more indicates hatred towards homosexuals than describing schizophrenia as a mental illness indicates hatred towards people suffering from schizophrenia, or describing cancer as an illness indicates hatred towards people afflicted with cancer. In fact, regarding a person as suffering from an illness is generally more likely to elicit sympathy for the person so described than it is hatred.
Of course, being diagnosed with a disease may involve some stigma. But this is not the same as hatred.
Moreover, as should be clear from my conclusion, I am not, in fact, arguing that homosexuality should indeed be classified as a mental illness. Rather, I am simply pointing out that it is difficult to frame a useful definition of what constitutes a ‘mental disorder’ unless that definition includes moral criteria, which are necessarily extra-scientific. However, in the final section of this piece, I argue that there is indeed a moral component to all medicine, psychiatry included.
Of course, as I also discuss above, there are indeed some moral reasons for regarding homosexuality as undesirable, for example its association with reduced longevity, which is generally regarded as an undesirable outcome. However, whether homosexuality should indeed be classed as a ‘mental disorder’ strikes me as debatable and also dependent on the exact definition of ‘mental disorder’ adopted.

[12] If homosexuality is therefore maladaptive, this, of course, raises the question as to why homosexuality has not indeed been eliminated by natural selection. The first point to make here is that homosexuality is in fact quite rare. Although Kinsey famously originated the since-popularized claim that as many as 10% of the population are homosexual, reputable estimates using representative samples generally suggest less than 5% of the population identifies as exclusively or preferentially homosexual (though a larger proportion of people may have had homosexual experiences at some time, and the ‘closet factor’ makes it possible to argue that, even in an age of unprecedented tolerance and indeed celebration of homosexuality, and even in anonymous surveys, this may represent an underestimate due to underreporting).
Admittedly, there has recently been a massive increase in the numbers of teenage girls identifying as non-heterosexual, with numbers among this age group now slightly exceeding 10%. However, I suspect that this is more a matter of fashion than of sexuality. Thus, it is notable that the largest increase has been for identification as ‘bisexual’, which provides a convenient cover by which teenage girls can identify with the so-called ‘LGBT+ community’ while still pursuing normal, healthy relationships with opposite-sex boys or men. The vast majority of these girls will, I suspect, grow up to have sexual and romantic relationships primarily with members of the opposite sex.
Yet even these low figures are perhaps higher than one might expect, given that homosexuality would be strongly selected against by evolution. (However, it is important to remember that, when homosexuals were persecuted and hence mostly remained in the ‘closet’, homosexuality would have been less selected against, precisely because so many gay men and women would have married members of the opposite sex and reproduced if only to evade accusations of homosexuality. With greater tolerance, however, they no longer have any need to do so. The liberation of homosexuals may therefore, paradoxically, lead to their gradual disappearance through selection.)
A second point to emphasize is that, contrary to popular perception, homosexuality is not especially heritable. Indeed, it is rather less heritable than other behavioural traits whose heritability it is far less politically correct to speculate about (e.g. criminality, intelligence).
If homosexuality is primarily caused by environmental factors, not genetics, then it would be more difficult for natural selection to weed it out. However, given that exclusive or preferential homosexuality would be strongly selected against by natural selection, humans should have evolved to be resistant to developing exclusive or preferential homosexuality under all environmental conditions that were encountered during evolutionary history. It is possible, however, that environmental novelties atypical of the environments in which our psychological adaptations evolved are responsible for causing homosexuality.
For what it’s worth, my own favourite theory (although not necessarily the best supported theory) for the evolution of male homosexuality proposes that genes located on the X chromosome predispose a person to be sexually attracted to males. This attraction is adaptive for females, but maladaptive for males. However, since females have two X chromosomes and males only one, any X chromosome genes will find themselves in females twice as often as they find themselves in males. Therefore, any increase in fitness for females bearing these X chromosome genes only has to be half as great as the reproductive cost to males for the genes in question to be positively selected for.
This is sometimes called the ‘balancing selection theory of male homosexuality’. However, perhaps more descriptive and memorable is Satoshi Kanazawa’s coinage, ‘the horny sister hypothesis’.
This theory also has some support, in that there is some evidence the female relatives of male homosexuals have a greater number of offspring than average and also that gay men report having more gay uncles on their mother’s than their father’s side, consistent with an X chromosome-linked trait (Hamer et al 1993; Camperio-Ciani et al 2004). Some genes on the X chromosome have also been linked to homosexuality (Hamer et al 1993; Hamer 1999).
On the other hand, other studies find no support for the hypothesis. For example, Bailey et al (1999) found that rates of reported homosexuality were no higher among maternal than among paternal male relatives, as did McKnight & Malcolm (1996). At any rate, as explained by Wilson and Rahman in their excellent book Born Gay: The Psychobiology of Sexual Orientation:

“Increased rates of gay maternal relatives might also appear because of decreased rates of reproduction among gay men. A gay gene is unlikely to be inherited from a gay father because a gay man is unlikely to have children” (Born Gay: p51; see also Risch et al 1993).

[13] Gay rights activists assert that the only reason that homosexuality is associated with other forms of mental illness is because of the stigma to which homosexuals are subject on account of their sexuality. This has sometimes been termed the ‘social stress hypothesis’, ‘social stress model’ or ‘minority stress model’. There is indeed statistical support for the theory that the social stigma is indeed associated with higher rates of depression and other mental illnesses.
It is also notable that, while homosexuality is indeed consistently associated with higher levels of depression and suicide, conditions that can obviously be viewed as a direct response to social stigma, I am not aware of any evidence suggesting higher rates of, say, schizophrenia among homosexuals, which would less obviously, or at least less directly, result from social stress. However, I tend to agree with the conclusions of Mayer and McHugh, in their excellent review of the literature on this subject, that, while social stress may indeed explain some of the increased rate of mental illness among homosexuals, it is unlikely to account for the totality of it (Mayer & McHugh 2016).

[14] Yet, in describing the life outcomes associated with homosexuality as undesirable, I am, of course, making an extra-scientific value judgement. Of course, the value judgement in question – namely that dying earlier and being disproportionately likely to contract STDs is a bad thing – is not especially controversial. However, it still illustrates the extent to which, as I discuss later in this post, definitions of mental illnesses, and indeed physical illnesses, always include a moral dimension – i.e. diseases are defined, in part, by the fact that they cause suffering, either to the person afflicted, or, in the case of some mental illnesses, to the people in contact with them.

[15] That autism is indeed maladaptive and pathological is also suggested by the well-established correlation between paternal age and autism in offspring, since this has been interpreted as reflecting the build-up of deleterious mutations in the sperm of older males.

[16] Indeed, from a purely biological perspective, homosexuality is arguably even more biologically maladaptive than is paedophilia, since even very young children can, in some exceptional cases, become pregnant and even successfully birth offspring, yet same-sex partners are obviously completely incapable of producing offspring with one another.

[17] Indeed, far from there being any political pressure to remove paedophilia from the DSM of the APA, as occurred with homosexuality, there is instead increasing pressure to add hebephilia (i.e. attraction to pubescent and early-post-pubescent adolescents) to the DSM. If successful, this would probably lead to pressure to also add ‘ephebophilia’ (i.e. the biologically adaptive and normal male attraction to mid- to late-adolescents) to the DSM, and thereby effectively pathologize and medicalize, and further stigmatize, normal male sexuality.

[18] Of course, homosexual sex does have some dangers, such as STDs. However, the same is also true of heterosexual sex, although, for gay male sex, the risks are vastly elevated. Yet other perceived dangers result only from heterosexual sex (e.g. unwanted pregnancies, marriage). Meanwhile, the other negative life outcomes associated with homosexuality (e.g. elevated risk of depression and suicide) probably result from a homosexual orientation rather than from gay sex as such. Thus, a celibate gay man is, I suspect, just as likely, if not more likely, to suffer depression as is a highly promiscuous gay man.
Yet, while gay sex may be mostly harmless, the same cannot, of course, be said for child sexual abuse. It may indeed be true that the long-term psychological effects of child sexual abuse are exaggerated. This was, of course, the infamous conclusion of the Rind et al meta-analysis, which resulted in much moral panic in the late-1990s (Rind et al 1998). This is especially likely to be the case when the sexual activity in question is consensual and involves post-pubertal, sexually mature (but still legally underage) teenagers. However, in such cases the sexual activity in question should not really be defined as ‘child sexual abuse’ in the first place, since it neither involves immature children in the biological sense, nor is it necessarily abusive. Yet, it must be emphasized, even if child sexual abuse does not cause long-term psychological harm, it may still cause immediate harm, namely the distress experienced by the victim at the time of the abuse.

[19] Of course, one might argue that the relatives of the deceased may suffer as a result of the idea of their dead relatives’ bodies being violated by necrophiles. However, much the same is also true of homosexuality. So-called ‘homophobes’, for example, may dislike the idea of their adult homosexual sons having consensual homosexual sex. Indeed, they may even dislike the idea of unrelated adult strangers being allowed to have consensual homosexual sex. This was indeed presumably the reason why homosexuality was criminalized and prohibited in so many cultures across history in the first place, i.e. because other people were disgusted by the thought of it. However, we no longer regard this sort of puritanical disapproval of other people’s private lives as a sufficient reason to justify the criminalization of homosexual behaviour. Why then should it be a reason for criminalizing necrophilia?

[20] Other similar thought experiments involve the prohibitions on other sexual behaviours such as zoophilia and incest. In both instances, however, the morality is more complex: in the case of zoophilia, on account of whether the animal participant suffers harm or can consent; and, in the case of incest, because of eugenic considerations, namely the higher rate of expression of deleterious mutations among the offspring of incestuous unions.

[21] Indeed, the courts, in both Britain and America, have been all too willing to invent bogus pseudo-psychiatric diagnoses in order to excuse women, in particular, for culpability in their crimes, especially murder. For example, in Britain, the Infanticide Acts of 1922 and 1938 provide a defence against murder for women who kill their helpless new-born infants where “at the time of the act… the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child”. In terms of biology, physiology and psychology, this is, of course, a nonsense, and, of course, no equivalent defence is available for fathers, though, in practice, the treatment of mothers guilty of infanticide is more lenient still (Wilczynski and Morris 1993).
Similarly, in both Britain and America, women guilty of killing their husbands, often while the latter was asleep or otherwise similarly incapacitated, have been able to avoid a murder conviction by claiming to have been suffering from so-called ‘battered woman syndrome’. There is, of course, no equivalent defence for men, despite the consistent finding that men are somewhat more likely to be victims of violence at the hands of their female intimate partners than women are at the hands of their male intimate partners (Fiebert 2014). This may partly explain why men who kill their wives receive, on average, sentences three times as long as those of women who kill their husbands (Langan & Dawson 1995).

[22] Of course, another possibility might be some form of hormone therapy to reduce the offender’s testosterone. Also, it must be acknowledged that this discussion is hypothetical. Whether testosterone is indeed correlated with criminal or violent behaviour is actually the subject of some dispute. Thus, Allan Mazur, a leading researcher in this area, argues that testosterone is not associated with aggression or violence as such, but rather only with dominance behaviours, which can also be manifested in non-violent ways. For example, a high-powered business tycoon is likely to be high in social dominance behaviours, but relatively unlikely to commit violent crimes. On the other hand, a prisoner, being of low status, may be able to exercise dominance only through violence. I am therefore giving the example of high testosterone only as a simplified hypothetical thought experiment.

[23] Of course, one finding of science, namely quantum indeterminism, complicates this assumption. Ironically, while determinism is the underlying premise of all scientific enquiry, nevertheless one finding of such enquiry is that, at the most fundamental level, determinism does not hold.

[24] Nevertheless, I am persuaded that there may be some value in the concept of free will, after all. Although it is a nonsense, it may, like some forms of religious belief, nevertheless be a useful nonsense, at least in some circumstances.
Thus, if a person is told that there is no free will, and that their behaviours are inevitable, this may encourage a certain fatalism and the belief that people cannot change their behaviours for the better. In fact, this is a fallacy. Actually, determinism does not suggest that people cannot change their behaviours. It merely concludes that whether people do indeed change their behaviours is itself determined. However, this philosophical distinction may be beyond many people’s understanding.
Thus, if people are led to believe that they cannot alter their own behaviour, then this may become something of a self-fulfilling prophecy, and thereby prevent self-improvement.
Therefore, just as religious beliefs may be untrue, but nevertheless serve a useful function in giving people a reason to live and to behave prosocially and for the benefit of society as a whole, so it may be beneficial to inculcate and encourage a belief in free will in order to encourage self-improvement, including among the mentally ill.

References

Bailey et al (1999) A Family History Study of Male Sexual Orientation Using Three Independent Samples, Behavior Genetics 29(2): 79-86.
Camperio-Ciani (2004) Evidence for maternally inherited factors favouring male homosexuality and promoting female fecundity, Proceedings of the Royal Society B: Biological Sciences 271(1554): 2217-2221.
Fiebert (2014) References Examining Assaults by Women on Their Spouses or Male Partners: An Updated Annotated Bibliography, Sexuality & Culture 18(2): 405-467.
Hamer et al (1993) A linkage between DNA markers on the X chromosome and male sexual orientation, Science 261(5119): 321-327.
Hamer (1999) Genetics and Male Sexual Orientation, Science 285(5429): 803.
Langan & Dawson (1995) Spouse Murder Defendants in Large Urban Counties, U.S. Department of Justice Office of Justice Programs, Bureau of Justice Statistics: Executive Summary (NCJ-156831), September 1995.
Mayer & McHugh (2016) Sexuality and Gender: Findings from the Biological, Psychological, and Social Sciences, New Atlantis 50: Fall 2016.
McKnight & Malcolm (2000) Is male homosexuality maternally linked? Evolution and Gender 2(3): 229-252.
Mealey (1995) The sociobiology of sociopathy: An integrated evolutionary model, Behavioral and Brain Sciences 18(3): 523-599.
Rind et al (1998) A Meta-Analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples, Psychological Bulletin 124(1): 22-53.
Risch et al (1993) Male Sexual Orientation and Genetic Evidence, Science 262(5142): 2063-2065.
Szasz (1960) The Myth of Mental Illness, American Psychologist 15: 113-118.
Wilczynski & Morris (1993) Parents Who Kill Their Children, Criminal Law Review, 31-36.

The Biology of Beauty

Nancy Etcoff, Survival of the Prettiest: The Science of Beauty (New York: Anchor Books 2000) 

Beauty is in the eye of the beholder.  

This much is true by definition. After all, the Oxford English Dictionary defines beauty as:

A combination of qualities, such as shape, colour, or form, that pleases the aesthetic senses, especially the sight”.


Thus, beauty is defined as that which is pleasing to an external observer. It therefore presupposes the existence of an external observer, separate from the person or thing credited with beauty, from whose perspective that beauty is perceived.[1]

Moreover, perceptions of beauty do indeed differ.  

To some extent, preferences differ between individuals, and between different races and cultures. More obviously, and to a far greater extent, they also differ between species.

Thus, a male chimpanzee would presumably consider a female chimpanzee as more beautiful than a woman. The average human male, however, would likely disagree – though it might depend on the woman. 

As William James wrote in 1890: 

To the lion it is the lioness which is made to be loved; to the bear, the she-bear. To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her” (Principles of Psychology (vol 2): p387). 

Beauty is therefore not an intrinsic property of the person or object that is described as beautiful, but rather a quality attributed to that person or object by a third-party in accordance with their own subjective tastes. 

However, if beauty is indeed a subjective assessment, that does not mean it is an entirely arbitrary one.

On the contrary, if beauty is indeed in the ‘eye of the beholder’, then it must be remembered that the ‘eye of the beholder’—and, more importantly, the brain to which that eye is attached—has been shaped by a process of both natural and sexual selection.

In other words, we have evolved to find some things beautiful, and others ugly, because doing so enhanced the reproductive success of our ancestors. 

Thus, just as we have evolved to find the sight of excrement, blood and disease disgusting, because each were potential sources of infection, and the sight of snakes, lions and spiders fear-inducing, because each likewise represented a potential threat to our survival when encountered in the ancestral environment in which we evolved, so we have evolved to find the sight of certain things pleasing on the eye. 

Of course, not only people can be beautiful. Landscapes, skylines, works of art, flowers and birds can all be described as ‘beautiful’. 

Just as we have evolved to find individuals of the opposite sex attractive for reasons of reproduction, so these other aspects of aesthetic preference may also have been shaped by natural selection. 

Thus, some research has suggested that our perception of certain landscapes as beautiful may reflect psychological adaptations that evolved in the context of habitat selection (Orians & Heerwagen 1992).  

However, Nancy Etcoff does not discuss such research. Instead, in ‘Survival of the Prettiest’, her focus is almost exclusively on what we might term ‘sexual beauty’. 

Yet, if beauty is indeed ‘in the eye of the beholder’, then sexiness is surely located in a different part of the male anatomy, but equally subjective in nature.

Indeed, as I shall discuss below, even in the context of mate preferences, ‘sexiness’ and ‘beauty’ are hardly synonyms. As an illustration, Etcoff herself quotes that infamous but occasionally insightful pseudo-scientist and all-round charlatan, Sigmund Freud, as observing:

The genitals themselves, the sight of which is always exciting, are nevertheless hardly ever judged to be beautiful; the quality of beauty seems, instead, to attach to certain secondary sexual characters” (p19: quoted from Civilization and its Discontents). 

Empirical Research 

Of the many books that have been written about the evolutionary psychology of sexual attraction (and I say this as someone who has read, at one time or another, a good number of them), a common complaint is that they are full of untested, or even untestable, speculation – i.e. what that other infamous scientific charlatan Stephen Jay Gould famously referred to as just so stories.

This is not a criticism that could ever be levelled at Nancy Etcoff’s ‘Survival of the Prettiest’. On the contrary, as befits Etcoff’s background as a working scientist (not a mere journalist or popularizer), it is, from start to finish, full of data from published studies, demonstrating, among other things, the correlates of physical attractiveness, as well as the real-world payoffs associated with physical attractiveness (what is sometimes popularly referred to as ‘lookism’).

Indeed, in contrast to other scientific works dealing with a similar subject-matter, one of my main criticisms of this otherwise excellent work would be that, while rich in data, it is actually somewhat deficient in theory. 

Youthfulness, Fertility, Reproductive Value and Attractiveness 

A good example of this deficiency in theory is provided by Etcoff’s discussion of the relationship between age and attractiveness. Thus, one of the main and recurrent themes of ‘Survival of the Prettiest’ is that, among women, sexual attractiveness is consistently associated with indicators of youth. Thus, she writes: 

Physical beauty is like athletic skill: it peaks young. Extreme beauty is rare and almost always found, if at all, in people before they reach the age of thirty-five” (p63). 

Yet Etcoff addresses only briefly the question of why it is that youthful women or girls are perceived as more attractive – or, to put the matter more accurately, why it is that males are sexually and romantically attracted to females of youthful appearance. 

Etcoff’s answer is: fertility.

Female fertility rapidly declines with age, before ceasing altogether with menopause.

There is, therefore, in Darwinian terms, no benefit in a male being sexually attracted to an older, post-menopausal female, since any mating effort expended would be wasted, as any resulting sexual union could not produce offspring. 

As for the menopause itself, this, Etcoff speculates, citing scientific polymath, popularizer and part-time sociobiologist Jared Diamond, evolved because human offspring enjoy a long period of helpless dependence on their mother, without whom they cannot survive. 

Therefore, after a certain age, it pays women to focus on caring for existing offspring, or even grandchildren, rather than producing new offspring whom, given their own mortality, they will likely not be around long enough to raise to maturity (p73).[2]

This theory has sometimes been termed the grandmother hypothesis.

However, the decline in female fertility with age is perhaps not sufficient to explain the male preference for youth. 

After all, women’s fertility is said to peak in their early- to mid-twenties.[3]

However, men’s (and boys’) sexual interest seems to peak in respect of females somewhat younger, namely in their late-teens (Kenrick & Keefe 1992).

To explain this, Douglas Kenrick and Richard Keefe propose, following a suggestion of Donald Symons, that this is because girls at this age, while less fertile, have higher reproductive value, a concept drawn from ecology, population genetics and demography, which refers to an individual’s expected future reproductive output given their current age (Kenrick & Keefe 1992). 

Reproductive value in human females (and in males too) peaks just after puberty, when a girl first becomes capable of bearing offspring. 

Before then, there is always the risk she will die before reaching sexual maturity; after, her reproductive value declines with each passing year as she approaches menopause. 

Thus, Kenrick and Keefe, like Symons before them, argue that, since most human reproduction occurs within long-term pair-bonds, it is to the evolutionary advantage of males to form long-term pair-bonds with females of maximal reproductive value (i.e. mid to late teens), so that, by so doing, they can monopolize the entirety of that woman’s reproductive output over the coming years. 

Yet the closest Etcoff gets to discussing this is a single sentence where she writes: 

Men often prefer the physical signs of a woman below peak fertility (under age twenty). It’s like signing a contract a year before you want to start the job” (p72).

Yet the association between indicators of youth and female attractiveness is a major theme of her book.

Thus, Etcoff reports that, in a survey of traditional cultures: 

The highest frequency of brides was in the twelve to fifteen years of age category… Girls at this age are preternaturally beautiful” (p57). 

It is perhaps true that “girls at this age are preternaturally beautiful” – and Etcoff, being female, can perhaps even get away with saying this without being accused of being a pervert or ‘paedophile’ for even suggesting such a thing. 

Nevertheless, this age of “twelve to fifteen” seems rather younger than most men’s, and even most teenage boys’, ideal sexual partners, at least in western societies.

Thus, for example, Kenrick and Keefe inferred from their data that around eighteen was the preferred age of sexual partner for most males, even those somewhat younger than this themselves.[4]

Of course, in primitive, non-western cultures, women may lose their looks more quickly, due to inferior health and nutrition, the relative unavailability of beauty treatments and because they usually undergo repeated childbirth from puberty onward, which takes a toll on their health and bodies. 

On the other hand, however, obesity is more prevalent in the West, decreases sexual attractiveness and increases with age. 

Moreover, girls in the west now reach puberty somewhat earlier than in previous centuries, and perhaps earlier than in the developing world, probably due to improved nutrition and health. This suggests that they develop the secondary sexual characteristics (e.g. large hips and breasts) that are perceived as attractive because they are indicators of fertility rather earlier than do girls in premodern or primitive cultures.

Perhaps Etcoff is right that girls “in the twelve to fifteen years of age category… are preternaturally beautiful” – though this is surely an overgeneralization and does not apply to every girl of this age. 

However, if ‘beauty’ peaks very early, I suspect ‘sexiness’ peaks rather later, perhaps late-teens into early or even mid-twenties. 

Thus, the latter is dependent on secondary sexual characteristics that develop only in late-puberty, namely larger breasts, buttocks and hips.

Thus, Etcoff reports, rather disturbingly, that: 

When [the] facial proportions [of magazine cover girls] are fed into a computer, it guesstimates their age to be between six and seven years of age” (p151; citing Jones 1995). 

But, of course, as Etcoff is at pains to emphasize in the very next sentence, the women pictured do not actually look as if they are of this age, either in their faces or, still less, in their bodies.

Instead, she cites Douglas Jones, the author of the study upon which this claim is based, as arguing that the neural network’s estimate of their age can be explained by their display of “supernormal stimuli”, which she defines as “attractive features… exaggerated beyond proportions normally found in nature (at least in adults)” (p151). 

Yet much the same could be said of the unrealistically large, surgically-enhanced breasts favored among, for example, glamour models. These abnormally large breasts are likewise an example of “supernormal stimuli” that may never be found naturally, as suggested by Doyle & Pazhoohi (2012).

But large breasts are indicators of sexual maturity that are rarely present in girls before their late-teens. 

In other words, if the beauty of girls’ faces peaks at a very young age, the sexiness of their bodies peaks rather later. 

Perhaps this distinction between what we can term ‘beauty’ and ‘sexiness’ can be made sense of in terms of a distinction between what David Buss calls short-term and long-term mating strategies

Thus, if fertility peaks in the mid-twenties, then, in respect of short-term mating (i.e. one-night stands, casual sex, hook-ups and other one-off sexual encounters), men should presumably prefer somewhat older partners than they do for long-term relationships – i.e. partners of maximal fertility rather than maximal reproductive value – since, in the case of short-term mating, there is no question of monopolizing the woman or girl’s long-term future reproductive output.

In contrast, cues of beauty, as evinced by relatively younger females, might trigger a greater willingness for males to invest in a long-term relationship. 

This ironically suggests that, contrary to contemporary popular perception, males’ sexual or romantic interest in relatively younger women and girls (i.e. those still in their teens) would tend to reflect more ‘honourable intentions’ (i.e. more focussed on marriage or a long-term relationship than on mere casual sex) than does their interest in older women.

However, as far as I am aware, no study has ever demonstrated differences in men’s preferences regarding the preferred age-range of their casual sex partners as compared to their preferences in respect of longer-term partners. This is perhaps because, since commitment-free casual sex is almost invariably a win-win situation for men, and most men’s opportunities in this arena are likely to be few and far between, there has been little selection acting on men to discriminate at all in respect of short-term partners.

Are There Sex Differences in Sexiness? 

Another major theme of ‘Survival of the Prettiest’ is that the payoffs for good-looks are greater for women than for men. 

Beauty is most obviously advantageous in a mating context. But women convert this advantage into an economic one through marriage. Thus, Etcoff reports: 

The best-looking girls in high school are more than ten times as likely to get married as the least good-looking. Better looking girls tend to ‘marry up’, that is, marry men with more education and income than they have” (p65; see also Udry & Eckland 1984; Hamermesh & Biddle 1994).

However, there is no such advantage accruing to better-looking male students. 

On the other hand, according to Catherine Hakim, in her book Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom (which I have reviewed here, here and here), in the workplace the wage premium associated with being better looking is actually, perhaps surprisingly, greater for men than for women.

For Hakim herself: 

This is clear evidence of sex discrimination… as all studies show women score higher than men on attractiveness” (Money, Honey: p246). 

However, as I explain in my review of her book, the better view is that, since beauty opens up so many other avenues to social advancement for women, notably through marriage, relatively more beautiful women correspondingly reduce their work-effort in the workplace, since they have no need to pursue social advancement through their careers when they can far more easily achieve it through marriage.

After all, why bother to earn money when you can simply marry it instead?

According to Etcoff, there is only one sphere where being more beautiful is actually disadvantageous for women, namely in respect of same-sex friendships: 

Good looking women in particular encounter trouble with other women. They are less liked by other women, even other good-looking women” (p50; citing Krebs & Adinolfi 1975).

She does not speculate as to why this is so. An obvious explanation is envy and dislike of the sexual competition that beautiful women represent. 

However, an alternative explanation is perhaps that beautiful women do indeed come to have less likeable personalities. Perhaps, having grown used to receiving preferential treatment from and being fawned over by men, beautiful women become entitled and spoilt. 

Men might overlook these flaws on account of their looks, but, other women, immune to their charms, may be a different story altogether.[5]

All this, of course, raises the question of why the payoffs for good looks are so much greater for women than for men.

Etcoff does not address this, but, from a Darwinian perspective, it is actually something of a paradox, which I have discussed previously.

After all, among other species, it is males for whom beauty affords a greater payoff in terms of the ultimate currency of natural selection – i.e. reproductive success. 

It is therefore male birds who usually evolve more beautiful plumages, while females of the same species are often quite drab, the classic example being the peacock and peahen

The ultimate evolutionary explanation for this pattern is called Bateman’s principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972). 

The basis of this theory is this: Females must make a greater minimal investment in offspring in order to successfully reproduce. For example, among humans, females must commit themselves to nine months pregnancy, plus breastfeeding, whereas a male must contribute, at minimum, only a single ejaculate. Females therefore represent the limiting factor in mammalian reproduction for access to whom males compete. 

One way in which they compete is by display (e.g. lekking). Hence the evolution of the elaborate tail of the peacock

Yet, among humans, it is females who seem more concerned with using their beauty to attract mates. 

Of course, women use makeup and clothing to attract men rather than growing or evolving long tails. 

However, behavior is no less subject to selection than morphology, so the paradox remains.[6]

Indeed, the most promising example of a morphological trait in humans that may have evolved primarily for attracting members of the opposite sex (i.e. a ‘peacock’s tail’) is, again, a female trait – namely, breasts.

This is, of course, the argument that was, to my knowledge, first developed by ethologist Desmond Morris in his book The Naked Ape, which I have reviewed here, and which I discuss in greater depth here

As Etcoff herself writes: 

Female breasts are like no others in the mammalian world. Humans are the only mammals who develop rounded breasts at puberty and keep them whether or not they are producing milk… In humans, breast size is not related to the amount or quality of milk that the breast produces” (p187).[7]

Instead, human breasts are, save during pregnancy and lactation, composed predominantly, not of milk, but of fat.

This is in stark contrast to the situation among other mammals, who develop breasts only during pregnancy. 

Breasts are not sex symbols to other mammals, anything but, since they indicate a pregnant or lactating and infertile female. To chimps, gorillas and orangutans, breasts are sexual turn-offs” (p187). 

Why then does sexual selection seem, at least on this evidence, to have acted more strongly on women than men? 

Richard Dawkins, in The Selfish Gene (which I have reviewed here), was among the first to allude to this anomaly, lamenting: 

What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (The Selfish Gene: p165). 

Yet this is surely not the case with regard to casual sex (i.e. hook-ups and one-night stands). Here, it is very much men who ardently pursue and women who are sought after. 

For example, in one study on a university campus, 72% of male students agreed to go to bed with a female stranger who propositioned them to this effect, yet not a single one of the 96 females approached agreed to the same request from a male stranger (Clark and Hatfield 1989).

(What percentage of the students sued the university for sexual harassment was not revealed.) 

Indeed, patterns of everything from prostitution to pornography consumption confirm this – see The Evolution of Human Sexuality (which I have reviewed here). 

Yet humans are unusual among mammals in also forming long-term pair-bonds where male parental investment is the norm. Here, men have every incentive to be as selective as females in their choice of partner. 

In particular, in Western societies practising what Richard Alexander called socially-imposed monogamy (i.e. where there exist large differentials in male resource holdings, but polygynous marriage is unlawful), competition among women for exclusive rights to resource-abundant alpha males may be intense (Gaulin and Boster 1990).

In short, the advantage to a woman in becoming the sole wife of a multi-millionaire is substantial. 

This, then, may explain the unusual intensity of sexual selection among human females. 

Why, though, is there not evidence of similar sexual selection operating among males? 

Perhaps the answer is that, since, in most cultures, arranged marriages are the norm, female choice actually played little role in human evolution. 

As Darwin himself observed in The Descent of Man, in explaining why intersexual selection seems, unlike among most other species, to have operated more strongly on human females than on males:

Man is more powerful in body and mind than woman, and in the savage state he keeps her in a far more abject state of bondage than does the male of any other animal; therefore it is not surprising that he should have gained the power of selection” (The Descent of Man).

Instead, male mating success may have depended less upon what Darwin called intersexual selection and more upon intrasexual selection – i.e. less upon female choice and more upon male-male fighting ability (see Puts 2010). 

Male Attractiveness and Fighting Ability 

Paradoxically, this is reflected even in the very traits that women find attractive in men. 

Thus, although Etcoff’s book is titled ‘Survival of the Prettiest’, and ‘prettiness’ is usually an adjective applied to women, and, when applied to men, is—perhaps tellingly—rarely a compliment, Etcoff does discuss male attractiveness too.

However, Etcoff acknowledges that male attractiveness is a more complex matter than female attractiveness: 

“We have a clearer idea of what is going on with female beauty. A handsome male turns out to be a bit harder to describe, although people reach consensus almost as easily when they see him” (p155).[8]

Yet what is notable about the factors that Etcoff describes as attractive among men is that they all seem to be related to fighting ability. 

This is most obviously true of height (p172-176) and muscularity (p176-180). 

Indeed, in a section titled “No Pecs, No Sex”, though she focuses on the role of pectoral muscles in determining attractiveness, Etcoff nevertheless acknowledges: 

“Pectoral muscles are the human male’s antlers. Their weapons of war” (p177). 

Thus, height and muscularity have obvious functional utility. 

This is in stark contrast to traits such as the peacock’s tail, which are often a positive handicap to their owner. Indeed, one influential theory of sexual selection, the so-called handicap principle, contends that it is precisely because such ornaments represent a handicap that they have evolved as sexually-selected fitness indicators: only a genetically superior male is capable of bearing the cost of so unwieldy an ornament, and possession of the handicap is therefore, paradoxically, an honest signal of health. 

Yet, if men’s bodies have evolved more for fighting than attracting mates, the same is perhaps less obviously true of their faces. 

Thus, anthropologist David Puts proposes: 

“Even [male] facial structure may be designed for fighting: heavy brow ridges protect eyes from blows, and robust mandibles lessen the risk of catastrophic jaw fractures” (Puts 2010: p168). 

Indeed, looking at the facial features of a highly dominant, masculine male face, like that of Mike Tyson, for example, one gets the distinct impression that, if you were foolish enough to try punching it, it would likely do more damage to your hand than to his face. 

Thus, if some faces are, as cliché contends, highly ‘punchable’, then others are presumably at the opposite end of this spectrum. 

This also explains some male secondary sexual characteristics that otherwise seem anomalous, for example, beards. These have actually been found in some studies “to decrease attractiveness to women, yet have strong positive effects on men’s appearance of dominance” (Puts 2010: p166). 

David Puts concludes: 

“Men’s traits look designed to make men appear threatening, or enable them to inflict real harm. Men’s beards and deep voices seem designed specifically to increase apparent size and dominance” (Puts 2010: p168). 

Interestingly, Etcoff herself anticipates this theory, writing: 

“Beautiful ornaments [in males] develop not just to charm the opposite sex with bright colors and lovely songs, but to intimidate rivals and win the intrasex competition—think of huge antlers. When evolutionists talk about the beauty of human males, they often refer more to their weapons of war than their charms, to their antlers rather than their bright colors. In other words, male beauty is thought to have evolved at least partly in response to male appraisal” (p74). 

Of course, these same traits are also often attractive to females. 

After all, if a tall, muscular man has higher reproductive success because he is better at fighting, then it pays women to preferentially mate with tall, muscular men, so that their male offspring will inherit these traits and hence themselves enjoy high reproductive success, helping to spread the women’s own genes by piggybacking on the superior male’s genes.  

This is a version of sexy son theory. 

In addition, males with fighting prowess are better able to protect and provision their mates. 

However, this attractiveness to females is obviously secondary to the primary role in male-male fighting. 

Moreover, Etcoff admits, highly masculine faces are not always attractive. 

Thus, unlike the “supernormal” or “hyperfeminine” female faces that men find most attractive in women, women rated “hypermasculine” male faces as less attractive (p158). This, she speculates, is because such faces are perceived as overaggressive and as belonging to men unlikely to invest in offspring. 

Whether such men are indeed less willing to invest in offspring Etcoff does not discuss, and there appears to be little direct evidence on the topic. However, the association of testosterone with both physiological and psychological masculinization suggests that the hypothesis is at least plausible. 

Etcoff concludes: 

“For men, the trick is to look masculine but not exaggeratedly masculine, which results in a ‘Neanderthal’ look suggesting coldness or cruelty” (p159). 

Examples of males with overly masculine faces are perhaps certain boxers, who tend to have highly masculine facial morphology (e.g. heavy brow ridges, deep-set eyes, wide muscular jaws), but who are rarely described as handsome. 

For example, I doubt anyone would ever call Mike Tyson handsome. But, then, no one would ever call him exactly ugly either – at least not to his face. 

An extreme example might be the Russian boxer Nikolai Valuev, whose extreme, Neanderthal-like physiognomy has been much remarked upon. 

Another example that sprang to mind was the footballer Wayne Rooney (also, perhaps not coincidentally, said to have been a talented boxer), who, when he first became famous, was immediately tagged by the media and comedians as ugly, despite, or indeed because of, his highly masculine, indeed thuggish, facial physiognomy. 

Likewise, Etcoff reports that large eyes are perceived as attractive in men, even though these are a neotenous trait, associated both with infants and, indeed, with female beauty (p158). 

This odd finding Etcoff attributes to the fact that large eyes, as an infantile trait, evoke women’s nurturance, a response that evolved in the context of parental investment rather than mate choice. 

Yet this is contrary to the general principle in evolutionary psychology of the modularity of mind and the domain specificity of psychological adaptations, whereby it is assumed that psychological adaptations for mate choice and for parental investment represent domain-specific modules with little or no overlap. 

Clearly, for psychological adaptations in one of these domains to be applied in the other would result in highly maladaptive behaviours, such as sexual attraction to infants or to one’s own close biological relatives.[9]

In addition to being more complex and less easy to make sense of than female beauty, male physical attractiveness is also of less importance in determining female mate choice than female beauty is in determining male mate choice. 

In particular, Etcoff acknowledges that male status often trumps handsomeness, quoting a delightfully cynical, if not especially poetic, line from the Roman poet Ovid: 

“Girls praise a poem, but go for expensive presents. Any illiterate oaf can catch their eye, provided he’s rich” (quoted: p75). 

A perhaps more memorable formulation of the same idea is quoted on the same page from a less illustrious source, namely boxing promoter, numbers racketeer and convicted killer Don King, on a subject I have already discussed, namely the handsomeness (or otherwise) of Mike Tyson. King remarked: 

“Any man with forty two million looks exactly like Clark Gable” (quoted: p75). 

Endnotes

[1] I perhaps belabor this rather obvious point only because one prominent evolutionary psychologist, Satoshi Kanazawa, argues that, since many aspects of beauty standards are cross-culturally universal, beauty standards are not ‘in the eye of the beholder’. I agree with Kanazawa on the substantive issue that beauty standards are indeed mostly cross-culturally universal among humans (albeit not entirely so). However, I nevertheless argue, perhaps somewhat pedantically, that beauty remains strictly in the ‘eye of the beholder’; it is simply that the ‘eye of the beholder’ (and the brain to which it is attached) has been shaped by a process of natural selection so as to make different humans share the same beauty standards. 

[2] While Jared Diamond has indeed made many original contributions to many fields, this idea does not in fact originate with him, even though Etcoff oddly cites him as a source. Indeed, as far as I am aware, it is not even especially associated with Diamond. Instead, it may actually originate with another, lesser-known, but arguably even more brilliant evolutionary biologist, namely George C Williams (Williams 1957). 

[3] Actually, pregnancy rates peak surprisingly young, perhaps even disturbingly young, with girls in their mid- to late-teens being most likely to become pregnant from any single act of sexual intercourse, all else being equal. However, the high pregnancy rates of teenage girls are said to be partially offset by their greater risk of birth complications. Therefore, female fertility is said to peak among women in their early- to mid-twenties.

[4] This Kenrick and Keefe inferred from, among other evidence, an analysis of lonely hearts advertisements, wherein, although the age of the female sexual/romantic partner sought was related to the advertised age of the man placing the ad (which Kenrick and Keefe inferred was a reflection of the fact that his own age delimited the age-range of the sexual partners whom he would be able to attract, and whom it would be socially acceptable for him to seek out), nevertheless the older the man, the greater the age-difference he sought in a partner. In addition, they reported survey evidence suggesting that, in contrast to older men, younger teenage boys, in an ideal world, actually preferred somewhat older sexual partners, suggesting that the ideal age of sexual partner for males of any age was around eighteen years of age (Kenrick & Keefe 1992).

[5] Etcoff also does not discuss whether the same is true of exceptionally handsome men – i.e. do exceptionally handsome men, like beautiful women, also have problems maintaining same-sex friendships? I suspect not, since male status and self-esteem are not usually based on handsomeness as such – though they may be based on things related to handsomeness, such as height, athleticism, earnings, and perceived ‘success with women’. Interestingly, however, the French novelist Michel Houellebecq suggests otherwise in his novel Whatever, in which, after describing the jealousy of one of the main characters, the short, ugly Raphael Tisserand, towards a particularly handsome male colleague, he writes: 

“Exceptionally beautiful people are often modest, gentle, affable, considerate. They have great difficulty in making friends, at least among men. They’re forced to make a constant effort to try and make you forget their superiority, be it ever so little” (Whatever: p63). 

[6] Thus, in other non-human species, behaviour is often subject to sexual selection, in, for example, mating displays, or the remarkable, elaborate and often beautiful, but non-functional, nests built by male bowerbirds, which Geoffrey Miller sees as analogous to human art. 

[7] An alternative theory for the evolution of human breasts is that they evolved, not as a sexually selected ornament, but rather as a storehouse of nutrients, analogous to the camel’s hump, upon which women can draw during pregnancy. On this view, the sexual dimorphism of their presentation (i.e. the fact that, although men do have breasts, they are usually much less developed than those of women) reflects, not sexual selection, but rather the caloric demands of pregnancy. 
However, these two alternative hypotheses are not mutually exclusive. On the contrary, they may be mutually reinforcing. Thus, Etcoff herself mentions the possibility that breasts are attractive precisely because: 

“Breasts honestly advertise the presence of fat reserves needed to sustain a pregnancy” (p178). 

On this view, men see fatty breasts as attractive in a sex partner precisely because only women with sufficient reserves of fat to grow large breasts are likely to be capable of successfully gestating an infant for nine months. 

[8] Personally, as a heterosexual male, I have always had difficulty recognizing ‘handsomeness’ in men, and I found this part of Etcoff’s book especially interesting for this reason. In my defence, this is, I suspect, partly because many rich and famous male celebrities are celebrated as ‘sex symbols’ and described as ‘handsome’ even though their status as ‘sex symbols’ owes more to the fact that they are rich and famous than to their actual looks. Thus, male celebrities sometimes become sex symbols despite their looks, rather than because of them. Many famous rock stars, for example, are not especially handsome but nevertheless succeed in becoming highly promiscuous and much sought after by women and girls as sexual and romantic partners. In contrast, men did not suddenly start idealizing fat or otherwise physically unattractive female celebrities as sexy and beautiful simply because they are rich, famous celebrities.
Add to this the fact that much of what passes for good looks in both sexes is, ironically, normalness – i.e. averageness and a lack of abnormalities – and the knack of identifying which men women consider ‘handsome’ had, before reading Etcoff’s book, always escaped me.
However, Etcoff, for her part, might well call me deluded. Men, she reports, only claim that they cannot tell which men are handsome and which are not, perhaps to avoid being accused of homosexuality: 

“Although men think they cannot judge another man’s beauty, they agree among themselves and with women about which men are the handsomest” (p138). 

Nevertheless, there is indeed some evidence that judging male handsomeness is not as clear-cut as Etcoff seems to suggest. Thus, it has been found, not only that men claim to have difficulty telling handsome men from ugly men, but also that women are more likely than men to disagree among themselves about the physical attractiveness of members of the opposite sex (Wood & Brumbaugh 2009; Wake Forest University 2009). 
Indeed, not only do women not always agree with one another regarding the attractiveness of men, sometimes they can’t even agree with themselves. Thus, Etcoff reports: 

“A woman makes her evaluations of men more slowly, and if another woman offers a different opinion, she may change her mind” (p76). 

This indecisiveness, for Etcoff, actually makes good evolutionary sense:

“If women take a second look, compare notes with other women, or change their minds after more thought, it is not out of indecisiveness but out of wisdom. Mate choice is not just about fertility—most men are fertile most or all of their lives—but about finding a helpmate to bring up the baby” (p77). 

Another possible reason why women may consult other women as to whether a given man is attractive or not is, once again, sexy son theory.
On this view, it pays for women to mate with men who are perceived as attractive by other women, because any offspring whom they bear by these men will likely inherit the very traits that made the father attractive, and hence themselves be attractive to women, thereby successfully spreading the woman’s own genes to subsequent generations. 
In other words, being attractive to other women is itself an attractive trait in a male. However, sexy son theory is not discussed by Etcoff.

[9] Another study discussed by Etcoff also reported anomalous results, finding that women actually preferred somewhat feminized male faces over both masculinized and average male faces (Perrett et al 1998). However, Etcoff cautions that: 

“The Perrett study is the only empirical evidence to date that some degree of feminization may be attractive in a man’s face” (p159). 

Other studies concur that male faces that are somewhat, but not excessively, masculinized as compared to the average male face are preferred by women. 
However, one study published just after the first edition of ‘Survival of the Prettiest’ was written holds out the possibility of reconciling these conflicting findings. This study reported cyclical changes in female preferences, with women preferring more masculinized faces only when they are in the most fertile phase of their cycle, and at other times preferring more feminine features (Penton-Voak & Perrett 2000). 
This, together with other evidence, has been controversially interpreted as suggesting that human females practise a so-called dual mating strategy, preferring males with more feminine faces, supposedly a marker for a greater willingness to invest in offspring, as social partners, while surreptitiously attempting to cuckold these ‘beta providers’ with DNA from high-testosterone alpha males, by preferentially mating with the latter when they are most likely to be ovulating (see also Penton-Voak et al 1999; Bellis & Baker 1990). 
However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology. 
While the intensity of women’s sex drive does indeed seem to fluctuate cyclically, the evidence for more fine-grained changes in female mate preferences should be treated with caution. 

References 

Bateman (1948) Intra-sexual selection in Drosophila. Heredity 2(3): 349–368. 
Bellis & Baker (1990) Do females promote sperm competition? Data for humans. Animal Behaviour 40: 997–999. 
Clark & Hatfield (1989) Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality 2(1): 39–55. 
Doyle & Pazhoohi (2012) Natural and augmented breasts: Is what is not natural most attractive? Human Ethology Bulletin 27(4): 4–14. 
Gaulin & Boster (1990) Dowry as female competition. American Anthropologist 92(4): 994–1005. 
Gildersleeve et al (2014) Do women’s mate preferences change across the ovulatory cycle? A meta-analytic review. Psychological Bulletin 140(5): 1205–1259. 
Hamermesh & Biddle (1994) Beauty and the labor market. American Economic Review 84(5): 1174–1194.
Jones (1995) Sexual selection, physical attractiveness, and facial neoteny: Cross-cultural evidence and implications. Current Anthropology 36(5): 723–748. 
Kenrick & Keefe (1992) Age preferences in mates reflect sex differences in mating strategies. Behavioral and Brain Sciences 15(1): 75–133. 
Orians & Heerwagen (1992) Evolved responses to landscapes. In Barkow, Cosmides & Tooby (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 555–579). Oxford University Press. 
Penton-Voak et al (1999) Menstrual cycle alters face preference. Nature 399: 741–742. 
Penton-Voak & Perrett (2000) Female preference for male faces changes cyclically: Further evidence. Evolution and Human Behavior 21(1): 39–48. 
Perrett et al (1998) Effects of sexual dimorphism on facial attractiveness. Nature 394(6696): 884–887. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans. Evolution and Human Behavior 31(3): 157–175. 
Trivers (1972) Parental investment and sexual selection. In Sexual Selection and the Descent of Man (pp. 136–179). Aldine, Chicago. 
Udry & Eckland (1984) Benefits of being attractive: Differential payoffs for men and women. Psychological Reports 54(1): 47–56.
Wake Forest University (2009) Rating attractiveness: Consensus among men, not women, study finds. ScienceDaily, 27 June 2009. 
Williams (1957) Pleiotropy, natural selection, and the evolution of senescence. Evolution 11(4): 398–411. 
Wood & Brumbaugh (2009) Using revealed mate preferences to evaluate market force and differential preference explanations for mate selection. Journal of Personality and Social Psychology 96(6): 1226–1244.
Wood et al (2014) Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review 6(3): 229–249.  

Desmond Morris’s ‘The Naked Ape’: A Pre-Sociobiological Work of Human Ethology 

Desmond Morris, The Naked Ape: A Zoologist’s Study of the Human Animal (New York: McGraw-Hill, 1967)

First published in 1967, ‘The Naked Ape’, a popular science classic authored by the already-famous British zoologist and TV presenter Desmond Morris, belongs to the pre-sociobiological tradition of human ethology. 

In the most general sense, the approach adopted by the human ethologists, who included, not only Morris, but also playwright Robert Ardrey, anthropologists Lionel Tiger and Robin Fox, and the brilliant Nobel Prize-winning ethologist, naturalist, zoologist, pioneering evolutionary epistemologist and part-time Nazi sympathizer Konrad Lorenz, was correct. 

They sought to study the human species from the perspective of zoology. In other words, they sought to adopt the disinterested perspective, and detachment, of, as Edward O Wilson was later to put it, “zoologists from another planet” (Sociobiology: The New Synthesis: p547). 

Thus, Morris proposed cultivating: 

“An attitude of humility that is becoming to proper scientific investigation… by deliberately and rather coyly approaching the human being as if he were another species, a strange form of life on the dissecting table” (p14-15).  

In short, Morris proposed to study humans just as a zoologist would any other species of non-human animal. 

Such an approach was an obvious affront to anthropocentric notions of human exceptionalism – and also a direct challenge to the rather less scientific approach of most sociologists, psychologists, social and cultural anthropologists and other such ‘professional damned fools’, who, at that time, almost all studied human behavior in isolation from, and largely ignorance of, biology, zoology, evolutionary theory and the scientific study of the behavior of all animals other than humans. 

As a result, such books inevitably attracted controversy and criticism. Such criticism, however, invariably missed the point. 

The real problem was not that the ethologists sought to study human behavior in just the same way a zoologist would study the behavior of any nonhuman animal, but rather that the study of the behavior of nonhuman animals itself remained, at this time, very much in its infancy. 

Thus, the field of animal behavior was to be revolutionized, just a decade or so after the publication of ‘The Naked Ape’, by the approach that came to be known first as sociobiology, now more often as behavioral ecology, or, when applied to humans, evolutionary psychology. 

These approaches, based on what became known as selfish gene theory, sought to understand behavior in terms of fitness maximization – in other words, on the basis of the recognition that organisms have evolved to engage in behaviors which tended to maximize their reproductive success in ancestral environments. 

Mathematical models, often drawn from economics and game theory, were increasingly employed. In short, behavioral biology was becoming a mature science. 

In contrast, the earlier ethological tradition was, even at its best, very much a soft science. 

Indeed, much such work, for example Jane Goodall’s rightly-celebrated studies of the chimpanzees of Gombe, was almost pre-scientific in its approach, involving observation, recording and description of behaviors, but rarely the actual testing or falsification of hypotheses. 

Such research was obviously important. Indeed, Goodall’s was positively groundbreaking. 

After all, the observation of the behavior of an organism is almost a prerequisite for the framing of hypotheses about the behavior of that organism, since hypotheses are, in practice, rarely generated in an informational vacuum from pure abstract theory. 

However, such research was hardly characteristic of a mature and rigorous science. 

When hypotheses regarding the evolutionary significance of behavior patterns were formulated by early ethologists, this was done on a rather casual, ad hoc basis, involving a kind of ‘armchair adaptationism’ that could perhaps legitimately be dismissed as the spinning of, in Stephen Jay Gould’s famous phrase, just so stories. 

Thus, a crude group selectionism went largely unchallenged. Yet, as George C Williams was to show, and Richard Dawkins later to forcefully reiterate in The Selfish Gene (reviewed here), behaviours are unlikely to evolve that benefit the group or species if they involve a cost to the inclusive fitness or reproductive success of the individual engaging in the behavior. 

Robert Wright picks out a good example of this crude group selectionism from ‘The Naked Ape’ itself, quoting Morris’s claim that, over the course of human evolution: 

“To begin with, the males had to be sure that their females were going to be faithful to them when they left them alone to go hunting. So the females had to develop a pairing tendency” (p64). 

To anyone schooled in the rudiments of Dawkinsian selfish gene theory, the fallacy should be obvious. But, just in case we didn’t spot it, Wright has picked it out for us: 

“Stop right there. It was in the reproductive interests of the males for the females to develop a tendency toward fidelity? So natural selection obliged the males by making the necessary changes in the females? Morris never got around to explaining how, exactly, natural selection would perform this generous feat” (The Moral Animal: p56). 

In reality, couples have a conflict of interest here, and the onus is clearly on the male to evolve some mechanism of mate-guarding, though a female might conceivably evolve some way to advertise her fidelity if, by so doing, she secured increased male parental investment and provisioning, hence increasing her own reproductive success.[1]

In short, mating is Machiavellian. A more realistic view of human sexuality, rooted in selfish gene theory, is provided by Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here). 

Unsuccessful Societies? 

The problems with ‘The Naked Ape’ begin in the very first chapter, where Morris announces, rather oddly, that, in studying the human animal, he is largely uninterested in the behavior of contemporary foraging groups or other so-called ‘primitive’ peoples. Thus, he bemoans: 

“The earlier anthropologists rushed off to all kinds of unlikely corners of the world… scattering to remote cultural backwaters so atypical and unsuccessful that they are nearly extinct. They then returned with startling facts about the bizarre mating customs, strange kinship systems, or weird ritual procedures of these tribes, and used this material as though it were of central importance to the behaviour of our species as a whole. The work done by these investigators… did not tell us anything about the typical behaviour of typical naked apes. This can only be done by examining the common behaviour patterns that are shared by all the ordinary, successful members of the major cultures – the mainstream specimens who together represent the vast majority. Biologically, this is the only sound approach” (p10).[2]

Today, of course, political correctness has wholly banished the word ‘primitive’ from the anthropological lexicon. It is, modern anthropologists insist, demeaning and pejorative.  

Indeed, post-Boasian cultural anthropologists in America typically reject the very notion that some societies are more advanced than others, championing instead a radical cultural relativism and insisting we have much to learn from the lifestyle and traditions of hunter-gatherers, foragers, savage cannibals and other such ‘indigenous peoples’. 

Morris also rejects the term ‘primitive’ as a useful descriptor for hunter-gatherer and other technologically-backward peoples, but for diametrically opposite reasons. 

Thus, for Morris, to describe foraging groups as ‘primitive’ is to rather give them altogether too much credit: 

“The simple tribal groups that are living today are not primitive, they are stultified. Truly primitive tribes have not existed for thousands of years. The naked ape is essentially an exploratory species and any society that has failed to advance has in some sense failed, ‘gone wrong’. Something has happened to it to hold it back, something that is working against the natural tendencies of the species to explore and investigate the world around it” (p10). 

Instead, Morris proposes to focus on contemporary western societies, declaring: 

“North America… is biologically a very large and successful culture and can, without undue fear of distortion, be taken as representative of the modern naked ape” (p51). 

It is indeed true that, with the diffusion of American media and consumer goods, American culture is fast becoming ubiquitous. However, this is a very recent development in historical terms, let alone on the evolutionary timescale of most interest to biologists. 

Indeed, viewed historically and cross-culturally, it is we westerners who are the odd, aberrant ones. 

Thus, we have even been termed, in a memorable backronym, WEIRD (Western, Educated, Industrialized, Rich and Democratic), and hence quite aberrant, not only in terms of our lifestyle and prosperity, but also in terms of our psychology and modes of thinking. 

Moreover, while extant foraging groups, and other pre-modern peoples that have survived into modern times, may indeed now be tottering on the brink of extinction, this, again, is a very recent development in evolutionary terms. 

Indeed, far from being aberrant, this was the lifestyle adopted by all humans throughout most of the time we have existed as a species, including during the period when most of our unique physical and behavioural adaptations evolved

In short, although we may inhabit western cities today, this is not the environment where we evolved, nor that to which our brains and bodies are primarily adapted.[3]

Therefore, given that theirs represents the lifestyle of our ancestors during the period when most of our behavioral and bodily adaptations evolved, primitive peoples must necessarily occupy a special place in any evolutionary theory of human behaviour.[4]

Indeed, Morris himself admits as much just a few pages later, where he acknowledges that: 

“The fundamental patterns of behavior laid down in our early days as hunting apes still shine through all our affairs, no matter how lofty they may be” (p40). 

Indeed, a major theme of ‘The Naked Ape’ is the extent to which the behaviour even of wealthy white westerners is nevertheless fundamentally shaped and dictated by the patterns of foraging set out in our ancient hunter-gatherer past. 

This, of course, anticipates the concept of the environment of evolutionary adaptedness (or EEA) in modern evolutionary psychology

Thus, Morris suggests that the pattern of men going out to work to financially provision wives and mothers who stay home with dependent offspring reflects the ancient role of men as hunters provisioning their wives and children: 

“Behind the façade of modern city life there is the same old naked ape. Only the names have been changed: for ‘hunting’ read ‘working’, for ‘hunting grounds’ read ‘place of business’, for ‘home base’ read ‘house’, for ‘pair-bond’ read ‘marriage’, for ‘mate’ read ‘wife’, and so on” (p84).[5]

In short, while we must explain the behaviors of contemporary westerners, no less than those of primitive foragers, in the light of Darwinian evolution, nevertheless all such behaviors must be explained ultimately in terms of adaptations that evolved over previous generations under very different conditions. 

Indeed, in the sequel to ‘The Naked Ape’, Morris further develops this very point, arguing that modern cities, in particular, are unnatural environments for humans, and rejecting the then-familiar description of cities as concrete jungles on the grounds that, whereas jungles are the “natural habitat” of wild animals, modern cities are very much an unnatural habitat for humans. 

Instead, he argues, the better analogy for the modern city is a human zoo. 

“The comparison we must make is not between the city dweller and the wild animal but between the city dweller and the captive animal. The city dweller is no longer living in conditions natural for his species. Trapped, not by a zoo collector, but by his own brainy brilliance, he has set himself up in a huge restless menagerie where he is in constant danger of cracking under the strain” (The Human Zoo: pvii). 

Nakedness 

Morris adopts what he calls a zoological approach. Thus, unlike modern evolutionary psychologists, he focuses as much on explaining our physiology and morphology as on our behavior and psychology. Indeed, it is in explaining the peculiarities of human anatomy that Morris is at his best.[6]

This begins, appropriately enough, with the trait that gives him his preferred name for our species, and also furnishes his book with its title – namely our apparent nakedness or hairlessness. 

Having justified calling us ‘The Naked Ape’ on zoological grounds, namely on the ground that this is the first thing the naturalist would notice upon observing our species, Morris then comes close to contradicting himself, admitting that, given the densely concentrated hairs on our heads (as well as the much less densely packed hairs which also cover much of the remainder of our bodies), we actually have more hairs on our bodies than do our closest relatives, chimpanzees.[7]

However, Morris summarily dispatches this objection: 

“It is like saying that because a blind man has a pair of eyes, he is not blind. Functionally, we are stark naked and our skin is fully exposed” (p42). 

Why then are we so strangely hairless? Neoteny, Morris proposes, provides part of the answer. 

This refers to the tendency of humans to retain into maturity traits that are, in other primates, restricted to juveniles, nakedness among them. 

Neoteny is a major theme in Morris’s book – and indeed in human evolution.

Besides our hairlessness, other human anatomical features that have been explained either partly or wholly in terms of neoteny, whether by Morris or by other evolutionists, include our brain size, growth patterns, inventiveness, upright posture, spinal curvature, smaller jaws and teeth, forward-facing vaginas, lack of a penis bone, the length of our limbs and the retention of the hymen into sexual maturity (see below). Indeed, many of these traits are explicitly discussed by Morris himself as resulting from neoteny.

However, while neoteny may supply the means by which our relative hairlessness evolved, it is not a sufficient explanation for why this development occurred, because, as Morris points out: 

“The process of neoteny is one of the differential retarding of developmental processes” (p43). 

In other words, humans are neotenous in respect of only some of our characters, not all of them. After all, an ape that remained infantile in all respects would never evolve, for the simple reason that it would never reach sexual maturity and hence remain unable to reproduce. 

Instead, only certain specific juvenile or infantile traits are retained into adulthood, and the question then becomes why these specific traits were the ones chosen by natural selection to be retained. 

Thus, Morris concludes: 

“It is hardly likely… that an infantile trait as potentially dangerous as nakedness was going to be allowed to persist simply because other changes were slowing down unless it had some special value to the new species” (p43). 

As to what this “special value” (i.e. selective advantage) might have been, Morris considers, in turn, various candidates.  

One theory considered by Morris relates to our susceptibility to insect parasites.  

Because humans, unlike many other primates, return to a home base to sleep most nights, we are, Morris reports, afflicted with fleas as well as lice (p28-9). Yet fur, Morris observes, is a good breeding ground for such parasites (p38-9). 

Perhaps, then, Morris imagines, we might have evolved hairlessness in order to minimize the problems posed by such parasites. 

However, Morris rejects this as an adequate explanation, since, he observes: 

“Few other den dwelling mammals… have taken this step” (p43). 

An alternative explanation implicates sexual selection in the evolution of human hairlessness.  

Substantial sex differences in hairiness, as well as the retention of pubic hairs around the genitalia, suggest that sexual selection may indeed have played a role in the evolution of our relative hairlessness as compared to other mammals.

Interestingly, this was Darwin’s own proposed explanation for the loss of body hair during the course of our evolution, the latter writing in The Descent of Man that:

“No one supposes that the nakedness of the skin is any direct advantage to man; his body therefore cannot have been divested of hair through natural selection” (The Descent of Man).

Darwin instead proposes:

“Since in all parts of the world women are less hairy than men… we may reasonably suspect that this character has been gained through sexual selection” (The Descent of Man).

Morris, however, rejects this explanation on the grounds that: 

“The loss of bodily insulation would be a high price to pay for a sexy appearance alone” (p46). 

But other species often pay a high price for sexually selected bodily adornments. For example, the peacock sports a huge, brightly coloured and elaborate tail which is costly to grow and maintain, impedes his mobility and is conspicuous to predators. Yet this elaborate tail is thought to have evolved through sexual selection or female choice.

Indeed, according to Amotz Zahavi’s handicap principle, it is precisely the high cost of such sexually-selected adornments that made them reliable fitness indicators and hence attractive to potential mates, because only a highly ‘fit’, and hence attractive, male can afford to grow such a costly, inconvenient and otherwise useless appendage. 

Morris also gives unusually respectful consideration to the highly-controversial aquatic ape theory as an explanation for human hairlessness. 

Thus, if humans did indeed pass through an aquatic, or at least amphibious, stage during our evolution, then, Morris agrees, this may indeed explain our hairlessness, since it is indeed true that other aquatic or semiaquatic mammals, such as whales, dolphins and seals, also seem to have jettisoned most of their fur over the course of their evolution. 

This is presumably because fur increases frictional drag while in the water and hence impedes swimming ability, and is among the reasons that elite swimmers also remove their body-hair before competition. 

Indeed, our loss of body hair is among the human anatomical peculiarities that are most often cited by champions of aquatic ape theory in favor of the theory that humans did indeed pass through an aquatic phase during our evolution. 

However, aquatic ape theory is highly controversial, and is rejected by almost all mainstream evolutionists and biological anthropologists.  

As I have said, Morris, for his part, gives respectful consideration to the theory, and, unlike many other anthropologists and evolutionists, does not dismiss it out of hand as entirely preposterous and unworthy even of further consideration.[8]

On the contrary, Morris credits the theory as “ingenious”, acknowledging that, if true, it might explain many otherwise odd features of human anatomy, including not just our relative hairlessness, but also the retention of hairs on our head, the direction of the hairs on our backs, our upright posture, ‘streamlined’ bodies, dexterity of our hands and the thick extra layer of sub-cutaneous fat beneath our skin that is lacking in other primates. 

However, while acknowledging that the theory explains many curious anomalies of human physiology, Morris ultimately rejects ‘aquatic ape theory’ as altogether too speculative given the complete lack of fossil evidence in support of the theory – the same reason that most other evolutionists also reject the theory. 

Thus, he concludes: 

“It demands… the acceptance of a hypothetical major evolutionary phase for which there is no direct evidence” (p45-6). 

Morris also rejects the theory that was, according to Morris himself, the most widely accepted explanation for our hairlessness among other evolutionists at the time he was writing – namely the theory that our hairlessness evolved as a cooling mechanism when our ancestors left the shaded forests for the open African savannah.

The problem with this theory, as Morris explains it, is that:  

“Exposure of the naked skin to the air certainly increases the chances of heat loss, but it also increases heat gain at the same time and risks damage from the sun’s rays” (p47). 

Thus, it is not at all clear that moving into the open savannah would indeed select for hairlessness. Otherwise, as Morris points out, we might expect other carnivorous, predatory mammals such as lions and jackals, who also inhabit the savannah, to have similarly jettisoned most of their fur. 

Ultimately, however, Morris accepts instead a variant on this idea – namely that hairlessness evolved to prevent overheating while chasing prey when hunting. 

However, this fails to explain why it is men’s bodies that are generally much hairier than those of women, even though, cross-culturally, in most foraging societies, it is men who do most, if not all, of the hunting.

It also raises the question as to why other mammalian carnivores, such as lions and jackals, including some that also inhabit the African savannah and other similar environments, have not similarly shed their body hair, especially since the latter rely more on their speed to catch prey, whereas humans, armed with arrows and javelins as well as hunting dogs, do not always have to catch their prey by hand in order to kill it. 

I would tentatively venture an alternative theory, one which evidently did not occur to Morris – namely, perhaps our hairlessness evolved in concert with our invention and use of clothing (e.g. animal hides) – i.e. a case of gene-culture coevolution.

Clothing would provide an alternative means of protection from both sun and cold alike, but one that has the advantage that, unlike bodily fur, it can be discarded (and put back on) on demand. 

This explanation suggests that, paradoxically, we became naked apes at the same time, and indeed precisely because, we had also become clothed apes. 

The Sexiest Primate? 

One factor said to have contributed to the book’s commercial success was the extent to which its thesis chimed with the prevailing spirit of the age during which it was first published, namely the 1960s. 

Thus, as already alluded to, it presented, in many ways, an idealized and romantic version of human nature, with its crude group-selectionism and emphasis on cooperation within groups without a concomitant emphasis on conflict between groups, and its depiction of humans as a naturally monogamous pair-bonding species, without a concomitant emphasis on the prevalence of infidelity, desertion, polygamy, Machiavellian mating strategies and even rape.  

Another element that jibed with the zeitgeist of the sixties was Morris’s emphasis on human sexuality, with Morris famously declaring: 

“The naked ape is the sexiest primate alive” (p64). 

Are humans indeed the ‘sexiest’ of primates? How can we assess this claim? It depends, of course, on precisely how we define ‘sexiness’. 

Obviously, if beauty is in the eye of the beholder, then sexiness is located in a rather different part of the male anatomy, but equally subjective in nature. 

Thus, humans like ourselves find other humans more sexy than other primates (or most of us do) because we have evolved to do so. A male chimpanzee, however, would likely disagree and regard a female chimpanzee as sexier. 

However, Morris presumably has something else in mind when he describes humans as the “sexiest” of primates. 

What he seems to mean is that sexuality and sexual behavior permeates the life of humans to a greater degree than for other primates. Thus, for example, he cites as evidence the extended or continuous sexual receptivity of human females, writing: 

“There is much more intense sexual activity in our own species than in any other primates” (p56). 

However, the claim that sexuality and sexual behavior permeates the life of humans to a greater degree than for other primates is difficult to maintain when you have read about the behavior of some of our primate cousins. Thus, for example, both chimpanzees and especially bonobos, our closest relatives among extant non-human primates, are far more promiscuous than all but the sluttiest of humans.

Indeed, one might cynically suggest that what Morris had most in mind when he described humans as “the sexiest primate alive” was simply a catchy marketing soundbite that very much tapped into the zeitgeist of the era (i.e. the 1960s) and might help boost sales for his book. 

Penis Size

As further evidence for our species’ alleged “sexiness” Morris also cites the supposedly unusually large size of the human penis, reporting: 

The [human] male has the largest penis of any primate. It is not only extremely long when fully erect, but also very thick when compared with the penises of other species” (p80). 

This claim, namely that the human male has an unusually large penis, may originate with Morris, and has certainly since enjoyed wide currency in subsequent decades. 

Thus, competing theories have been formulated to account for the (supposedly) unusual size of our penes.

One idea is that our large penes evolved through sexual selection, more specifically female choice, with females preferring either the appearance, or the internal ‘feel’, of a large penis during coitus, and hence selecting for increased penis size among men (e.g. Mautz et al 2013; The Mating Mind: p234-6).

Of course, one might argue that the internal ‘feel’ of a large penis during intercourse is a bit late for mate choice to operate, since, by this time, the choice in question has already been made. Indeed, in cultures where, prior to the immediate initiation of sexual intercourse, the genitalia are usually covered with clothing, even exercising mate choice on the basis of the external appearance of the penis, especially of an erect penis, might prove difficult or, at the very least, socially awkward.

However, given that, in humans, most sexual intercourse is non-reproductive (i.e. does not result in conception, let alone in offspring), the idea is not entirely implausible.

This idea, namely that our large penes evolved through sexual selection, dovetails neatly with Richard Dawkins’ tentative suggestion in an endnote appended to later editions of The Selfish Gene (reviewed here) that the capacity to maintain an erection (presumably especially a large erection) without a penis bone (since most other primates do possess a penis bone) may function as an honest signal of health in accordance with Zahavi’s handicap principle, an idea I have previously discussed here (The Selfish Gene: p307-8).

An alternative explanation for the relatively large size of our penes implicates sperm competition. On this view, human penes are designed to remove sperm deposited by rival males in the female reproductive tract by functioning as a “suction piston” during intercourse, as I discuss below (Human Sperm Competition: p170-171; Gallup & Burch 2004; Gallup et al 2004; Goetz et al 2005; Goetz et al 2007). 

Yet, in fact, according to Alan F Dixson, the human penis is not unusually long by primate standards, being roughly the same length as that of the chimpanzee (Sexual Selection and the Origins of Human Mating Systems: p64). 

Instead, Dixson reports: 

“The erect human penis is comparable in length to those of other primates, in relation to body size. Only its circumference is unusual when compared to the penes of other hominids” (Sexual Selection and the Origins of Human Mating Systems: p65). 

The human penis is unusual, then, only in its width or girth. 

As to why our penes are so wide, the answer is quite straightforward, and has little to do with the alleged ‘sexiness’ of the human species, whatever that means. 

Instead, it is a simple, if indirect, reflection of our increased brain-size.

Increased brain-size first selected for changes in the size and shape of female reproductive anatomy. This, in turn, led to changes in male reproductive anatomy.

Thus, Bowman suggests: 

“As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008). 

Similarly, Robin Baker and Mark Bellis report: 

“The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: Copulation, Masturbation and Infidelity: p171). 

In turn, larger vaginas select for larger penises in order to fill this larger vagina (Bowman 2008).  

Interestingly, this theory directly contradicts the alleged claim of infamous race scientist Philippe Rushton (whose work I have reviewed here) that there is an inverse correlation between brain-size and penis-size, which relationship supposedly explains race differences in brain and genital size. Thus, Rushton was infamously quoted as observing: 

“It’s a trade off, more brains or more penis. You can’t have everything.”[9]

On the contrary, this analysis suggests that, at least as between species (and presumably as between sub-species, i.e. races, as well), there is a positive correlation between brain-size and penis-size.[10]

According to Baker and Bellis, one reason male penis size tracks that of female vagina size (both being relatively large, and especially wide, in humans) is that the penis functions as, in Baker and Bellis’s words, a “suction piston” during intercourse, the repeated thrusting functioning to remove any sperm previously deposited by rival males – a form of sperm competition.

Thus, they report:

“In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171). 

Interestingly, this theory – namely that the human penis functions as a sperm displacement device – although seemingly fanciful, actually explains some otherwise puzzling aspects of human coitus (and presumably coitus in some other species too), such as its relatively extended duration, the male refractory period and the related Coolidge effect – i.e. why a male cannot recommence intercourse immediately after orgasm, unless perhaps with a new female (though this exception has yet to be experimentally demonstrated in humans), since to do so would maladaptively remove his own sperm from the female reproductive tract. 

Though seemingly fanciful, this theory even has some empirical support (Gallup & Burch 2004; Goetz et al 2005; Goetz et al 2007), including some delightful experiments involving sex toys of various shapes and sizes (Gallup et al 2004). 

Morris writes:

“[Man] is proud that he has the biggest brain of all the primates, but attempts to conceal the fact that he also has the biggest penis, preferring to accord this honor falsely to the mighty gorilla” (p9). 

Actually, the gorilla, mighty though he indeed may be, has relatively small genitalia. This is on account of his polygynous, but non-polyandrous, mating system, which involves minimal sperm competition.[11]

Moreover, the largeness of our brains, in which, according to Morris, we take such pride, may actually be the cause of the largeness of our penes, for which, according to Morris, we have such shame (here, he speaks for few men). 

Thus, large brains required larger heads which, in turn, required larger vaginas in order to successfully birth larger-headed babies. This in turn selected for larger penises to fill the larger vagina. 

In short, the large size, or rather large girth/width, of our penes has less to do with our being the “sexiest primate” and more to do with our being the brainiest.

Female Breasts

In addition to his discussion of human penis size, Morris also argues that various other features of human anatomy that are not usually associated with sex nevertheless evolved, in part, due to their role in sexual signaling. These include our earlobes (p66-7), everted lips (p68-70) and, tentatively and rather bizarrely, perhaps even our large fleshy noses (p67). 

He makes the most developed and persuasive case, however, in respect of another physiological peculiarity of the human species, and of human females in particular, namely the female breasts.

Thus, Morris argues: 

“For our species, breast design is primarily sexual rather than maternal in function” (p106). 

“The evolution of protruding breasts of a characteristic shape appears to be yet another example of sexual signalling” (p70). 

As evidence, he cites the differences in shape between women’s breasts and both the breasts of other primates and the design of baby bottles (p93). In short, the shape of human breasts does not seem ideally conducive to nursing alone. 

The notion that breasts have a secondary function as sexual advertisements is indeed compelling. In most other mammals, large breasts develop only during pregnancy, but human breasts are permanent, developing at puberty, and, except during pregnancy and lactation, composed predominantly of fat not milk (see Møller et al 1995; Manning et al 1997; Havlíček et al 2016). 

On the other hand, it is difficult to envisage how breasts ever first became co-opted as a sexually-selected ornament. 

After all, the presence of developed breasts on a female would originally, as among other primates, have indicated that the female in question was pregnant, and hence infertile. There would therefore initially have been strong selection pressure among males against ever finding breasts sexually attractive, since it would lead to their pursuing infertile women whom they could not possibly impregnate. As a consequence, there would be strong selection against a female ever developing permanent breasts, since it would result in her being perceived as currently infertile and hence unattractive to males.

How then did breasts ever make the switch to a sexually attractive, sexually-selected ornament? This is what George Francis, at his blog, ‘Anglo Reaction’, terms the breast paradox.[12]

Morris does not address, nor seemingly even recognise, this not insignificant problem. However, he does suggest that two other traits that are, among primates, unique to humans may have facilitated the process. 

Our so-called nakedness (i.e. relative hairlessness as compared to other mammals), the trait that furnished Morris’s book with its title, and Morris himself with his preferred name for our species, is the first of these traits. 

“Swollen breast-patches in a shaggy-coated female would be far less conspicuous as signalling devices, but once the hair has vanished they would stand out clearly” (p70-1). 

Secondly, Morris argues that our bipedalism (i.e. the fact we walk on two legs) and resulting vertical posture, necessarily put the female reproductive organs out of sight underneath a woman when she adopts a standing position, and hence generally out of the sight of potential mates. There was therefore, Morris suggests, a need for some frontal sexual-signaling. 

This, he argues, was further necessitated by what he argues is our species’ natural preference for ventro-ventral (i.e. missionary position) intercourse. 

In particular, Morris argues that human female breasts evolved in order to mimic the appearance of the female buttocks, a form of what he terms ‘self-mimicry’. 

“The protuberant, hemispherical breasts of the female must surely be copies of the fleshy buttocks” (p76). 

Everted Lips 

Interestingly, he makes a similar argument in respect of another trait of humans not shared by other extant primates – namely, our everted lips.

The word ‘everted’ refers to the fact that our lips are turned outwards, as is easily perceived by comparing human lips with the much thinner-appearing lips of our closest non-human relatives.

Again, this seems intuitively plausible, since, like female breasts, lips do indeed seem to be a much-sexualized part of the human anatomy, at least in western societies, and in at least some non-western cultures as well, if erotic art is to be taken as evidence.[13]

These everted lips, he argues, evolved to mimic the appearance of the female labia. Again, as with breasts, this was supposedly required because our bipedalism and resulting posture put the female genitals out of sight of most males.

As with Morris’s idea that female breasts evolved to mimic the appearance of female buttocks, the idea that our lips, and women’s use of lipstick, is designed to imitate the appearance of the female sexual organs has been much mocked.[14]

However, the similarity in appearance of the labia and human lips can hardly be doubted. After all, it is even attested to in the very etymology of the word ‘labia’, which derives from the Latin word for the lips. 

Of course, everted lips reach their most extreme form, among extant sub-species of human, among black Africans. This, Morris argues, is because: 

“If climatic conditions demand a darker skin, then this will work against the visual signalling capacity of the lips by reducing their colour contrast. If they really are important as visual signals, then some kind of compensating development might be expected, and this is precisely what seems to have occurred, the negroid lips maintaining their conspicuousness by becoming larger and more protuberant. What they have lost in colour contrast, they have made up for in size and shape” (p69-70).

Unfortunately, however, if we look at other relatively dark-skinned, but non-Negroid, populations of humans, the theory receives, at best, only partial support.

On the one hand, Australian Aboriginals, another dark-skinned but unrelated group, do indeed tend to have quite large lips. However, these lips are not especially everted.

On the other hand, however, the dark-skinned Dravidian peoples of South India are not generally especially large-lipped, but are rather quite Caucasoid in facial morphology. Indeed, they, like the generally lighter-complexioned, Indo-European speaking, ‘Aryan’ populations of North India, were generally (but not always) classified as ‘Caucasoid’ by most early-twentieth century racial anthropologists.

At any rate, rejecting the politically-incorrect notion that black Africans are, as a race, somehow more primitive than other humans, Morris instead emphasizes the fact that, in respect of this trait (namely, everted lips), they are actually the most differentiated from non-human primates.  

Thus, all humans, compared to non-human primates, have everted lips, but black African lips are the most everted. Therefore, Morris concludes, using the word ‘primitive’ in its special phylogenetic sense: 

“Anatomically, these negroid characters do not appear to be primitive, but rather represent a positive advance in the specialization of the lip region” (p70).

In other words, whereas whites and Asians may be more advanced than blacks when it comes to intelligence, brain-size, science, technology and building civilizations, when it comes to everted lips, black Africans have us all beaten! 

Female Orgasm

Morris also discusses the function of the female orgasm, a topic which has subsequently been the subject of much speculation and no little controversy among evolutionists.  

Again, Morris suggests that humans’ unusual vertical posture, brought on by our bipedal means of locomotion, may have been central to the evolution of this trait. 

Thus, if a female were to walk off immediately after sexual intercourse had occurred, then: 

“Under the simple influence of gravity the seminal fluid would flow back down the vaginal tract and much of it would be lost” (p79).  

This obviously makes successful impregnation less likely. As a result, Morris concludes: 

“There is therefore a great advantage in any reaction that tends to keep the female horizontal when the male ejaculates and stops copulating” (p79). 

The chief adaptive function of the female orgasm therefore, according to Morris, is the tiredness, and perhaps post-coital tristesse, that immediately follows orgasm, and motivates the female experiencing these emotions to remain in a horizontal position even after intercourse has ended, and hence retain the male ejaculate within her reproductive tract. 

“The violent response of female orgasm, leaving the female sexually satiated and exhausted has precisely this effect” (p79).[15]

However, there are several problems with Morris’s theory, the first being that it predicts that female orgasm should be confined to humans, since, at least among extant primates, we represent the only bipedal ape.

Morris does indeed argue that the female orgasm is, like our nakedness, bipedal locomotion and large brains, an exclusively human trait, describing how, among most, if not all, non-human primates: 

“At the end of a copulation, when the male ejaculates and dismounts, the female monkey shows little sign of emotional upheaval and usually wanders off as if nothing had happened” (p79). 

Unfortunately for Morris’s theory, however, evidence has subsequently accumulated that some non-human (and non-bipedal) female primates do indeed seem to sometimes experience responses seemingly akin to orgasm during copulation. 

As professor of philosophy Elizabeth Lloyd relates in her book The Case of the Female Orgasm:

“There is robust evidence—developed since Morris wrote—that some nonhuman primate females do have orgasm. The best evidence comes from experiments in which stumptail macaques were wired up so that their heart and respiration rates and the muscle contractions in their uteruses or vaginas could be measured electronically… previous observations by Suzanne Chevalier-Skolnikoff… showed a ‘naturally occurring complete orgasmic behavioral pattern for female stumptails’. She documented three occasions on which a female mounting another female (rubbing her genitals against the back of the mounted female) displayed all the behavioral manifestations of male stumptail orgasm and ejaculation” (The Case of the Female Orgasm: p54-5).

Thus, Alan Dixson reports: 

“Female orgasm is not confined to Homo sapiens. Putatively homologous responses [have] been reported in a number of non-human primates, including stump-tail and Japanese Macaques, rhesus monkeys and chimpanzees… Pre-human ancestors of Homo sapiens, such as the australopithecines, probably possessed a capacity to exhibit female orgasm, as do various extant ape and monkey species. The best documented example concerns the stump tailed macaque (Macaca arctoides), in which orgasmic uterine contractions have been recorded during female-female mounts… as well as during copulation… De Waal… estimates that female stump-tails show their distinctive ‘climax face’ (which correlates with the occurrence of uterine contractions) once in every six copulations. Vaginal spasms were noted in two female rhesus monkeys as a result of extended periods of stimulation (using an artificial penis) by an experimenter… Likewise, a female chimpanzee exhibited rhythmical vaginal contractions, clitoral erection, limb spasms, and body tension in response to manual stimulation of its genitalia… Masturbatory behaviour, accompanied by behavioural and physiological responses indicative of orgasm, has also been noted in Japanese macaques… and chimpanzees” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Thus, in relation to Morris’s theory, Dixson concludes that the theory lacks “comparative depth” because: 

Monkey and apes exhibit female orgasm in association with dorso-ventral copulatory postures and an absence of post-mating rest periods” (Sexual Selection and the Origins of Human Mating Systems: p77). 

Certainly, female orgasm, unlike male orgasm, is hardly a prerequisite for successful impregnation. 

Thus, the American physician Robert Latou Dickinson, in his book Human Sex Anatomy (1933), reports that, in a study of a thousand women who attended his medical practice afflicted with so-called ‘frigidity’ (i.e. they were incapable of orgasmic response during intercourse):

The frigid were not notably infertile, having the expected quota of living children, and somewhat less than the average incidence of sterility” (Human Sex Anatomy: p92). 

Further problems with Morris’s theory are identified by Elisabeth Lloyd in The Case of the Female Orgasm: Bias in the Science of Evolution, the only book-length treatment of the topic of the evolution of the female orgasm.

In particular, unlike human males, women do not, in general, appear to experience sensations of tiredness immediately following orgasm. On the contrary, she reports:

States of sleepiness and exhaustion [experienced following orgasm] are, in fact, predominantly true for men but not for women” (The Case of the Female Orgasm: p52).

Indeed, she quotes feminist sexologist Shere Hite as reporting, in her famous Hite Report, that the most common post-orgasmic sensations reported by women were “wanting to be close, and ‘feeling strong and wide awake, energetic and alive’”, both of which reactions “represent continued arousal” (Ibid.).

Thus, she reports:

A sizable proportion of women are not ‘satiated and exhausted’ by orgasm but, rather, energized and aroused. An ‘energized’ woman seems less rather than more likely to lie down” (The Case of the Female Orgasm: p57).

This, in turn, suggests that a female who had experienced orgasm during intercourse would be likely to lose more semen through the force of gravity than a woman who had not experienced orgasm, since, if a person is “energized and aroused”, they are supposedly more, not less, likely to stand up and move around.

Finally, Lloyd repeats the familiar feminist factoid (based on the Kinsey data) that women are actually most likely to achieve orgasm during intercourse when using sexual positions where the female partner is on top, where, again, gravitational forces would presumably work against successful conception, writing:

Given that a relatively low percentage of women have orgasms during intercourse, and that of those who do, a high percentage have them in the superior position, it seems more likely that the occurrence of female orgasm would have the reverse gravitational effect from the one that Morris describes” (The Case of the Female Orgasm: p57).

In conclusion, therefore, the bulk of the evidence seems incompatible with Morris’s superficially plausible gravitational theory of the evolution of the female orgasm, and it must be rejected.

Why then did the female orgasm evolve, not only in humans, but apparently also in other species of primate, if not other mammals?

In the years since the first publication of Morris’s book, various other theories for the evolution of the female orgasm have been developed by evolutionists.

However, as argued by Donald Symons in his groundbreaking The Evolution of Human Sexuality (which I have reviewed here), the most parsimonious theory of the evolution of female orgasm remains that it represents simply a non-adaptive byproduct of male orgasm, which is, of course, itself adaptive (see Sherman 1989; The Case of the Female Orgasm: Bias in the Science of Evolution; see also my discussion here).

The female orgasm and clitoris thus represent, if you like, the female equivalent of male nipples – only more fun.

Hymen

Interestingly, Morris also hypothesizes regarding the evolutionary function of another peculiarity of human female reproductive anatomy which, in contrast to the controversy regarding the evolutionary function, if any, of the female orgasm and clitoris (and of the female breasts), has received surprisingly scant attention from evolutionists – namely, the hymen.

In most mammals, Morris reports, “it occurs as an embryonic stage in the development of the urogenital system” (p82). However, only in humans, he reports, is it, when not ruptured, retained into adulthood. 

Regarding the means by which it evolved, the trait is then, Morris concludes, like our large brains, upright posture and hairlessness, “part of the naked ape’s neoteny” (p82). 

However, as with our hairlessness, neoteny is only the means by which this trait was retained into adulthood among humans, not the evolutionary reason for its retention.

In other words, he suggests, the hymen, like other traits retained into adulthood among humans, must serve some evolutionary function. 

What is this evolutionary function? 

Morris suggests that, by making first intercourse painful for females, it deters young women from engaging in intercourse too early, and hence risking pregnancy, without first entering a relationship (‘pair-bond’) of sufficient stability to ensure that male parental investment, and provisioning, will be forthcoming (p73). 

However, the problem with the theory is that the pain experienced during intercourse obviously occurs rather too late to deter first intercourse, because, by the time this pain is experienced, intercourse has already occurred. 

Of course, given our species’ unique capacity for speech and communication, the pain experienced during first intercourse could be communicated to young virginal women through conversation with other non-virginal women who had already experienced first intercourse.  

However, this would be an unreliable method of inducing fear and avoidance regarding first intercourse, especially given the sort of taboos regarding discussion of sexual activities which are common in many cultures. 

At any rate, why would natural, or sexual, selection not instead simply directly select for fear and anxiety regarding first intercourse – i.e. for a psychological rather than a physiological adaptation?

After all, as evolutionary psychologists and sociobiologists have convincingly demonstrated, our psychology is no less subject to natural selection than is our physiology. 

Although, as already noted, the evolutionary function, if any, of the female hymen has received surprisingly little attention from evolutionists, I can myself independently formulate at least three alternative hypotheses regarding the evolutionary significance of the hymen. 

First, it may have evolved among humans as a means of advertising to prospective suitors a prospective bride’s chastity, and hence reassuring the suitor of the paternity of offspring.  

This would, in turn, increase the perceived attractiveness of the female in question, and help secure her a better match with a higher-status male, who would then also be more willing to invest in offspring whose paternity is not in doubt, and hence increase her own reproductive success

Thus, it is notable that, in many cultures, prospective brides are inspected for virginity, a so-called virginity test, sometimes by the prospective mother-in-law or another older woman, before being considered marriageable and accepted as brides. 

Alternatively, and more prosaically, the hymen may simply function to protect against infection, by preventing dirt and germs from entering a woman’s body by this route. 

This, of course, raises the question of why, at least according to Morris, the trait is retained into sexual maturity only among humans.

Actually, however, as with his claim that the female orgasm is unique to humans, Morris’s claim that only humans retain the hymen into sexual maturity is disputed by other sources. Thus, for example, Catherine Blackledge reports: 

Hymens, or vaginal closure membranes or vaginal constrictions, as they are often referred to, are found in a number of mammals, including llamas, guinea-pigs, elephants, rats, toothed whales, seals, dugongs, and some primates, including some species of galagos, or bushbabys, and the ruffed lemur” (The Story of V: p145). 

Finally, perhaps even more prosaically, the hymen may simply represent a nonadaptive vestige of the developmental process, or a nonadaptive by-product of our species’ neoteny.

This would be consistent with the apparent variation with which the trait presents itself, suggesting that it has not been subject to strong selection pressure that has weeded out suboptimal variations. 

This then would appear to be the most parsimonious explanation. 

Zoological Nomenclature 

The works on human ethology of both Robert Ardrey and Konrad Lorenz attracted much attention and no little controversy in their day. Indeed, they perhaps attracted even more controversy than Morris’s own ‘The Naked Ape’, not least because they tended to place greater emphasis on humankind’s capacity, and alleged innate proclivity, towards violence. 

In contrast, Morris’s own work, placing less emphasis on violence, and more on sex, perhaps jibed better with the zeitgeist of the era, namely the 1960s, with its hippy exhortations to ‘make love not war’. 

Yet, although all these works were first published at around the same time, the mid- to late-sixties (though Ardrey continued publishing books on this subject into the 1970s), Morris’s ‘The Naked Ape’ seems to be the only one of these books that remains widely read, widely known and still in print to this day. 

Partly, I suspect, this reflects its brilliant and provocative title, which works on several levels, scientific and literary.  

Morris, as we have seen, justifies referring to humans by this perhaps unflattering moniker on zoological grounds.  

Certainly, he acknowledges that humans possess many other exceptional traits that distinguish us from all other extant apes, and indeed all other extant mammals. 

Thus, we walk on two legs, use and make tools, have large brains and communicate via a spoken language. Thus, the zoologist could refer to us by any number of descriptors – “the vertical ape, the tool-making ape, the brainy ape” are a few of Morris’s own suggestions (p41).  

But, he continues, adopting the disinterested detachment of the proverbial alien zoologist: 

These were not the first things we noticed. Regarded simply as a zoological specimen in a museum, it is the nakedness that has the immediate impact” (p41). 

This name has, Morris observes, several advantages, including “bringing [humans] into line with other zoological studies”, emphasizing the zoological approach, and hence challenging human vanity. 

Thus, he cautions: 

The naked ape is in danger of being dazzled by [his own achievements] and forgetting that beneath the surface gloss he is still very much a primate. (‘An ape’s an ape, a varlet’s a varlet, though they be clad in silk or scarlet’). Even a space ape must urinate” (p23). 

Thus, the title also works on another, metaphoric, level, which likewise contributes to its power. 

The title ‘Naked Ape’ promises to reveal, if you like, the ‘naked’ truth about humanity—to strip humanity down in order to reveal the naked truth that lies beneath the façade and finery. 

Morris’s title reduces us to a zoological specimen in the laboratory, stripped naked on the laboratory table, for the purposes of zoological classification and dissection. 

Interestingly, we humans have historically liked to regard ourselves as superior to other animals, in part, precisely because we are the only ones who clothe ourselves. 

Thus, besides Adam and Eve, it was only primitive tropical savages who went around in nothing but a loincloth, and they were disparaged as uncivilized precisely on this account. 

Yet even these tropical savages wore loincloths. Indeed, clothing, in some form, is sometimes claimed to be a human universal. 

Animals, on the other hand, go completely unclothed – or so we formerly believed. 

But Morris turns this reasoning on its head. In the zoological sense, it is humans who are the naked ones, being largely bereft of hair sufficient to cover most of our bodies. 

Stripping humanity down in this way, Morris reveals the naked truth that, beneath the finery and façade of civilization, we are indeed an animal, an ape, and a naked one at that. 

The power of Morris’s chosen title ensures that, even if, like all science, his book has quickly dated, his title alone has stood the test of time and will, I suspect, be remembered, and employed as a descriptor of the human species, long after Morris himself, and the books he authored, are forgotten and cease to be read. 

Endnotes

[1] In fact, as I discuss in a later section of this review, it is possible that the female hymen evolved through just such a process, namely as a means of advertising female virginity and premarital chastity (and perhaps implying post-marital fidelity), and hence as a paternity assurance mechanism, which benefited the female by helping secure male parental investment, provisioning and hypergamy.

[2] Morris is certainly right that anthropologists have overemphasized the exotic and unfamiliar (“bizarre mating customs, strange kinship systems, or weird ritual procedures”, as Morris puts it). Partly, this is simply because, when first encountering an alien culture, it is the unfamiliar differences that invariably stand out, whereas the similarities are often the very things which we tend to take for granted.
Thus, for example, on arriving in a foreign country, we are often struck by the fact that everyone speaks an unintelligible foreign language. However, we often take for granted the more remarkable fact that all cultures around the world do indeed have a spoken language, and also that all languages supposedly even share in common a universal grammar.
However, anthropologists have also emphasized the alien and bizarre for other reasons, not least to support theories of radical cultural malleability, sometimes almost to the verge of outright fabrication (e.g. Margaret Mead’s studies in Samoa).

[3] It is true that there has been some significant human evolution since the dawn of agriculture, notably the evolution of lactase persistence in populations with a history of dairy agriculture. Indeed, as Cochran and Harpending emphasize in their book The 10,000 Year Explosion, far from evolution having stopped at the dawn of agriculture or the rise of ‘civilization’, it has in fact sped up, as a natural reflection of the rapid change in environmental conditions that resulted. Thus, as Nicholas Wade concludes in A Troublesome Inheritance, much human evolution has been “recent, copious and regional”, leading to substantial differentiation between populations (i.e. race differences), including in psychological traits such as intelligence. Nevertheless, despite such tinkering, the core adaptations that identify us as a species were undoubtedly molded in ancient prehistory, and are universal across the human species.

[4] However, it is indeed important to recognize that the lifestyle of our own ancestors was not necessarily identical to that of those few extant hunter-gatherer groups that have survived into modern times, not least because the latter tend to be concentrated in marginal and arid environments (e.g. the San people of the Kalahari Desert, the Eskimos of the Arctic region, the Aboriginal Australians of the Australian outback), with those formerly inhabiting more favorable environments having either themselves transitioned to agriculture or else been displaced or absorbed by more advanced invading agriculturalists with higher population densities and superior weapons and other technologies.

[5] This passage is, of course, sure to annoy feminists (always a good thing), and is likely to be disavowed even by many modern evolutionary psychologists since it relies on a rather crude analogy. However, Morris acknowledges that, since “‘hunting’… has now been replaced by ‘working’”: 

The males who set off on their daily working trips are liable to find themselves in heterosexual groups instead of the old all-male parties. All too often it [the pair bond] collapses under the strain” (p81). 

This factor, Morris suggests, explains the prevalence of marital infidelity. It may also explain the recent hysteria, and accompanying witch-hunts, regarding so-called ‘sexual harassment’ in the workplace.
Relatedly, and also likely to annoy feminists, Morris champions the then-popular man the hunter theory of hominid evolution, which posited that the key development in human evolution, and the development of human intelligence in particular, was the switch from a largely, if not wholly, herbivorous diet and lifestyle, to one based largely on hunting and the consumption of meat. On this view, it was the cognitive demands that hunting placed on humans that selected for increased intelligence among humans, and also the nutritional value of meat that made possible increases in highly metabolically expensive brain tissue.
This theory has since fallen into disfavor. This seems to be primarily because it gives the starring role in human evolution to men, since men do most of the hunting, and relegates women to a mere supporting role. It hence runs counter to the prevailing feminist zeitgeist.
The main substantive argument given against the ‘man the hunter theory’ is that other carnivorous mammals (e.g. lions, wolves) adapted to carnivory without any obvious similar increase in brain-size or intelligence. Yet Morris actually has an answer to this objection.
Our ancestors, fresh from the forests, were relative latecomers to carnivory. Therefore, Morris contends, had we sought to compete with tigers and wolves by mimicking them (i.e. growing our fangs and claws instead of our brains) we would inevitably have been playing a losing game of evolutionary catch-up. 

Instead, an entirely new approach was made, using artificial weapons instead of natural ones, and it worked” (p22).

However, this theory fails to explain how female intelligence evolved. One possibility is that increases in female intelligence are an epiphenomenal byproduct of selection for male intelligence, rather like the female equivalent of male nipples.
On this view, men would be expected to have higher intelligence than women, just as male nipples (and breasts) are smaller than female nipples, and the male penis is bigger than the female clitoris. That adult men have greater intelligence than adult women is indeed the conclusion of a recent controversial theory (Lynn 1999). However, the difference, if it even exists (which remains unclear), is very small in magnitude, certainly much smaller than the relative difference in size between male and female breasts. There is also evidence that the sexual division of labour between hunting and gathering led to sex differences in spatio-visual intelligence (Eals & Silverman 1994).

[6] Another difference from modern evolutionary psychologists derives from Morris’s ethological approach, which involves a focus on human-typical behaviour patterns. For example, he discusses the significance of body language and facial expressions, such as smiling, which is supposedly homologous with an appeasement gesture (baring clenched teeth, aka a ‘fear grin’) common to many primates, and staring, which represents a form of threat across many species.

[7] Interestingly, however, he acknowledges that this statement does not apply to all human races. Thus, he observes: 

Negroes have undergone a real as well as an apparent hair loss” (p42). 

Thus, it seems blacks, unlike Caucasians, have fewer hairs on their body than do chimpanzees. This fact is further evidence that, contrary to the politically correct orthodoxy, race differences are real and important, though this fact is, of course, played down by Morris and other popular science writers.

[8] Edward O Wilson, for example, in Sociobiology: The New Synthesis (which I have reviewed here) dismisses aquatic ape theory, as then championed by Elaine Morgan in The Descent of Woman, as feminist-inspired pop-science “contain[ing] numerous errors” and as being “far less critical in its handling of the evidence than the earlier popular books”, including, incidentally, that of Morris, who is mentioned by name in the same paragraph (Sociobiology: The New Synthesis: p29).

[9] Actually, I suspect this infamous quotation may be apocryphal, or at best a misconstrued joke. Certainly, while I think Rushton’s theory of race differences (which he calls ‘differential K theory’) is flawed, as I explain in my review of his work, there is nothing in it to suggest a direct trade-off between penis-size and brain-size. Indeed, one problem with Rushton’s theory, or at least his presentation of it, is that he never directly explains how traits such as penis-size actually relate to r/K selection in the first place.
The quotation is usually traced to a hit piece in Rolling Stone, a leftist hippie rag with a notorious reputation for low editorial standards, misinformation and so-called ‘fake news’. However, Jon Entine, in his book on race differences in athletic ability, instead traces it to a supposed interview between Rushton and Geraldo Rivera broadcast on the ‘Geraldo’ show in 1989 (Taboo: Why Black Athletes Dominate Sports: p74).
Interestingly, one study has indeed reported that there is a “demonstrated negative evolutionary relationship”, not between brain-size and penis-size, but rather between brain-size and testicle size, if only on account of the fact that both contain “metabolically expensive tissues” (Pitnick et al 2006).

[10] Interestingly, Baker and Bellis attribute race differences in penis-size, not to race differences in brain-size, but rather to race differences in birth weight. Thus, they conclude:

Racial differences in size of penis (Mongoloid < Caucasoid < Negroid…) reflects racial differences in birth weight… and hence presumably, racial differences in size of vagina” (Human Sperm Competition: p171). 

[11] In other words, a male silverback gorilla may mate with the multiple females in his harem, but each of the females in his harem likely has sex with only one male, namely that silverback. This means that sperm from rival males are rarely simultaneously present in the same female’s oviduct, resulting in minimal levels of sperm competition, which is known to select for larger testicles in particular, and often for more elaborate penes as well.

[12] An alternative theory for the evolution of permanent fatty breasts in women is that they function analogously to camel humps, i.e. as a storehouse of nutrients to guard against, and provide reserves in the event of, future scarcity or famine. On this view, the sexually dimorphic presentation (i.e. the fact that fatty breasts are largely restricted to women) might reflect the caloric demands of pregnancy. Indeed, this might explain why women have higher levels of fat throughout their bodies. (For a recent review of rival theories for human breast evolution see Pawłowski & Żelaźniewicz 2021.)

[13] However, to be pedantic, this phraseology is perhaps problematic, since to say that breasts and lips are ‘sexualized’ in western, and at least some non-western, cultures implicitly presupposes that they are not already inherently sexual parts of our anatomy by virtue of biology, which is, of course, precisely what Morris is arguing. 

[14] For example, if I recall correctly, the extremely annoying, left-wing, 1980s-era British comedian Ben Elton once commented in one of his stand-up routines that the male anthropologist (i.e. Morris, who is actually not an anthropologist, at least not by training) who came up with this idea (namely, that lips and lipstick mimicked the appearance of the labia) had obviously never seen a vagina in his life. He also, if I recall correctly, attributed this theory to the supposed male-dominated, androcentric nature of the field of anthropology – an odd notion, not only because Morris is not an anthropologist by training, but also because cultural anthropology is, in fact, one of the most leftist-dominated, feminist-infested, politically correct fields in the whole of academia, this side of ‘gender studies’, which, in the present, politically correct world of academia, is saying a great deal.

[15] This theory is rather simpler, and has hence always struck me as more plausible, than the more elaborate, but also more widely championed so-called ‘uterine upsuck hypothesis’, whereby uterine contractions experienced by women during orgasm are envisaged as somehow functioning to aid the transfer of semen deeper into the cervix. This idea is largely based on a single study involving two experiments on a single human female subject (Fox et al 1970). However, two other studies failed to produce any empirical support for the theory (Grafenberg 1950; Masters & Johnson 1966). Baker and Bellis’s methodologically problematic work on what they call ‘flowback’ provides, at best, ambivalent evidence (Baker & Bellis 1993). For detailed critique, see Dixson’s Sexual Selection and the Origins of Human Mating Systems: p74-6.

References 

Baker & Bellis (1993) Human sperm competition: ejaculate manipulation by females and a function for the female orgasm. Animal Behaviour 46:887–909. 
Bowman EA (2008) Why the human penis is larger than in the great apes. Archives of Sexual Behavior 37(3): 361. 
Eals & Silverman (1994) The Hunter-Gatherer theory of spatial sex differences: Proximate factors mediating the female advantage in recall of object arrays. Ethology and Sociobiology 15(2): 95-105.
Fox et al (1970) Measurement of intra-vaginal and intra-uterine pressures during human coitus by radio-telemetry. Journal of Reproduction and Fertility 22:243–251. 
Gallup et al (2004). The human penis as a semen displacement device. Evolution and Human Behavior, 24, 277–289 
Gallup & Burch (2004). Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology 2:12-23. 
Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: A preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences 38:749-763 
Goetz et al (2007) Sperm Competition in Humans: Implications for Male Sexual Psychology, Physiology, Anatomy, and Behavior. Annual Review of Sex Research 18:1. 
Grafenberg (1950) The role of urethra in female orgasm. International Journal of Sexology 3:145–148. 
Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures, Evolution and Human Behavior 38(2): 217–226. 
Lynn (1999) Sex differences in intelligence and brain size: A developmental theory. Intelligence 27(1):1-12.
Manning et al (1997) Breast asymmetry and phenotypic quality in women, Ethology and Sociobiology 18(4): 223–236. 
Masters & Johnson (1966) Human Sexual Response (Boston: Little, Brown, 1966).
Mautz et al (2013) Penis size interacts with body shape and height to influence male attractiveness, Proceedings of the National Academy of Sciences 110(17): 6925–30.
Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success, Ethology and Sociobiology 16(3): 207-219. 
Pawłowski & Żelaźniewicz (2021) The evolution of perennially enlarged breasts in women: a critical review and a novel hypothesis. Biological reviews of the Cambridge Philosophical Society 96(6): 2794-2809. 
Pitnick et al (2006) Mating system and brain size in bats. Proceedings of the Royal Society B: Biological Sciences 273(1587): 719-24. 

Pierre van den Berghe’s ‘The Ethnic Phenomenon’: Ethnocentrism and Racism as Nepotism Among Extended Kin

Pierre van den Berghe, The Ethnic Phenomenon (Westport: Praeger 1987) 

Ethnocentrism is a pan-human universal. Thus, a tendency to prefer one’s own ethnic group over and above other ethnic groups is, ironically, one thing that all ethnic groups share in common. 

In ‘The Ethnic Phenomenon’, pioneering sociologist-turned-sociobiologist Pierre van den Berghe attempts to explain this universal phenomenon. 

In the process, he not only provides a persuasive ultimate evolutionary explanation for the universality of ethnocentrism, but also produces a remarkable synthesis of scholarship that succeeds in incorporating virtually every aspect of ethnic relations as they have manifested themselves throughout history and across the world, from colonialism, caste and slavery to integration and assimilation, within this theoretical and explanatory framework. 

Ethnocentrism as Nepotism? 

At the core of Pierre van den Berghe’s theory of ethnocentrism and ethnic conflict is the sociobiological theory of kin selection. According to van den Berghe, racism, xenophobia, nationalism and other forms of ethnocentrism can ultimately be understood as kin-selected nepotism, in accordance with biologist William D Hamilton’s theory of inclusive fitness (Hamilton 1964a; 1964b). 

According to inclusive fitness theory (also known as kin selection), organisms evolved to behave altruistically towards their close biological kin, even at a cost to themselves, because close biological kin share genes in common with one another by virtue of their kinship, and altruism towards close biological kin therefore promotes the survival and spread of these genes. 

Van den Berghe extends this idea, arguing that humans have evolved to behave altruistically towards, not only their close biological relatives, but sometimes also their distant biological relatives – namely, members of the same ethnic group as themselves. 

Thus, van den Berghe contends: 

Racial and ethnic sentiments are an extension of kinship sentiments [and] ethnocentrism and racism are… extended forms of nepotism” (p18). 

Thus, while social scientists, and social psychologists in particular, rightly emphasize the ubiquity, if not universality, of in-group preference, namely a preference for and favouring of individuals of the same social group as oneself, they also, in my view, rather underplay the extent to which the group identities which have led to the most conflict, animosity, division and discrimination throughout history and across the world, and are also most apparently impervious to resolution, are ethnic identities.

Thus, divisions such as those between social classes, or the sexes, different generations, or between members of different political factions, or youth subcultures (e.g. between ‘mods’ and ‘rockers’), or supporters of different sports teams, may indeed lead to substantial conflict, at least in the short-term, and are often cited as quintessential exemplars of ‘tribal’ identity and conflict.

Indeed, social psychologists emphasize that individuals evince an in-group preference even in what they refer to as the minimal group situation – namely, where experimental subjects have been assigned to one group or another on the basis of wholly arbitrary, trivial or even entirely fictitious criteria.

However, in the real world, the most violent and intransigent of group conflicts almost invariably seem to be those between ethnic groups – namely groups to which a person is assigned at birth, and where this group membership is passed down in families, from parent to offspring, in a quasi-biological fashion, and where group identity is based on a perception of shared kinship.

In contrast, aspects of group identity that vary even between individuals within a single family, including those that are freely chosen by individuals, tend to be somewhat muted in intensity, perhaps precisely because most people share bonds with close family members of a different group identity.

Thus, there has never, to my knowledge, been a civil war arising from conflict between the sexes, or between supporters of one or another football team.[1]

Ethnic Groups as Kin Groups?

Before reading van den Berghe’s book, I was skeptical regarding whether the degree of kinship shared among co-ethnics would ever be sufficient to satisfy Hamilton’s rule, whereby, for altruism to evolve, the cost of the altruistic act to the altruist, measured in terms of reproductive success, must be outweighed by the benefit to the recipient, also measured in terms of reproductive success, multiplied by the degree of relatedness of the two parties (Brigandt 2001; cf. Salter 2008; see also On Genetic Interests). 
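Hamilton’s rule, as informally stated above, can be written compactly. The following is the standard textbook formulation, not van den Berghe’s own notation:

```latex
% Hamilton's rule: an altruistic act is favored by selection when
\[
  r \, b > c
\]
% where:
%   r = coefficient of relatedness between altruist and recipient
%       (e.g. 0.5 for full siblings, 0.125 for first cousins)
%   b = benefit to the recipient, in units of reproductive success
%   c = cost to the altruist, in the same units
```

Since the relatedness between randomly chosen members of a large modern ethnic group is, as van den Berghe concedes, “extremely tenuous at best”, r here is tiny, and the inequality can be satisfied only where the cost of the ‘altruism’ is correspondingly trivial.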

Thus, Brigandt (2001) takes van den Berghe to task for his formulation of what the latter catchily christens “the biological golden rule”, namely: 

“Give unto others as they are related unto you” (p20).[2]

However, contrary to both critics of his theory (e.g. Brigandt 2001) and others developing similar ideas (e.g. Rushton 2005; Salter 2000), van den Berghe is actually agnostic on the question of whether ethnocentrism is ever actually adaptive in modern societies, where the shared kinship of large nations or ethnic groups is, as van den Berghe himself readily acknowledges, “extremely tenuous at best” (p243). Thus, he concedes: 

“Clearly, for 50 million Frenchmen or 100 million Japanese, any common kinship that they may share is highly diluted … [and] when 25 million African-Americans call each other ‘brothers’ and ‘sisters’, they know that they are greatly extending the meaning of these terms” (p27).[3]

Instead, van den Berghe suggests that nationalism and racism may reflect the misfiring of a mechanism that evolved when our ancestors still lived in small kin-based groups of hunter-gatherers that represented little more than extended families (p35; see also Tooby and Cosmides 1989; Johnson 1986). 

Thus, van den Berghe explains: 

“Until the last few thousand years, hominids interacted in relatively small groups of a few score to a couple of hundred individuals who tended to mate with each other and, therefore, to form rather tightly knit groups of close and distant kin” (p35). 

Therefore, in what evolutionary psychologists now call the environment of evolutionary adaptedness or EEA:

“The natural ethny [i.e. ethnic group] in which hominids evolved for several thousand millennia probably did not exceed a couple of hundred individuals at most” (p24).

Thus, van den Berghe concludes: 

“The primordial ethny is thus an extended family: indeed, the ethny represents the outer limits of that inbred group of near or distant kinsmen whom one knows as intimates and whom therefore one can trust” (p25). 

On this view, ethnocentrism was adaptive when we still resided in such groups, where members of our own clan or tribe were indeed closely biologically related to us, but is often maladaptive in contemporary environments, where our ethnic group may include literally millions of people. 

Another not dissimilar theory has it that racism in particular might reflect the misfiring of an adaptation that uses phenotype matching, in particular physical resemblance, as a form of kin recognition.

Thus, Richard Dawkins in his seminal The Selfish Gene (which I have reviewed here), cautiously and tentatively speculates: 

“Conceivably, racial prejudice could be interpreted as an irrational generalization of a kin-selected tendency to identify with individuals physically resembling oneself, and to be nasty to individuals different in appearance” (The Selfish Gene: p100). 

Certainly, van den Berghe takes pains to emphasize that ethnic sentiments are vulnerable to manipulation – not least by exploitative elites who co-opt kinship terms such as ‘motherland’, ‘fatherland’ and ‘brothers-in-arms’ to encourage self-sacrifice, especially during wartime (p35; see also Johnson 1987; Johnson et al 1987; Salmon 1998). 

However, van den Berghe cautions, “Kinship can be manipulated but not manufactured [emphasis in original]” (p27). Thus, he observes how: 

“Queen Victoria could cut a motherly figure in England; she even managed to proclaim her son the Prince of Wales; but she could never hope to become anything except a foreign ruler of India; [while] the fiction that the Emperor of Japan is the head of the most senior lineage descended from the common ancestor of all Japanese might convince the Japanese peasant that the Emperor is an exalted cousin of his, but the myth lacks credibility in Korea or Taiwan” (p62-3). 

This suggests that the European Union, while it may prove successful as a customs union, single market and even an economic union, and while integration in other non-economic spheres may also prove a success, will likely never command the sort of loyalty and allegiance that a nation-state holds over its people, including, sometimes, the willingness of men to fight and lay down their lives for its sake. This is because its members come from many different cultures and ethnicities, and indeed speak many different languages. 

For van den Berghe, national identity cannot be rooted in anything other than a perception of shared ancestry or kinship. Thus, he observes: 

“Many attempts to adopt universalistic criteria of ethnicity based on legal citizenship or acquisition of educational qualifications… failed. Such was the French assimilation policy in her colonies. No amount of proclamation of Algérie française could make it so” (p27). 

Thus, so-called civic nationalism, whereby national identity is based, not on ethnicity, but rather, supposedly, on a shared commitment to certain common values and ideals (democracy, the ‘rule of law’ etc.), as encapsulated by the notion of America as a ‘proposition nation’, is, for van den Berghe, a complete non-starter. 

Yet this is today the sole basis for national identity and patriotic feeling that is recognised as legitimate, not only in the USA, but also in all other contemporary western polities, where any assertion of racial nationalism or a racially-based or ethnically-based national identity is, at least for white people, anathema and beyond the pale. 

Moreover, due to the immigration policies of previous generations of western political leaders, policies that largely continue today, all contemporary western polities are now heavily multi-ethnic and multi-racial, such that any sense of national identity based on race or ethnicity is arguably untenable, since it would necessarily exclude a large proportion of their populations.

On the other hand, however, van den Berghe’s reasoning also suggests that the efforts of some white nationalists to construct a pan-white, or pan-European, ethnic identity are, like the earlier efforts of Japanese imperialist propagandists to create a pan-Asian identity, and of Marcus Garvey’s UNIA to construct a pan-African identity, likely to end in failure.[4]

Racism vs Ethnocentrism 

Whereas ethnocentrism is therefore universal, adaptive and natural, van den Berghe denies that the same can be said for racism: 

“There is no evidence that racism is inborn, but there is considerable evidence that ethnocentrism is” (p240). 

Thus, van den Berghe concludes: 

“The genetic propensity is to favor kin, not those who look alike” (p240).[5]

As evidence, he cites:

“The ease with which parental feelings take precedence over racial feeling in cases of racial admixture” (p240). 

In other words, fathers who sire mixed-race offspring with women of other races, and the women of other races with whom they father such offspring, often seemingly love and care for the resulting offspring just as intensely as do parents whose offspring are of the same race as themselves.[6]

Thus, cultural, rather than racial, markers are typically adopted to distinguish ethnic groups (p35). These include: 

  • Clothing (e.g. hijabs, turbans, skullcaps);
  • Bodily modification (e.g. tattoos, circumcision); and 
  • Behavioural criteria, especially language and dialect (p33).

Bodily modification and language represent particularly useful markers because they are difficult to fake: bodily modification because it is permanent and hence represents a costly commitment to the group (in accordance with Zahavi’s handicap principle), and language/dialect because this is usually acquirable only during a critical period in childhood, after which it is generally not possible to achieve fluency in a second language without retaining a noticeable accent. 

In contrast, racial criteria as a basis for group affiliation are, van den Berghe reports, actually quite rare: 

“Racism is the exception rather than the rule in intergroup relations” (p33). 

Racism is also a decidedly modern phenomenon. 

This is because, prior to recent technological advances in transportation (e.g. ocean-going ships, aeroplanes), members of different races (i.e. groups distinguishable on the basis of biologically inherited physiological traits such as skin colour, nose shape, hair texture etc.) were largely separated from one another by the very geographic barriers (e.g. deserts, oceans, mountain ranges) that reproductively isolated them from one another and hence permitted their evolution into distinguishable races in the first place. 

Moreover, when different races did make contact, then, in the absence of strict barriers to exogamy and miscegenation (e.g. the Indian caste system), racial groups typically interbred with one another and hence became phenotypically indistinguishable from one another within just a few generations. 

This, van den Berghe explains, is because: 

“Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109). 

Therefore, in the ancestral environment for which our psychological adaptations are designed (i.e. before the development of ships, aeroplanes and other methods of long-distance intercontinental transportation), different races did not generally coexist in the same locale. As a result, van den Berghe concludes: 

“We have not been genetically selected to use phenotype as an ethnic marker, because, until quite recently, such a test would have been an extremely inaccurate one” (p240). 

Humans, then, have simply not had sufficient time to have evolved a domain-specific ‘racism module’ as suggested by some researchers.[7]

Racism is therefore, unlike ethnocentrism, not an innate instinct, but rather “a cultural invention” (p240). 

However, van den Berghe rejects the fashionable, politically correct notion that racism is “a western, much less a capitalist monopoly” (p32). 

On the contrary, racism, while not innate, is not a unique western invention, but rather a recurrent reinvention, which almost invariably arises where phenotypically distinguishable groups come into contact with one another, if only because: 

“Genetically inherited phenotypes are the easiest, most visible and most reliable predictors of group membership” (p32).

For example, van den Berghe describes the relations between the Tutsi, Hutu and Pygmy Twa of Rwanda and neighbouring regions as “a genuine brand of indigenous racism” which, he argues, developed quite independently of any western colonial influence (p73).[8]

Moreover, where racial differences are the basis for ethnic identity, the result is, van den Berghe claims, ethnic hierarchies that are particularly rigid, persistent and impermeable.

For van den Berghe, this then explains the failure of African-Americans to wholly assimilate into the US melting pot, in stark contrast to successive waves of more recently-arrived European immigrants. 

Thus, van den Berghe observes: 

“Blacks who have been English-speaking for several generations have been much less readily assimilated in both England… and the United States than European immigrants who spoke no English on arrival” (p219). 

Language barriers, after all, often break down within a generation. 

As Judith Harris emphasizes in support of peer group socialization theory, the children of immigrants whose parents are not at all conversant in the language of their host culture nevertheless typically grow up to speak the language of their host culture rather better than they do the first language of their parents, even though the latter was the cradle tongue to which they were first exposed, and first learnt to speak, inside the family home (see The Nurture Assumption: which I have reviewed here). 

As van den Berghe observes: 

“It has been the distressing experience of millions of immigrant parents that, as soon as their children enter school in the host country, the children begin to resist speaking their mother tongue” (p258). 

While displeasing to those parents who wish to pass on their language, culture and traditions to their offspring, this response is wholly adaptive from the perspective of the offspring themselves:  

“Children quickly discover that their home language is a restricted medium that is not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (p258).[9]

However, whereas one can learn a new language, it is not usually possible to change one’s race – the efforts of Rachel Dolezal, Elizabeth Warren, Jessica Krug and Michael Jackson notwithstanding. That said, due to the one-drop rule and the history of miscegenation in America, passing is sometimes possible (see below). 

Instead, phenotypic (i.e. racial) differences can only be eradicated after many generations of miscegenation, and sometimes, as in countries like the USA and Brazil, not even then. 

Meanwhile, van den Berghe observes, culinary differences are often the last aspect of immigrant culture to resist assimilation. However, increasingly even this becomes only a ‘ceremonial’ difference reserved for family gatherings (p260). 

Thus, van den Berghe surmises, Italian-Americans probably eat hamburgers as often as Americans of any other ethnic background, but at family gatherings they still revert to pasta and other traditional Italian cuisine. 

Yet even culinary differences eventually disappear. Thus, in both Britain and America, sausage has almost completely ceased to be thought of as a distinctively German dish (as have hamburgers, originally thought to have been named in reference to the city of Hamburg) and now pizza is perhaps on the verge of losing any residual association with Italians. 

Is Racism Always Worse than Ethnocentrism? 

Yet if racially-based ethnic hierarchies are particularly intransigent and impermeable, they are also, van den Berghe claims, “peculiarly conflict-ridden and unstable” (p33). 

Thus, van den Berghe seems to believe that racial prejudice and animosity tends to be more extreme and malevolent in nature than mere ethnocentrism as exists between different ethnic groups of the same race (i.e. not distinguishable from one another on the basis of inherited phenotypic traits such as skin colour). 

For example, van den Berghe claims that, during World War Two: 

“There was a blatant difference in the level of ferociousness of American soldiers in the Pacific and European theaters… The Germans were misguided relatives (however distant), while the ‘Japs’ or the ‘Nips’ were an entirely different breed of inscrutable, treacherous, ‘yellow little bastards.’ This was reflected in differential behavior in such things as the taking (versus killing) of prisoners, the rhetoric of war propaganda (President Roosevelt in his wartime speeches repeatedly referred to his enemies as ‘the Nazis, the Fascists, and the Japanese’), the internment in ‘relocation camps’ of American citizens of Japanese extraction, and in the use of atomic weapons” (p57).[10]

Similarly, in his chapter on ‘Colonial Empires’, by which he means “imperialism over distant peoples who usually live in noncontiguous territories and who therefore look quite different from their conquerors, speak unrelated languages, and are so culturally alien to their colonial masters as to provide little basis for mutual understanding”, van den Berghe writes: 

“Colonialism is… imperialism without the restraints of common bonds of history, culture, religion, marriage and blood that often exist when conquest takes place between neighbors” (p85). 

Thus, he claims: 

“What makes for the special character of the colonial situation is the perception by the conqueror that he is dealing with totally unrelated, alien and, therefore, inferior people. Colonials are treated as people totally beyond the pale of kin selection” (p85). 

However, I am unpersuaded by van den Berghe’s claim that conflict between more distantly related ethnic groups is always, or even typically, more brutal than that among biologically and culturally more closely related groups. 

After all, even conquests of neighbouring peoples, identical in race, if not always in culture, to the conquering group, are often highly brutal, for example the British in Ireland or the Japanese in Korea and China during the first half of the twentieth century. 

Indeed, many of the most intense and intractable ethnic conflicts are those between neighbours and ethnic kin, who are racially (and culturally) very similar to one another. 

Thus, for example, Catholics and Protestants in Northern Ireland, Greeks and Turks in Cyprus, and Bosnians, Croats, Serbs and Albanians in the Balkans, and even Jews and Palestinians in the Middle East, are all racially and genetically quite similar to one another, and also share many aspects of their culture with one another too. (The same is true, to give a topical example at the time of writing, of Ukrainians and Russians.) However, this has not noticeably ameliorated the nasty, intransigent and bloody conflicts that have been, and continue to be, waged among them.  

Of course, the main reason that most ethnic conflict occurs between close neighbours is because neighbouring groups are much more likely to come into contact, and hence into conflict, with one another, especially over competing claims to land.[11]

Yet these same neighbouring groups are also likely to be related to one another, both culturally and genetically, because of both shared origins and the inevitable history of illicit intermarriage or miscegenation, and cultural borrowings, that inevitably occur even among the most hostile of neighbours.[12]

Nevertheless, the continuation of intense ethnic animosity between ethnic groups who are genetically close to one another seems to pose a theoretical problem, not only for van den Berghe’s theory, but also, to an even greater degree, for Philippe Rushton’s so-called genetic similarity theory (which I have written about here), which argues that conflict between different ethnic groups is related to their relative degree of genetic differentiation from one another (Rushton 1998a; 1998b; 2005). 

It also poses a problem for the argument of political scientist Frank K Salter, who argues that populations should resist immigration by alien immigrants proportionally to the degree to which the alien immigrants are genetically distant from themselves (On Genetic Interests; see also Salter 2002). 

Assimilation, Acculturation and the American Melting Pot 

Since racially-based hierarchies result in ethnic boundaries that are both “peculiarly conflict-ridden and unstable” and also peculiarly rigid and impermeable, van den Berghe controversially concludes: 

“There has never been a successful multiracial democracy” (p189).[13]

Of course, in assessing this claim, we must recognize that ‘success’ is not only a matter of degree, but can also be measured on several different dimensions. 

Thus, many people would regard the USA as the quintessential “successful… democracy”, even though the US has been multiracial, to some degree, for the entirety of its existence as a nation. 

Certainly, the USA has been successful economically, and indeed militarily.

However, the US has also long been plagued by interethnic conflict, and, although successful economically and militarily, has yet to find a way to manage this conflict, especially that between blacks and whites.

The USA is also afflicted with a relatively high rate of homicide and gun crime as compared to other developed economies, as well as low levels of literacy, numeracy and educational attainment. Although it is politically incorrect to acknowledge as much, these problems also likely reflect the USA’s ethnic diversity, in particular its large black underclass.

Indeed, as van den Berghe acknowledges, even societies divided by mere ethnicity rather than race seem highly conflict-prone (p186). 

Thus, assimilation, when it does occur, occurs only gradually, and only under certain conditions, namely when the group which is to be assimilated is “similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (p219). 

Thus, van den Berghe observes: 

“People tend to assimilate and acculturate when their ethny [i.e. ethnic group] is geographically dispersed (often through migration), when they constitute a numerical minority living among strangers, when they are in a subordinate position and when they are allowed to assimilate by the dominant group” (p185). 

Moreover, van den Berghe is careful to distinguish what he calls assimilation from mere acculturation.  

The latter, acculturation, involves a subordinate group gradually adopting the norms, values, language, cultural traditions and folkways of the dominant culture into which they aspire to assimilate. It is therefore largely a unilateral process.[14]

In contrast, however, assimilation goes beyond this and involves members of the dominant host culture also actually welcoming, or at least accepting, the acculturated newcomers as a part of their own community.  

Thus, van den Berghe argues that host populations sometimes resist the assimilation of even wholly acculturated and hence culturally indistinguishable out-groups. Examples of groups excluded in this way include, according to van den Berghe, pariah castes, such as the untouchable dalits of the Indian subcontinent, the Burakumin of Japan and blacks in the USA.[15]

In other words, assimilation, unlike acculturation, is very much a two-way street – just as it ‘takes two to tango’, so: 

“It takes two to assimilate” (p217).  

On the one hand, minority groups may sometimes themselves resist assimilation, or even acculturation, if they perceive themselves as better off maintaining their distinct identity. This is especially true of groups who perceive themselves as being, in some respects, better off than the host outgroup into which they refuse to be absorbed. 

Thus, middleman minorities, or market-dominant minorities, such as Jews in the West, the overseas Chinese in contemporary South-East Asia, the Lebanese in West Africa and South Asians in East Africa, being, on average, much wealthier than the bulk of the host populations among whom they live, often perceive no social or economic advantage to either assimilation or acculturation and hence resist the process, instead stubbornly maintaining their own language and traditions and marrying only among themselves. 

The same is also true, more obviously, of alien ruling elites, such as the colonial administrators, and settlers, in European colonial empires in Africa, India and elsewhere, for whom assimilation into native populations would have been anathema.

Passing’, ‘Pretendians’ and ‘Blackfishing’ 

Interestingly, just as market-dominant minorities, middleman minorities, and European colonial rulers usually felt no need to assimilate into the host society in whose midst they lived, because to do so would have endangered their privileged position within this host society, so recent immigrants to America may no longer perceive any advantage to assimilation. 

On the contrary, there may now be an economic disincentive operating against assimilation, at least if assimilation means forgoing the right to benefit from affirmative action in employment and college admissions. 

Thus, in the nineteenth and early twentieth centuries, the phenomenon of passing, at least in America, typically involved non-whites, especially light-skinned mixed-race African-Americans, attempting to pass as white or, if this were not realistic, sometimes as Native American.  

Some non-whites, such as Bhagat Singh Thind and Takao Ozawa, even brought legal actions in order to be racially reclassified as ‘white’ in order to benefit from America’s then overtly racialist naturalization law.

Contemporary cases of passing, however, though rarely referred to by this term, typically involve whites themselves attempting to somehow pass themselves off as some variety of non-white (see Hannam 2021). 

Recent high-profile examples have included Rachel Dolezal, Elizabeth Warren and Jessica Krug. 

Interestingly, all three of these women were both employed in academia and involved in leftist politics – two spheres in which adopting a non-white identity is likely to be especially advantageous, given the widespread adoption of affirmative action in college admissions and appointments, and the rampant anti-white animus that infuses so much of academia and the cultural Marxist left.[16]

Indeed, the phenomenon is now so common that it even has its own associated set of neologisms, such as ‘Pretendian’, ‘blackfishing’ and, in Australia, ‘box-ticker’.[17]

Indeed, one remarkable recent survey purported to find that fully 34% of white college applicants in the United States admitted to lying about their ethnicity on their applications, in most cases either to improve their chances of admission or to qualify for financial aid. 

Although Rachel Dolezal, Elizabeth Warren and Jessica Krug were all women, this survey found that white male applicants were even more likely to lie about their ethnicity than were white female applicants, with only 16% of white female applicants admitting to lying, as compared to nearly half (48%) of white males.[18]

This is, of course, consistent with the fact that it is white males who are the primary victims of affirmative action and other forms of discrimination.  

This strongly suggests that, whereas there were formerly social (and legal) benefits associated with identifying as white, today the advantages instead accrue to those able to assume a non-white identity.  

For all the talk of so-called ‘white privilege’, when whites and mixed-race people, together with others of ambiguous racial identity, preferentially choose to pose as non-white in order to take advantage of the perceived benefits of assuming such an identity, they are voting with their feet and thereby demonstrating what economists call revealed preferences

This, of course, means that recent immigrants to America, such as Hispanics, will have rather less incentive to integrate into the American mainstream than did earlier waves of European immigrants, such as Irish, Poles, Jews and Italians, the latter having been, primarily, the victims of discrimination rather than its beneficiaries. 

After all, who would want to be another boring, white ‘Anglo’ or unhyphenated American, when to do so would presumably mean relinquishing any right to benefit from affirmative action in job recruitment or college admissions, not to mention becoming a part of the hated white ‘oppressor’ class? 

In short, ‘white privilege’ isn’t all it’s cracked up to be. 

This perverse incentive against assimilation obviously ought to be worrying to anyone concerned with the future of America as a stable unified polity. 

Ethnostates – or Consociationalism

Given the ubiquity of ethnic conflict, and the fact that assimilation occurs, if at all, only gradually and, even then, only under certain conditions, a pessimist (or indeed a racial separatist) might conclude that the only way to prevent ethnic conflict is for different ethnic groups to be given separate territories with complete independence and territorial sovereignty. 

This would involve the partition of the world into separate ethnically homogenous ethnostates, as advocated by racial separatists and many in the alt-right. 

Yet, quite apart from the practical difficulties such an arrangement would entail, not least the need for large-scale forcible displacements of populations, this ‘universal nationalism’, as championed by political scientist Frank K Salter among others, would arguably only shift the locus of ethnic conflict from within the borders of a single multi-ethnic state to between those of separate ethnostates – and conflict between states can be just as destructive as conflict within states, as countless wars between states throughout history have amply proven.  

In the absence of assimilation, then, perhaps the fairest and least conflictual solution is what van den Berghe terms consociationalism. This term refers to a form of ethnic power-sharing, whereby elites from both groups agree to share power, each usually retaining a veto power regarding major decisions, and there is proportionate representation for each group in all important positions of power. 

This seems to be roughly the basis of the power-sharing arrangement imposed on Northern Ireland in the Good Friday Agreement, which was largely successful in bringing an end to the ethnic conflict known as ‘the Troubles’.[19]

On the other hand, however, power-sharing was explicitly rejected by both the ANC and the international anti-apartheid movement as a solution in another ethnically-divided polity, namely South Africa, in favour of majority rule – even though the result has been very similar to the situation in Northern Ireland that led to the Troubles, namely an effective one-party state, with a single party in power for successive decades and institutionalized discrimination against minorities.[20]

Consociationalism, or ethnic power-sharing, is also arguably the model towards which the USA and other western polities are increasingly moving, with quotas and so-called ‘affirmative action’ increasingly replacing the earlier ideals of appointment by merit, color blindness or freedom of association, and multiculturalism and cultural pluralism replacing the earlier ideal of assimilation. 

Perhaps the model consociationalist democracy is van den Berghe’s own native Belgium, where, he reports: 

“All the linguistic, class, religious and party-political quarrels and street demonstrations have yet to produce a single fatality” (p199).[21]

Belgium is, however, very much the exception rather than the rule, and, at any rate, though peaceful, remains very much a divided society. 

Indeed, power-sharing institutions, in giving official, institutional recognition to the existing ethnic divide, serve only to reinforce and ossify it, making successful integration and assimilation almost impossible – and certainly even less likely than in the absence of such institutional arrangements. 

Moreover, consociationalism can be maintained, van den Berghe emphasizes, only in a limited range of circumstances, the key criterion being that the groups in question are equal, or almost equal, to one another in status, and not organized into an ethnic hierarchy. 

However, even when the necessary conditions are met, it invariably involves a precarious balancing act. 

Just how precarious is illustrated by the fate of other formerly stable consociationalist states. Thus, van den Berghe notes the irony that earlier writers on the topic had cited Lebanon as “a model [consociationalist democracy] in the Third World” just a few years before the Lebanese Civil War broke out in the 1970s (p191). 

His point is, ironically, only strengthened by the fact that, in the three decades since his book was first published, two of his own examples of consociationalism, namely the USSR and Yugoslavia, have themselves since descended into civil war and fragmented along ethnic lines. 

Slavery and Other Recurrent Situations  

In the central section of the book, van den Berghe discusses such historically recurrent racial relationships as slavery, middleman minorities, caste and colonialism. 

In large part, his analyses of these institutions and phenomena do not depend on his sociobiological theory of ethnocentrism, and are worth reading even for readers unconvinced by this theory – or indeed for readers skeptical of sociobiology and evolutionary psychology altogether. 

Nevertheless, the sociobiological model continues to guide his analysis. 

Take, for example, his chapter on slavery. 

Although the overtly racial slavery of the New World was unique, slavery has often had an ethnic dimension, since slaves were typically captured during warfare from among enemy groups. 

Indeed, the very word slave is derived from the ethnonym, Slav, due to the frequency with which the latter were captured as slaves, both by Christians and Muslims.[22]

In particular, van den Berghe argues that: 

An essential feature of slave status is being torn out of one’s network of kin selection. This condition generally results from forcible removal of the slave from his home group by capture and purchase” (p120).

This then partly explains, for example, why European settlers were far less successful in enslaving the native inhabitants of the Americas than they were in exploiting the slave labour of African slaves who had been shipped across the Atlantic, far from their original kin groups, precisely for this purpose.[23]

Thus, for van den Berghe, the quintessential slave is: 

Not only involuntarily among ethnic strangers in a strange land: he is there alone, without his support group of kinsmen and fellow ethnics” (p115)

Here van den Berghe seemingly anticipates the key insight of Jamaican sociologist Orlando Patterson, whose comparative study of slavery, Slavery and Social Death, terms this key characteristic of slavery natal alienation.[24]

This, however, is likely to be only a temporary condition since, at least if allowed to reproduce, slaves would gradually, over time, put down roots, produce new families and, indeed, whole communities of slaves.[25]

When this occurs, however, slaves gradually, over generations, cease to be true slaves. The result is that: 

Slavery can long endure as an institution in a given society, but the slave status of individuals is typically only semipermanent and nonhereditary… Unless a constantly renewed supply of slaves enters a society, slavery, as an institution, tends to disappear and transform itself into something else” (p120). 

This then explains the gradual transformation of slavery during the medieval period into serfdom in much of Europe, and perhaps also the emergence of some pariah castes such as the untouchables of India. 

Paradoxically, van den Berghe argues that racism became particularly virulent in the West precisely because of Western societies’ ostensible commitment to notions of liberty and the rights of man, notions obviously incompatible with slavery. 

Thus, whereas most civilizations simply took the institution of slavery for granted, feeling no especial need to justify the practice, western civilization, given its ostensible commitment to such lofty notions as individual liberty and the equality of man, was always on the defensive, feeling a constant need to justify and defend slavery. 

The main justification hit upon was racialism and theories of racial superiority:

If it was immoral to enslave people, but if at the same time it was vastly profitable to do so, then a simple solution to the dilemma presented itself: slavery became acceptable if slaves could somehow be defined as somewhat less than fully human” (p115).  

This then explains much of the virulence of western racialism in much of the eighteenth, nineteenth and even early-twentieth centuries.[26]

Another important, and related, ideological justification for slavery was what van den Berghe refers to as ‘paternalism’. Thus, he observes that: 

All chattel slave regimes developed a legitimating ideology of paternalism” (p131). 

Thus, in the American South, the “benevolent master” was portrayed as a protective “father figure”, while slaves were portrayed as childlike and incapable of independent existence, and hence as benefiting from their own enslavement (p131). 

This, of course, was nonsense. As van den Berghe cynically observes: 

Where the parentage was fictive, so, we may assume, is the benevolence” (p131). 

Thus, exploitation was, in sociobiological terms, disguised as kin-selected parental benevolence.

However, despite the dehumanization of slaves, the imbalance of power between slave and master, together with men’s innate and evolved desire for promiscuity, made the sexual exploitation of female slaves by male masters all but inevitable.[27]

As van den Berghe observes: 

Even the strongest social barriers between social groups cannot block a specieswide [sic] sexual attraction. The biology of reproduction triumphs in the end over the artificial barriers of social prejudice” (p109). 

Thus, he notes the hypocrisy whereby: 

Dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women” (p33). 

The result was that the fictive ideology of ‘paternalism’ that served to justify slavery often gave way to literal paternity of the next generation of the slave population. 

This created two problems. First, it made the racial justification for slavery, namely the ostensible inferiority of black people, ring increasingly hollow, as ostensibly ‘black’ slaves acquired greater European ancestry, lighter skins and more Caucasoid features with each successive generation of miscegenation. 

Second, and more important, it also meant that the exploitation of this next generation of slaves by their owners potentially violated the logic of kin selection, because: 

If slaves become kinsmen, you cannot exploit them without indirectly exploiting yourself” (p134).[28]

This, van den Berghe surmises, led many slave owners to free those among the offspring of slave women whom they themselves, or their male relatives, had fathered. As evidence, he observes:  

In all [European colonial] slave regimes, there was a close association between manumission and European ancestry. In 1850 in the United States, for example, an estimated 37% of free ‘negroes’ had white ancestry, compared to about 10% of the slave population” (p132). 

This leads van den Berghe to conclude that many such free people of color – who were referred to as people of color precisely because their substantial degree of white ancestry precluded any simple identification as black or negro – had been freed by their owners precisely because their owners were now also their kinsmen. Indeed, many may have been freed by the very slave-master who had fathered them. 

Thus, to give a famous example, Thomas Jefferson is thought to have fathered six offspring, four of whom survived to adulthood, with his slave, Sally Hemings – who was herself already three-quarters white, and indeed Jefferson’s wife’s own half-sister, on account of miscegenation in previous generations. 

Of these four surviving offspring, two were allowed to escape, probably with Jefferson’s tacit permission or at least acquiescence, while the remaining two were freed upon his death in his will.[29]

This seems to have been a common pattern. Thus, van den Berghe reports: 

Only about one tenth of the ‘negro’ population of the United States was free in 1860. A greatly disproportionate number of them were mulattoes, and, thus, presumably often blood relatives of the master who emancipated them or their ancestors. The only other slaves who were regularly [freed] were old people past productive and reproductive age, so as to avoid the cost of feeding the aged and infirm” (p129). 

Yet this made the continuance of slavery almost impossible, because, with each new generation, more and more slaves would be freed.  

Other slave systems got around this problem by continually capturing or importing new slaves in order to replenish the slave population. However, this option was denied to American slaveholders by the abolition of the slave trade in 1807.

Instead, the Americans were unique in attempting to ‘breed’ slaves. This leads van den Berghe to conclude that: 

By making the slave woman widely available to her master…Western slavery thus literally contained the genetic seeds of its own destruction” (p134).[30]

Synthesising Marxism and Sociobiology 

Given the potential appeal of his theory to nationalists, and even to racialists, it is perhaps surprising that van den Berghe draws heavily on Marxist theory. Although Marxists were almost unanimously hostile to sociobiology, sociobiologists frequently emphasized the potential compatibility of Marxist theory and sociobiology (e.g. The Evolution of Human Sociality). 

However, van den Berghe remains, to my knowledge, the only figure (except myself) to successfully synthesize sociobiology and Marxism so as to produce novel theory.  

Thus, for example, he argues that, in almost every society in existence, class exploitation is disguised by an ideology (in the Marxist sense) that disguises exploitation as either: 

1) Kin-selected nepotistic altruism – e.g. the king or dictator is portrayed as benevolent ‘father’ of the nation; or
2) Mutually beneficial reciprocity – i.e. social contract theory or democracy (p60). 

However, contrary to orthodox Marxist theory, van den Berghe regards ethnic sentiments as more fundamental than class loyalty since, whereas the latter is “dependent on a commonality of interests”, the former is often “irrational” (p243). 

Nationalist conflicts are among the most intractable and unamenable to reason and compromise… It seems a great many people care passionately whether they are ruled and exploited by members of their own ethny or foreigners” (p62). 

In short, van den Berghe concludes: 

Blood runs thicker than money” (p243). 

Another difference is that, whereas Marxists view control over the so-called means of production (i.e. the means necessary to produce goods for sale) as the ultimate factor determining exploitation and conflict in human societies, Darwinians instead focus on conflict over access to what I have termed the means of reproduction – in other words, the means necessary to produce offspring (i.e. fertile females, their wombs and vaginas etc.). 

This is because, from a Darwinian perspective: 

The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (p165). 

Thus, unlike his contemporary Darwin, Karl Marx was, for all his ostensible radicalism, in his emphasis on economics rather than sex, just another Victorian sexual prude.[31]

Mating, Miscegenation and Intermarriage 

Given that reproduction, not production, is the ultimate focus of individual and societal conflict and competition, van den Berghe argues that questions of equality, inequality and assimilation must ultimately also be determined by reproductive, not economic, criteria. 

Thus, he concludes, intermarriage, especially if it occurs, not only frequently, but also in both directions (i.e. involves both males and females of both ethnicities, rather than always involving males of one ethnic group, usually the dominant ethnic group, taking females of the other ethnic group, usually the subordinate group, as wives), is the ultimate measure of racial equality and assimilation: 

Marriage, especially if it happens in both directions, that is with both men and women of both groups marrying out, is probably the best measure of assimilation” (p218). 

In contrast, however, he also emphasizes that mere “concubinage is frequent [even] in the absence of assimilation” (p218). 

Moreover, such concubinage invariably involves males of the dominant-group taking females from the subordinate-group as concubines, whereas dominant-group females are invariably off-limits as sexual partners for subordinate group males. 

Thus, van den Berghe observes, although “dominant group men, whether racist or not, are seldom reluctant to maximize their fitness with subordinate-group women”, they nevertheless are jealously protective of their own women and enforce strict double-standards (p33). 

For example, historian Wynn Craig Wade, in his history of the Ku Klux Klan (which I have reviewed here), writes: 

In [antebellum] Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation.” (The Fiery Cross: p20). 

The result, van den Berghe reports, is that: 

The subordinate group in an ethnic hierarchy invariably ‘loses’ more women to males of the dominant group than vice versa” (p75). 

Indeed, this same pattern is even apparent in the DNA of contemporary populations. Thus, geneticist James Watson reports that, whereas the mitochondrial DNA of contemporary Colombians, which is passed down the female line, shows a “range of Amerindian MtDNA types”, the Y-chromosomes of these same Colombians are 94% European. This leads him to conclude: 

The virtual absence of Amerindian Y chromosome types, reveals the tragic story of colonial genocide: indigenous men were eliminated while local women were sexually ‘assimilated’ by the conquistadors” (DNA: The Secret of Life: p257). 

As van den Berghe himself observes: 

It is no accident that military conquest is so often accompanied by the killing, enslavement and castration of males, and the raping and capturing of females” (p75). 

This, of course, reflects the fact that, in Darwinian terms, the ultimate purpose of power is to maximize reproductive success.

However, while the ethnic group as a whole inevitably suffers a diminution in its fitness, there is a decided gender imbalance in who bears the brunt of this loss. 

The men of the subordinate group are always the losers and therefore always have a reproductive interest in overthrowing the system. The women of the subordinate group, however frequently have the option of being reproductively successful with dominant-group males” (p27). 

Indeed, subordinate-group females are not only able, and sometimes forced, to mate with dominant-group males, but, in purely fitness terms, they may even benefit from such an arrangement.  

Hypergamy (mating upward for women) is a fitness enhancing strategy for women, and, therefore, subordinate-group women do not always resist being ‘taken over’ by dominant-group men” (p75). 

This is because, by so doing, they thereby obtain access both to the greater resources that dominant-group males are able to provide in return for sexual access or as provisioning for their offspring, and to the ‘superior’ genes which facilitated the conquest in the first place. 

Thus, throughout history, women and girls have been altogether too willing to consort and intermarry with their conquerors. 

The result of this gender imbalance in the consequences of conquest and subjugation is a lack of solidarity between the men and women of the subjugated group. 

This sex asymmetry in fitness strategies in ethnically stratified societies often creates tension between the sexes within subordinate groups. The female option of fitness maximization through hypergamy is deeply resented by subordinate group males” (p76). 

Indeed, even captured females who were enslaved by their conquerors sometimes did surprisingly well out of this arrangement, at least if they were young and beautiful, and hence lucky enough to be recruited into the harem of a king, emperor or other powerful male.

One slave captured in Eastern Europe even went on to become effective queen of the Ottoman Empire at the height of its power. Hurrem Sultan, as she came to be known, was, of course, exceptional, but only in degree. Members of royal harems may have been secluded, but they also lived in some luxury.

Indeed, even in puritanical North America, where concubinage was very much frowned upon, van den Berghe reports that “slavery was much tougher on men than on women”, since: 

Slavery drastically reduced the fitness of male slaves; it had little or no such adverse effect on the fitness of female slaves whose masters had a double interest – financial and genetic – in having them reproduce at maximum capacity” (p133). 

Van den Berghe even tentatively ventures: 

It is perhaps not far-fetched to suggest that, even today, much of the ambivalence in relations between black men and women in America… has its roots in the highly asymmetrical mating system of the slave plantation” (p133).[32]

Miscegenation and Intermarriage in Modern America 

Yet, curiously, patterns of interracial dating in contemporary America are anomalous – at least if we believe the pervasive myth that America is a ‘systemically racist’ society where black people are still oppressed and discriminated against.

On the one hand, genetic data confirms that, historically, matings between white men and black women were more frequent than the reverse, since African-American mitochondrial DNA, passed down the female line, is overwhelmingly African in origin, whereas their Y chromosomes, passed down the male line, are often European in origin (Lind et al 2007). 

However, recent census data suggests that this pattern is now reversed. Thus, black men are now about two and a half times as likely to marry white women as black women are to marry white men (Fryer 2007; see also Sailer 1997). 

This seemingly suggests that white American males are actually losing out in reproductive competition to black males. 

This observation led controversial behavioural geneticist Glayde Whitney to claim: 

By many traditional anthropological criteria African-Americans are now one of the dominant social groups in America – at least they are dominant over whites. There is a tremendous and continuing transfer of property, land and women from the subordinate race to the dominant race” (Whitney 1999: p95). 

However, this conclusion is difficult to square with the continued disproportionate economic deprivation of much of black America. In short, African-Americans may be reproductively successful, and perhaps even, in some respects, socially privileged, but, despite benefiting from systematic discrimination in their favor in employment and in admission to institutions of higher education, they are clearly also, on average, economically much worse off than whites and Asians in modern America.  

Instead, perhaps the beginnings of an explanation for this paradox can be sought in van den Berghe’s own later collaboration with anthropologist, and HBD blogger, Peter Frost. 

Here, in a co-authored paper, van den Berghe and Frost argue that, across cultures, there is a general sexual preference for females with somewhat lighter complexion than the group average (van den Berghe and Frost 1986). 

However, as Frost explains in a more recent work, Fair Women, Dark Men: The Forgotten Roots of Racial Prejudice, preferences with regard to male complexion are more ambivalent (see also Feinman & Gill 1977). 

Thus, according to the title of a novel, two films and a hit Broadway musical, ‘Gentlemen Prefer Blondes’ (who also reputedly, and perhaps as a consequence, have more fun), whereas the idealized male romantic partner is instead tall, dark and handsome. 

In subsequent work, Frost argues that ecological conditions in sub-Saharan Africa permitted high levels of polygyny, because women were economically self-supporting, and this increased the intensity of selection for traits (e.g. increased muscularity, masculinity, athleticism and perhaps outgoing, sexually-aggressive personalities) which enhance the ability of African-descended males to compete for mates and attract females (Frost 2008). 

In contrast, Frost argues that there was greater selection for female attractiveness (and perhaps female chastity) in areas such as Northern Europe and Northeast Asia, where, to successfully reproduce, women were required to attract a male willing to provision them during cold winters throughout their gestation, lactation and beyond (Frost 2008). 

This then suggests that African males have simply evolved to be, on average, more attractive to women, whereas European and Asian females have evolved to be more attractive to men.

This speculation is supported by a couple of recent studies of facial attractiveness, which found that black male faces were rated as most attractive to members of the opposite sex, but that, for female faces, the pattern was reversed (Lewis 2011; Lewis 2012). 

These findings could also go some way towards explaining patterns of interracial dating in the contemporary west (Lewis 2012). 

“The Most Explosive Aspect of Interethnic Relations” 

However, such an explanation is likely to be popular neither with racialists, for whom miscegenation is anathema, nor with racial egalitarians, for whom, as a matter of sacrosanct dogma, all races must be equal in all things, even aesthetics and sex appeal.[33]

Thus, when evolutionary psychologist Satoshi Kanazawa made a similar claim in a 2011 blog post, outrage predictably ensued: the post was swiftly deleted, his then-blog was dropped by its host, Psychology Today, and the author was reprimanded by his employer, the London School of Economics, and forbidden from writing any blog posts or other non-scholarly publications for a whole year. 

Yet all of this occurred within a year of the publication of the two papers cited above that largely corroborated Kanazawa’s finding (Lewis 2011; Lewis 2012). 

Such a reaction is, in fact, no surprise. As van den Berghe points out: 

It is no accident that the most explosive aspect of interethnic relations is sexual contact across ethnic (or racial) lines” (p75). 

After all, from a sociobiological perspective, competition over reproductive access to fertile females is Darwinian conflict in its most direct and primordial form.

Van den Berghe’s claim that interethnic sexual contact is “the most explosive aspect” of interethnic relations also has support from the history of racial conflict in the USA and elsewhere.

The spectre of interracial sexual contact, real or imagined, has motivated several of the most notorious racially-motivated ‘hate-crimes’ of American history, from the torture-murder of Emmett Till for allegedly propositioning a white woman, to the various atrocities of the Reconstruction-era Ku Klux Klan in defence of the ostensible virtue of ‘white womanhood’, to the recent Charleston church shooting, ostensibly committed in revenge for the allegedly disproportionate rate of rape of white women by black men.[34]

Meanwhile, interracial sexual relations are also implicated in some of American history’s most infamous alleged miscarriages of justice, from the Scottsboro Boys and Groveland Four cases, and the more recent Central Park jogger case, all of which involved allegations of interracial rape, to the comparatively trivial conduct alleged, but by no means trivial punishment imposed, in the so-called Monroe ‘kissing case’.

Allegations of interracial rape also seem to be the most common precursor of full-blown race riots.

Thus, in early-twentieth century America, the race riots in Springfield, Illinois in 1908, in Omaha, Nebraska in 1919, in Tulsa, Oklahoma in 1921 and in Rosewood, Florida in 1923 were all ignited, at least in part, by allegations of interracial rape or sexual assault.

Meanwhile, on the other side of the Atlantic, multi-racial Britain’s first modern post-war race riot, the Notting Hill riot in London in 1958, began with a public argument between an interracial couple, when white passers-by joined in on the side of the white woman against her black Jamaican husband (and pimp) before turning on them both. 

Meanwhile, Britain’s most recent unambiguous race riot, the 2005 Birmingham riot, an entirely non-white affair, was ignited by the allegation that a black girl had been gang-raped by South Asians.

[Edit: Interestingly, Britain’s latest race riot, which occurred in Kirkby, Merseyside, and took place some months after this piece was first posted, also follows the same pattern, having been provoked by the allegation that local underage girls were being sexually propositioned and harassed by asylum seekers who were being housed in a local hotel.]

Meanwhile, at least in the west, whites no longer seem to participate in race riots, save as victims. An exception, however, was the 2005 Cronulla riots in Sydney, Australia, which were ignited by the allegation that Middle Eastern males were sexually harassing white Australian girls on Sydney beaches. 

Similarly, in Britain, though riots have yet to result, the spectre of so-called Muslim grooming gangs, preying on, and pimping out, underage white British girls in northern towns across England, has arguably done more to ignite anti-Muslim sentiment among whites in the UK than a whole series of Jihadist terrorist attacks on British civilian targets.

Thus, in Race: The Reality of Human Differences (which I have reviewed here), Sarich and Miele caution that miscegenation, often touted as a panacea for racism because, if practiced sufficiently widely, it would eventually eliminate all racial differences, or at least blur the lines between racial groups, may, at least in the short term, actually incite racist attacks. 

This, they argue, is because: 

Viewed from the racial solidarist perspective, intermarriage is an act of race war. Every ovum that is impregnated by the sperm of a member of a different race is one less of that precious commodity to be impregnated by a member of its own race and thereby ensure its survival” (Race: The Reality of Human Differences: p256). 

This “racial solidarist perspective” is, of course, a crudely group selectionist view of Darwinian competition, and it leads Sarich and Miele to hypothesize: 

Paradoxically, intermarriage, particularly of females of the majority group with males of a minority group, is the factor most likely to cause some extremist terrorist group to feel the need to launch such an attack” (Race: The Reality of Human Differences: p255). 

In other words, in sociobiological terms, ‘Robert’, a character from one of Michel Houellebecq’s novels, has it right when he claims: 

What is really at stake in racial struggles… is neither economic nor cultural, it is brutal and biological: It is competition for the cunts of young women” (Platform: p82). 

Endnotes

[1] Admittedly, the Croatian War of Independence is indeed sometimes said to have been triggered, or at least precipitated, by a football match between Dinamo Zagreb and Red Star Belgrade, and the riot that occurred at the ground on that day. However, this war was, of course, ethnic in origin, fought between Croats and Serbians, and the football match served as a triggering event only because the two teams were overwhelmingly supported by Croats and Serbians respectively.
This leads to an interesting observation – namely that rivalries such as those between supporters of different football teams tend to become especially malignant and acrimonious when support for one team or the other comes to be inextricably linked to ethnic identity.
Thus it is surely no accident that, in the UK, the most intense rivalry between groups of football supporters is that between supporters of Rangers and Celtic in Glasgow, at least in part because the rivalry has become linked to religion, which was, at least until recently, a marker for ancestry and ethnicity, while an apparently even more intense rivalry was that between Linfield and Belfast Celtic in Northern Ireland, which was also based on a parallel religious and ethnic divide, and which ultimately became so acrimonious that one of the two teams had to withdraw from domestic football and ultimately ceased to exist.

[2] Actually, however, contrary to Brigandt’s critique, it is clear that van den Berghe intended his “biological golden rule” only as a catchy and memorable aphorism, crudely summarizing Hamilton’s rule, rather than a quantitative scientific law akin to, or rivalling, Hamilton’s Rule itself. Therefore, this aspect of Brigandt’s critique is, in my view, misplaced. Indeed, it is difficult to see how this supposed rule could be applied as a quantitative scientific law, since relatedness, on the one hand, and altruism, on the other, are measured in different currencies. 
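For readers unfamiliar with it, Hamilton’s rule, the quantitative result that van den Berghe’s aphorism loosely paraphrases, can be stated as follows (this is the standard textbook formulation, not a formula from van den Berghe’s own text):

```latex
% Hamilton's rule: an altruistic act is favored by selection when
%   r b > c
% where:
%   r = coefficient of relatedness between actor and recipient
%   b = fitness benefit conferred on the recipient
%   c = fitness cost incurred by the actor
\[
  r \, b > c
\]
```

Because b and c are both measured in the same units of fitness, the inequality is well-defined, whereas the ‘biological golden rule’ invokes relatedness alone, which is precisely why it cannot function as a quantitative law in its own right.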

[3] Thus, van den Berghe concedes that: 

In many cases, the common descent ascribed to an ethny is fictive. In fact, in most cases, it is partly fictive” (p27). 

[4] The question of racial nationalism (i.e. encompassing all members of a given race, not just those of a single ethnicity or language group) is actually more complex. Certainly, members of the same race do indeed share some degree of kinship, in so far as they are indeed (almost by definition) on average more closely biologically related to one another than to members of other races – and indeed that relatedness is obviously apparent in their phenotypic resemblance to one another. This suggests that racial nationalist movements such as that of, say, UNIA or of the Japanese imperialists, might have more potential as a viable form of nationalism than do attempts to unite racially disparate ethnicities, such as civic nationalism in the contemporary USA. The same may also be true of Oswald Mosley’s Europe a Nation campaign, at least while Europe remained primarily monoracial (i.e. white). However, any such racial nationalism would incorporate a far larger and more culturally, linguistically and genetically disparate group than any form of nationalism that has previously proven capable of mobilizing support.
Thus, Marcus Garvey’s attempt to create a kind of pan-African ethnic identity enjoyed little success and was largely restricted to North America, where African-Americans do indeed share a common language and culture in addition to their race. Similarly, the efforts of Japanese nationalists to mobilize a kind of pan-Asian nationalism in support of their imperial aspirations during the first half of the twentieth century were an unmitigated failure, though this was partly because of the brutality with which they conquered and suppressed the other Asian nationalities whose support for pan-Asianism they intermittently and half-heartedly sought to enlist.
On the other hand, it is sometimes suggested that, in the early twentieth century, a white supremacist ideology was largely taken for granted among whites. However, while to some extent true, this shared ideology of white supremacism did not prevent the untold devastation wrought by the European wars of the early twentieth century, namely World Wars I and II, which Patrick Buchanan has collectively termed The Great Civil War of the West.
Thus, European nationalisms usually defined themselves by opposition to other European peoples and powers. Thus, just as Irish nationalism is defined largely by opposition to Britain, and Scottish nationalism by opposition to England, so English (and British) nationalism has itself traditionally been directed against rival European powers such as France and Germany (and formerly Spain), while French nationalism seems to have defined itself primarily in opposition to the Germans and the British, and German nationalism in opposition to the French and Slavs, etc.
It is true that, in the USA, a kind of pan-white American nationalism did seem to prevail in the early twentieth century, albeit initially limited to white protestants, and excluding at least some recent European immigrants (e.g. Italians, Jews). This is, however, a consequence of the so-called melting pot, and really only amounts to yet another parochial nationalism, namely that of a newly-formed ethnic group – white Americans.
At any rate, today white American nationalism is, at most, decidedly muted in form – a kind of implicit white racial consciousness, or, to coin a phrase, the nationalism that dare not speak its name. Thus, van den Berghe observes: 

“In the United States, the whites are an overwhelming majority, so much so that they cannot be meaningfully conceived of as a ruling group at all. The label ‘white’ in the United States does not correspond to a well-defined ethnic or racial group with a high degree of social organization or even self-consciousness, except regionally in the south” (p183). 

Van den Berghe wrote this in 1981. Today, of course, whites are no longer such an “overwhelming majority” of the US population. On the contrary, they are already well on the way to becoming a minority in America, a milestone that is likely to be reached over the coming decades.
Yet, curiously, white ‘racial consciousness’ is seemingly even more muted and implicit today than it was when van den Berghe authored his book – and this is true even in the South, which van den Berghe cited as an exception and lone bastion of white identity politics.
True, White Southerners may vote as solidly for Republican candidates as they once did for the Democrats. However, overt appeals to white racial interests are now as anathema in the South as elsewhere.
Thus, as recently as 1990, a more or less open white racialist like David Duke was able to win a majority of the white vote in Louisiana in his run for the Senate. Today, this is unimaginable.
If the reason that whites lack any ‘racial consciousness’ is indeed, as van den Berghe claims, because they represent such an “overwhelming majority” of the American population, then it is interesting to speculate if and when, during the ongoing process of white demographic displacement, this will cease to be the case.
One thing seems certain: If and when it does ever occur, it will be too late to make any difference to the ongoing process of demographic displacement that some have termed ‘The Great Replacement’ or a third demographic transition.

[5] Of course, a preference for those who look similar to oneself (or one’s other relatives) may itself function as a form of kin recognition (i.e. of recognizing who is kin and who is not). This is referred to in biology as phenotype matching. Moreover, as Richard Dawkins has speculated in The Selfish Gene (reviewed here), racial feeling could conceivably have evolved through a misfiring of such a crude heuristic (The Selfish Gene: p100).

[6] Actually, I suspect that, at least historically, both mothers and fathers may indeed, on average, have provided rather less care for their mixed-race offspring than for offspring of the same race as themselves, simply because mixed-race offspring were more likely to be born out of wedlock, not least because interracial marriage was, until recently, strongly frowned upon, and, in some jurisdictions, either not legally permitted or even outright criminalized. Both mothers and fathers tended to provide less care for illegitimate offspring: fathers, because they often refused to acknowledge their illegitimate offspring, had little or no contact with them, and may not even have been aware of their existence; mothers, because, lacking paternal support, they usually had no means of raising their illegitimate offspring alone and hence often gave them up for adoption or fostering.

[7] On the other hand, in his paper, ‘An integrated evolutionary perspective on ethnicity’, the controversial anti-Semitic evolutionary psychologist Kevin MacDonald disagrees with this conclusion, citing personal communication from geneticist and anthropologist Henry Harpending for the argument that: 

“Long distance migrations have easily occurred on foot and over several generations, bringing people who look different for genetic reasons into contact with each other. Examples include the Bantu in South Africa living close to the Khoisans, or the pygmies living close to non-pygmies. The various groups in Rwanda and Burundi look quite different and came into contact with each other on foot. Harpending notes that it is ‘very likely’ that such encounters between peoples who look different for genetic reasons have been common for the last 40,000 years of human history; the view that humans were mostly sessile and living at a static carrying capacity is contradicted by history and by archaeology. Harpending points instead to ‘starbursts of population expansion’. For example, the Inuits settled in the arctic and exterminated the Dorsets within a few hundred years; the Bantu expansion into central and southern Africa happened in a millennium or less, prior to which Africa was mostly the yellow (i.e., Khoisan) continent, not the black continent. Other examples include the Han expansion in China, the Numic expansion in northern America, the Zulu expansion in southern Africa during the last few centuries, and the present day expansion of the Yanomamo in South America. There has also been a long history of invasions of Europe from the east. ‘In the starburst world people would have had plenty of contact with very different looking people’” (Macdonald 2001: p70). 

[8] Others have argued that the differences between Tutsi and Hutu are indeed largely a western creation, part of the divide-and-rule strategy supposedly deliberately employed by European colonialists, along with a theory of Tutsi racial superiority promulgated by European racial anthropologists, known as the Hamitic theory of Tutsi origins, which suggested that the Tutsi had migrated from the Horn of Africa and had benefited from Caucasoid ancestry, as reflected in their supposed physical differences from the indigenous Hutu (e.g. lighter complexions, greater height, narrower noses).
On this view, the distinction between Hutu and Tutsi was originally primarily socioeconomic rather than racial, and, at least formerly, the boundaries between the two groups were quite fluid.
I suspect this view is nonsense, reflecting political correctness and the leftist tendency to excuse any evidence of dysfunction or oppression in non-Western cultures as necessarily a product of the malign influence of western colonizers. (Most preposterously, even the Indian caste system has been blamed on British colonizers, although it actually predated them, in one form or another, by several thousand years.)
With respect to the division between Tutsi and Hutu, there are not only morphological differences between the two groups in average stature, nose width and complexion, but also substantial differences in the prevalence of genes for lactose tolerance and sickle-cell trait. These differences do indeed seem to suggest that, as predicted by the reviled ‘Hamitic theory’, the Tutsi have affinities with populations from the Horn of Africa and East Africa. Modern genome analysis tends to confirm this conclusion. 

[9] Exceptions, where immigrant groups retain their distinctive language for multiple generations, occur where immigrants speaking a particular language arrive in sufficient numbers, and are sufficiently isolated in ethnic enclaves and ghettos, that they mix primarily or exclusively with people speaking the same language as themselves. A related exception is in respect of economically, politically or socially dominant minorities, such as alien colonizers, as well as market-dominant or middleman minorities, who often resist assimilation into the mainstream culture precisely so as to maintain their cultural separateness and hence their privileged position within society, and who also, partly for this reason, take steps to socialize, and ensure their offspring socialize, primarily among their own group. 

[10] Some German-Americans were also interned during World War II. However, far fewer were interned than among Japanese-Americans, especially on a per capita basis.
Nevertheless, some German-Americans were treated very badly indeed, yet, unlike the Japanese, they have yet to receive a government apology or compensation. Moreover, there was perhaps some justification for the differing treatment accorded Japanese- and German-Americans: German-Americans were generally longer established and, being white, were also more successfully integrated into mainstream American society, whereas there was perceived to be a real threat of enemy sabotage by the Japanese.
Also, with regard to van den Berghe’s observation that atomic weapons were used only against Japan, this is rather misleading. Nuclear weapons could not have been used against Germany, since, by the time of the first test detonation of a nuclear device, Germany had already surrendered. Yet, in fact, the Manhattan Project seems to have begun with the Germans very much in mind as a prospective target. (Many of the scientists involved were Jewish, having fled Nazi-occupied Europe for America, and hence their hostility towards the Nazis, and perhaps Germans in general, is easy to understand.)
Whether it is true that, as van den Berghe claims, atomic bombs were never actually likely to be “dropped over, say, Stuttgart or Dortmund” is a matter of supposition. Certainly, there was great animosity towards the Germans in America, as illustrated by the Morgenthau Plan, which, although ultimately never put into practice, was initially highly influential in directing US policy in Europe and even supported by President Roosevelt.
On the other hand, Roosevelt’s references to ‘the Nazis, the Fascists, and the Japanese’ might simply reflect the fact that there was no obvious name for the faction or regime in control of Japan during the Second World War, since, unlike in Germany and Italy, no named political party had seized power. I am therefore unconvinced that a great deal can necessarily be read into this.

[11] This was especially so in earlier times, before the development of improved technologies of long-distance transportation (ships, aeroplanes) enabled more distantly related populations to come into contact, and hence conflict, with one another (e.g. blacks and whites in the USA and South Africa, South Asians and English in the UK or under the British Raj). Thus, the ancient Indian treatise on statecraft and strategy, Arthashastra, observed that a ruler’s natural enemies are his immediate neighbours, whereas his next-but-one neighbours, being immediate neighbours of his own immediate neighbours, are his natural allies. This is sometimes credited as the origin of the famous aphorism, The enemy of my enemy is my friend.

[12] The idea that neighbouring groups tend to be in conflict with one another precisely because, being neighbours, they are also in close contact, and hence competition, with one another, ironically posits almost the exact opposite relationship between ‘contact’ and intergroup relations to that posited by the famous contact theory of mid-twentieth-century psychology, which held that increased contact between members of different racial and ethnic groups would lead to reduced prejudice and animosity.
This, of course, depends, at least partly, on the nature of the ‘contact’ in question. Contact that involves territorial rivalry, economic competition and war, obviously exacerbates conflict and animosity. In contrast, proponents of contact theory typically had in mind personal contact, rather than, say, the sort of impersonal, but often deadly, contact that occurs between rival belligerent combatants in wartime.
In fact, however, even at the personal level, contact can take many different forms, and often functions to increase inter-ethnic animosity. Hence the famous proverb, ‘familiarity breeds contempt’.
Indeed, social psychologists now concede that only ‘positive’ interactions with members of other groups (e.g. friendship, cooperation, acts of altruism, mutually beneficial trade) reduce animosity and conflict.
In contrast, negative interactions (e.g. being robbed, mugged or attacked by members of another group) only serve to reinforce, exacerbate, or indeed create intergroup animosity. This, of course, reduces the contact hypothesis to little more than common sense – positive experiences with a given group lead to positive perceptions of that group; negative interactions to negative perceptions.
This in turn suggests that stereotypes are often based on real experiences and therefore tend to be true – if not of all individuals, then at least at the statistical, aggregate group level.
I would add that, anecdotally, even positive interactions with members of disdained outgroups do not always shift perceptions regarding the disdained outgroup as a whole. Instead, the individuals with whom one enjoys positive interactions, and even friendships, are often seen as exceptions to the rule (‘one of the good ones’), rather than representative of the demographic to which they belong. Hence the familiar phenomenon of even virulent racists having friendships and sometimes even heroes among members of races whom they generally otherwise disdain. 

[13] However, Van den Berghe acknowledges that racially diverse societies have lived in “relative harmony” in places such as Latin America, where government gives no formal political recognition to racial groups (e.g. racial preferences and quotas for members of certain races) and where the latter do not organize on a racial basis, such that government is, in van den Berghe’s terminology, “non-racial” rather than “multiracial” (p190). Yet this is perhaps a naïvely benign view of race relations in Latin American countries such as Brazil, which, despite the fluidity of racial identity and the lack of clear dividing lines between races, is now viewed by most social scientists, not as a model racial democracy, but rather as a racially-stratified pigmentocracy, where skin tone correlates with social status. It is also arguably an outdated view of race relations in Latin America, because, perhaps due to indirect cultural and political influence emanating from the USA, ethnic groups in much of Latin America (e.g. blacks in Brazil, indigenous populations in Bolivia) increasingly do organize and agitate on a racial basis.

[14] I am careful here not to refer to the dominant culture as that of either a ‘host population’ or a ‘majority population’, or to the subordinate group as a ‘minority group’ or an incoming group of migrants. This is because sometimes newly-arrived settlers successfully assimilate the indigenous populations among whom they settle, and sometimes it is the majority group who ultimately assimilate to the norms and culture of the minority. Thus, for example, the Anglo-Saxons imposed their Germanic language on the indigenous inhabitants of what is today England, and indeed ultimately most of the inhabitants of Scotland, Wales and Ireland as well, even though they likely never represented a majority of the population even in England, and may have made only a comparatively modest contribution to the ancestry of the people whom we today call ‘English’.

[15] Interestingly, and no doubt controversially, Van den Berghe argues that blacks in the USA do not have any distinctive cultural traits that distinguish them from the white American mainstream, and that their successful assimilation has been prevented only by the fact that, until very recently, whites have refused to ‘assimilate’ them. He is particularly skeptical regarding the notion of any cultural inheritances from Africa, dismissing “the romantic search for survivals of African Culture” as “elusive” (p177).
Indeed, for van den Berghe, the whole notion of a distinct African-American culture is “largely ideological and romantic” (p177). “Afro-Americans are,” he argues, “culturally ‘Anglo-Saxon’” and hence paradoxically “as Anglo as anyone… in America” (p177). He concludes:

“The case for ‘black culture’ rests… largely on the northern ghetto lumpenproletariat, a class which has no direct counterpart. Even in that group, however, much of the distinctiveness is traceable to their southern, rural origins” (p177). 

This reference to “southern rural origins” anticipates Thomas Sowell’s later black redneck hypothesis. Certainly, many aspects of black culture, such as dialect (e.g. the use of terms such as y’all and ain’t and the pronunciation of ‘whores’ as ‘hoes’) and stereotypical fondness for fried chicken, are obvious inheritances from Southern culture rather than distinctively black, let alone an inheritance from Africa. Thus, van den Berghe observes:

“Ghetto lumpenproletariat blacks in Chicago, Detroit and New York may seem to have a distinct subculture of their own compared collectively to their white neighbors, but the black Mississippi sharecropper is not very different, except for his skin pigment, from his white counterparts” (p177). 

Any remaining differences not attributable to their Southern origins are, van den Berghe claims, not “African survivals, but adaptation to stigma” (p177). Here, van den Berghe perhaps has in mind the inverse morality, celebration of criminality, and ‘bad nigger’ archetype prevalent in, for example, gangsta rap music. Thus, van den Berghe concludes that: 

“Afro-Americans owe their distinctiveness overwhelmingly to the fact that they have been first enslaved and then stigmatized as a pariah group. They lack a territorial base, the necessary economic and political resources and the cultural and linguistic pluralism ever to constitute a successful nation. Their pluralism is strictly a structural pluralism inflicted on them by racism. A stigma is hardly an adequate basis for successful nationalism” (p184). 

[16] Thus, Elizabeth Warren, a law professor who became a Democratic Party Senator and presidential candidate, had described herself as ‘American Indian’, and been cited by her university employers as an ethnic minority, in order to benefit from informal affirmative action, despite having only a very small amount of Native American ancestry. Krug and Dolezal, meanwhile, taking advantage of the one-drop rule, both identified as African-American, Krug, a history professor and leftist activist, trading on her Middle-Eastern appearance, itself likely a reflection of her Jewish ancestry. Dolezal, for her part, was formerly a white, blonde girl who, through the simple expedient of getting a perm and a tan, managed to become an adjunct professor of black studies at a local university and local chapter president of the NAACP in an overwhelmingly white town and state. Whoever said blondes have more fun? 

[17] It has even given rise to a popular new hairstyle among young white males attempting to escape the stigma of whiteness by adopting a racially ambiguous appearance – the mulatto perm.

[18] Interestingly, the examples cited by Paddy Hannam in his piece on the phenomenon, ‘The rise of the race fakers’, also seem to have been female (Hannam 2021). Steve Sailer wisely counsels caution with regard to such findings, noting that anyone willing to lie about their ethnicity on their college application is likely even more willing to lie in an anonymous survey (Sailer 2021; see also Hood 2007). 

[19] Actually, the Northern Ireland settlement is often classed as centripetalist rather than consociationalist. However, the distinction is minimal, with the former arrangement representing a modification of the latter designed to encourage cross-community cooperation, and prevent, or at least mitigate, the institutionalization and ossification of the ethnic divide that is perceived to occur under consociationalism, where constitutional recognition is accorded to the divide between the two (or more) communities. There is, however, little evidence that centripetalism has ever actually been successful in encouraging cross-community cooperation, beyond what is necessitated by the constitutional system, let alone in encouraging assimilation of the rival communities and the depoliticization of ethnic identity. 

[20] The reason for the difference in the attitudes of leftists and liberals towards majority-rule in Northern Ireland and South Africa respectively seems to reflect the fact that, whereas in Northern Ireland, the majority Protestant population were perceived as the dominant ‘oppressor’ group, the black majority in South Africa were perceived as oppressed.
However, it is hard to see why this would mean black majority-rule in South Africa would be any less oppressive of South Africa’s white, coloured, and Asian minorities than Protestant majority rule had been of Catholics in Ulster. On the contrary, precisely because the black majority in South Africa perceive themselves as having been ‘oppressed’ in the past, they are likely to be especially vengeful and feel justified in seeking recompense for their earlier perceived oppression. This indeed seems to be what is occurring in South Africa, and Zimbabwe, today. 
Interestingly, van den Berghe, writing in 1981, was remarkably prescient regarding the long-term prospects both for apartheid and for white South Africans. Thus, on the one hand, he predicted: 

“Past experience with decolonization elsewhere in Africa, especially in Zimbabwe (which is in almost every respect a miniature version of South Africa) seems to indicate that the end of white domination is in sight. The only question is whether it will take the form of a prolonged civil war, a negotiated partition or a frantic white exodus. The odds favor, I think, a long escalating war of attrition accompanied by a gradual economic winddown and a growing white emigration” (p174). 

Thus, van den Berghe was right insofar as he predicted the looming end of the apartheid system – though he was hardly unique in making this prediction. However, he was wrong in his predictions as to how this end would come about. On the other hand, with ongoing farm murders and the overtly genocidal rhetoric of populist politicians like Julius Malema, van den Berghe was probably right regarding the long-term prognosis for the white community in South Africa when he observed: 

“Five million whites perched precariously at the tip of a continent inhabited by 400 million blacks, with no friends in sight. No matter what happens whites will lose heavily, perhaps their very lives, or at least their place in the African sun that they love so much” (p172). 

However, perhaps surprisingly, van den Berghe denies that apartheid was entirely a failure: 

“Although apartheid failed in the end, it was a rational course for the Afrikaners to take, given their collective aims, and probably did postpone the day of reckoning by about 30 years” (p174).

[21] The only other polity that perhaps has a competing claim to representing the world’s model consociationalist democracy is Switzerland. However, van den Berghe emphasizes that Switzerland is very much a special case, the secret of its success being that:

“Switzerland is one of those rare multiethnic states that did not originate either in conquest or in the breakdown of multinational empires” (p194).

It managed to avoid conquest by its richer and more powerful neighbours simply because:

“The Swiss had the dual advantage in resisting outside conquest: favorable terrain and lack of natural resources” (p194).

Also, it provided valuable services to these neighbours, first providing mercenaries to fight in their armed forces and later specialising in the manufacture of watches and what van den Berghe terms “the management of shady foreigners’ ill-gotten capital” (p194).
In reality, however, although divided linguistically and religiously, Switzerland does not, in van den Berghe’s view, constitute true consociationalism, since the country, which originated as a confederation of formerly independent hill tribes, remains highly decentralized, and power is shared, not by ethnic groups, but rather between regional cantons. Therefore, van den Berghe concludes:

“The ethnic diversity of Switzerland is only incidental to the federalism, it does not constitute the basis for it” (p196-7).

In addition, most cantons, where much of the real power lies, are themselves relatively monoethnic and monolinguistic, at least as compared to the country as a whole.

[22] Indeed, since the Slavs of Eastern Europe were the last group in Europe to be converted to Christianity, and since it was forbidden by Papal decree to enslave fellow-Christians or sell Christian slaves to non-Christians (i.e. Muslims, among whom there was a great demand for European slaves), Slavs were preferentially targeted by Christians for enslavement. Indeed, even those non-Slavic people who were enslaved or sold into bondage were often falsely described as Slavs in order to justify their enslavement and sale to Muslim slaveholders. The Slavs, for geographic reasons, were also vulnerable to capture and enslavement directly by the Muslims themselves.

[23] Another reason that it proved difficult to enslave the indigenous inhabitants of the Americas, according to van den Berghe, is the lifestyle of the latter prior to colonization. Thus, prior to the arrival of European colonists, the indigenous people in many parts of the Americas were still relatively primitive, many subsisting, in whole or in part, as nomadic or semi-nomadic hunter-gatherers. This meant, not only that they had low population densities and were hence few in number and vulnerable to infectious diseases introduced by European colonizers, but also that:

“Such aborigines as existed were mobile, elusive and difficult to control. They typically had a vast hinterland into which they could escape labor exploitation” (p93).

Thus, van den Berghe reports, when, in what is today Brazil, Portuguese colonists led raiding expeditions in an attempt to capture and enslave natives, so many of the latter “escaped, committed suicide or died of disease” that the attempt was soon abandoned (p93).
Perhaps more interestingly, van den Berghe also argues that another reason that it proved difficult to enslave nomadic peoples was that:

“Nomads typically are unused to being exploited since their own societies are often relatively egalitarian, ill-adapted to steady hard labor and lacking in the skills useful to colonial exploiters (as cultivators, for example). They are, in short, lovers of freedom and make very poor colonial underlings… They are regarded by their conquerors as lazy, shiftless and unreliable, as an obstacle to development and as a nuisance to be displaced” (p93).

In contrast, whereas sub-Saharan Africa is usually stereotyped, not entirely inaccurately, as technologically backward as compared to other cultures, and this very backwardness is assumed to have facilitated the enslavement of its peoples, in fact, van den Berghe explains, it was the relatively socially advanced nature of West African societies that permitted the transatlantic slave trade to be so successful.

“Contrary to general opinion, Africans were so successfully enslaved, not because they belonged to primitive cultures, but because they had a complex enough technology and social organization to sustain heavy losses of manpower without appreciable depopulation. Even the heavy slaving of the 18th century made only a slight impact on the demography of West Africa. The most heavily raided areas are still today among the most densely populated” (p126).

[24] Although this review is based on the 1987 edition, The Ethnic Phenomenon was first published in 1981, whereas Orlando Patterson’s Slavery and Social Death came out just a year later in 1982.

[25] Much is made of the practice, in the antebellum American South, of slave-owners selling the spouses and offspring of their slaves to other masters, thereby breaking up families. On the basis of van den Berghe’s arguments, this might actually have represented an effective means of preventing slaves from putting down roots and developing families and slave communities, and might therefore have helped perpetuate the institution of slavery.
However, even assuming that such practices would indeed have had this effect, it is doubtful that there was any such deliberate long-term policy among slaveholders to break up families in this way. On the contrary, van den Berghe reports:

“It is not true that slave owners systematically broke up slave couples… On the contrary, it was in their interest to foster stable slave families for the sake of morale, and to discourage escape” (p133). 

Thus, though such separations certainly occurred and were no doubt tragic where they did, slaveholders generally preferred to keep slave families intact, precisely because, in forming families, slaves would indeed ‘put down roots’ and hence be less likely to try to escape, lest they leave other family members behind to face the vengeance of their former owners alone, without any protection and support they might otherwise have been in a position to offer. The threat of breaking up families, however, surely remained a useful tool in the arsenal of slaveholders for maintaining control over slaves. 

[26] While acknowledging, and indeed emphasizing, the virulence of western racialism, van den Berghe, bemoaning the intrusion of “moralism” (and, by extension, ethnomasochism) into scholarship, has little time for the notion that western slavery was intrinsically more malign than forms of slavery practised in other parts of the world or at other times in history (p116). This, he dismisses as “the guilt ascription game: whose slavery was worse?” (p128).
Whereas today, when discussing slavery, white liberal ethnomasochists focus almost exclusively on black slaves in the American South, forms of slavery practised concurrently in other parts of the world were, in many respects, even more brutal. For example, male slaves in the Islamic world were routinely castrated before being sold (p117). 
Given the dangers of this procedure, and the unsterile conditions under which it was performed, Thomas Sowell, in his excellent essay ‘The Real History of Slavery’, reports that “the great majority of those operated on died as a result” (Black Rednecks and White Liberals: p126). Indeed, van den Berghe himself reports that as many as “80 to 90% died of the operation” (p117).
In contrast, while it is true that slaves in the American South had unusually low rates of manumission (i.e. the granting of freedom to slaves), they also enjoyed surprisingly high standards of living, were well-fed and enjoyed long lives. Indeed, not only did slaves in the American South enjoy standards of living superior to those of most other slave populations, they even enjoyed, by some measures, living standards comparable to many non-slave populations, including industrial workers in Europe and the Northern United States, and poor white Southerners, during the same time period (The End of Racism: p88-91; see also Time on the Cross: the Economics of American Slavery). 
Ironically, living standards were so high for the very same reason that rates of manumission were so low – namely, slaves, especially after the abolition and suppression of the transatlantic slave-trade (but also even before then due to the costs of transportation during the Middle Passage) were an expensive commodity. Masters therefore fully intended to get their money’s worth out of their slaves, not only by rarely granting them their freedom, but also ensuring that they lived a long and healthy life.
In this endeavour, they were surprisingly successful. Thus, van den Berghe reports, in the fifty years that followed the prohibition on the import of new slaves into the USA in 1808, the black population of the USA nevertheless more than tripled (p128). In short, slaves may have been property, but they were valuable property – and slaveholders made every effort to protect their investment.
Ironically, therefore, indentured servants (themselves, in America, often white, and later, in Africa, usually South or East Asian) were, during the period of their indenture, often worked harder, and forced to live in worse conditions, than were actual slaves. This was because, since they were indentured for only a set number of years before they would be free, there was less incentive on the part of their owners to ensure that they lived a long and healthy life.
For example, Thomas Sowell reports how, in the antebellum American South, the most dangerous work on cotton plantations was often reserved for Irish labourers, not slaves, precisely because slaves were too valuable to be risked by employing them in such work (Applied Economics: p37-38).
Van den Berghe concludes: 

“The blanket ascription of collective racial guilt for slavery to ‘whites’ that is so dear to many liberal social scientists is itself a product of the racist mentality produced by slavery. It takes a racist to ascribe causality and guilt to racial categories” (p130). 

Indeed, as Dinesh D’Souza in The End of Racism and Thomas Sowell in his essay ‘The Real History of Slavery’, included in the collection Black Rednecks and White Liberals, both emphasize, whereas all civilizations have practised slavery, what was unique about western civilization was that it was the first civilization ever known to have abolished slavery (at, as it ultimately turned out, no little economic cost to itself).
Therefore, even if liberals and leftists do insist that we play what van den Berghe disparagingly calls “the guilt ascription game”, then white westerners actually come out rather well in the comparison.
As Thomas Sowell observes in this context:

“Often it is those who are most critical of a ‘Eurocentric’ view of the world who are most Eurocentric when it comes to the evils and failings of the human race” (Black Rednecks and White Liberals: p111).

[27] Indeed, in most cultures and throughout most of history, the use of female slaves as concubines was, not only widespread, but also perfectly socially acceptable. For example, in the Islamic world, the use of female slaves as concubines was entirely open and accepted, not only attracting literally no censure or criticism in the wider society or culture, but also receiving explicit prophetic sanction in the Quran. For this reason, in the Islamic world, female slaves tended to be in greater demand than males, and usually commanded a higher price.
In contrast, most slaves transported to the Americas were male, since males were more useful for hard, intensive agricultural labour and, in puritanical North America, sexual contact between slaveholder and slave was very much frowned upon, even though it certainly occurred. Thus, van den Berghe cynically observes:  

“Concubinage with slaves was somewhat more clandestine and hypocritical in the English and Dutch colonies than in the Spanish, Portuguese and French colonies where it was brazen, but there is no evidence that the actual incidence of interbreeding was any higher in the Catholic countries” (p132). 

Partial corroboration for this claim is provided by historian Eugene Genovese, who, in his book Roll, Jordan, Roll: The World the Slaves Made, reports that, in New Orleans slave markets:

“First-class blacksmiths were being sold for $2,500 and prime field hands for about $1,800, but a particularly beautiful girl or young woman might bring $5,000” (Roll, Jordan, Roll: p416).

[28] Actually, exploitation can still be an adaptive strategy, even in respect of close biological relatives. This depends on the precise relative gain and loss in fitness to both the exploiter (the slave owner) and his victim (the slave), and their coefficient of relatedness, in accordance with Hamilton’s rule. Thus, it is possible that a slaveholder’s genes may benefit more from continuing to exploit his slaves as slaves than by freeing them, even if the latter are also his kin. Possibly the best strategy will often be a compromise of, say, keeping your slave-kin in bondage, but treating them rather better than other non-related slaves, or freeing them after your death in your will. 
Of course, this is not to suggest that individual slaveholders consciously (or subconsciously) perform such a calculation, nor even that their actual behaviour is usually adaptive (see the Sahlins fallacy, discussed here). Slaveholding is likely an ‘environmental novelty’ to which we are yet to have evolved adaptive responses.
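Hamilton’s rule (Hamilton 1964), invoked above, can be stated compactly. The manumission example below is my own illustrative application of the rule, not a calculation performed by van den Berghe himself:

```latex
% Hamilton's rule: a costly act towards a relative is favoured by selection when
%
%     rb > c
%
% where r = coefficient of relatedness between actor and recipient,
%       b = fitness benefit to the recipient,
%       c = fitness cost to the actor.
\[
  rb > c
\]
% Applied to manumission: freeing a slave who is also one's own child (r = 1/2)
% is favoured only if the child's fitness gain from freedom exceeds twice the
% owner's fitness loss (b > 2c); for an unrelated slave (r close to 0), the
% condition essentially never holds, so continued exploitation remains the
% 'adaptive' strategy.
```

The intermediate compromises mentioned in the text (better treatment of slave-kin, or manumission by will, when the cost to the owner falls to near zero) correspond to cases where rb and c are of comparable magnitude.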

[29] Others suggest that Thomas Jefferson himself did not father any offspring with Sally Hemings and that the more likely father is Jefferson’s wayward younger brother Randolph, who would, of course, share the same Y chromosome as his elder brother. For present purposes, this is not especially important, since, either way, Hemings’s offspring would be blood relatives of Jefferson to some degree, hence likely influencing his decision to free them or permit them to escape.

[30] Quite how this destruction can be expected to have manifested itself is not spelt out by van den Berghe. Perhaps, with each passing generation, as slaves became more and more closely biologically related to their masters, more and more slaves would have been freed until there were simply no more left. Alternatively, perhaps, as slaves and slaveowners increasingly became biological kin to one another, the institution of slavery would gradually have become less oppressive and exploitative until ultimately it ceased to constitute true slavery at all. At any rate, in the Southern United States this (supposed) process was forestalled by the American Civil War and Emancipation Proclamation, and neither does it appear to have occurred in Latin America.  

[31] Another area of conflict between Marxism and Darwinism is the former’s assumption that somehow all conflict and exploitation will end in a posited future communist utopia. Curiously, although healthily cynical about exploitation under Soviet-style communism (p60), van den Berghe describes himself as an anarchist (van den Berghe 2005). However, anarchism seems even more hopelessly utopian than communism, given humanity’s innate sociality and desire to exploit reproductive competitors. In short, a Hobbesian state of nature is surely no one’s utopia (except perhaps Ragnar Redbeard’s). 

[32] The idea that there is “ambivalence in relations between black men and women in America” seems anecdotally plausible, given, for example, the delightfully misogynistic lyrics found in much African-American rap music. However, it is difficult to see how this could be a legacy of the plantation era, when everyone alive today is several generations removed from that era and living in a very different sexual and racial milieu. Today, black men do rather better in the mating marketplace than do black women, with black men being much more likely to marry non-black women than black women are to marry non-black men, suggesting that black men have a larger dating pool from which to choose (Sailer 1997; Fryer 2007).
Moreover, black men and women in America today are, of course, the descendants of both men and women. Therefore, even if black women did have a better time of it than black men in the plantation era, how would black male resentment be passed down the generations to black men today, especially given that most black men are today raised primarily by their mothers in single-parent homes and often have little or no contact with their fathers?

[33] Indeed, being perceived as attractive, or at least not as ugly, seems to be rather more important to most women than does being perceived as intelligent. Therefore, the question of race differences in attractiveness is seemingly almost as controversial as that of race differences in intelligence. This, then, leads to the delightfully sexist Sailer’s first law of female journalism, which posits that: 

“The most heartfelt articles by female journalists tend to be demands that social values be overturned in order that, Come the Revolution, the journalist herself will be considered hotter-looking.” 

[34] A popular alt-right meme has it that there are literally no white-on-black rapes. This is, of course, untrue, and reflects the misreading of a table in a US Department of Justice report that actually involved only a small sample. In fact, the government does not currently release data on the prevalence of interracial rape. Nevertheless, the US Department of Justice report (mis)cited by some white nationalists does indeed suggest that black-on-white rape is much more common than white-on-black rape in the contemporary USA, a conclusion corroborated by copious other data (e.g. Lebeau 1985).
Thus, in his book Paved with Good Intentions, Jared Taylor reports:

“In a 1974 study in Denver, 40 percent of all rapes were of whites by blacks, and not one case of white-on-black rape was found. In general, through the 1970s, black-on-white rape was at least ten times more common than white-on-black rape… In 1988 there were 9,406 cases of black-on-white rape and fewer than ten cases of white-on-black rape. Another researcher concludes that in 1989, blacks were three or four times more likely to commit rape than whites and that black men raped white women thirty times as often as white men raped black women” (Paved with Good Intentions: p93). 

Indeed, the authors of one recent textbook on criminology even claim that: 

“Some researchers have suggested, because of the frequency with which African Americans select white victims (about 55 percent of the time), it [rape] could be considered an interracial crime” (Criminology: A Global Perspective: p544). 

Similarly, in the US prison system, where male-male rape is endemic, such assaults disproportionately involve non-white assaults on white inmates, as discussed in the Human Rights Watch report, No Escape: Male Rape in US Prisons.

References

Brigandt (2001) The homeopathy of kin selection: an evaluation of van den Berghe’s sociobiological approach to ethnicity. Politics and the Life Sciences 20: 203-215. 
Feinman & Gill (1977) Sex differences in physical attractiveness preferences, Journal of Social Psychology 105(1): 43-52. 
Frost (2008) Sexual selection and human geographic variation. Special Issue: Proceedings of the ND Annual Meeting of the Northeastern Evolutionary Psychology Society. Journal of Social, Evolutionary, and Cultural Psychology, 2(4): 169-191 
Fryer (2007) Guess Who’s Been Coming to Dinner? Trends in Interracial Marriage over the 20th Century, Journal of Economic Perspectives 21(2), pp. 71-90 
Hannam (2021) The rise of the race fakers. Spiked-Online.com, 5 November. 
Hamilton (1964) The genetical evolution of social behaviour I and II, Journal of Theoretical Biology 7:1-16,17-52. 
Hood (2017) The privilege no one wants, American Renaissance, December 11.
Johnson (1986) Kin selection, socialization and patriotism. Politics and the Life Sciences 4(2): 127-154. 
Johnson (1987) In the Name of the Fatherland: An Analysis of Kin Term Usage in Patriotic Speech and Literature. International Political Science Review 8(2): 165-174.
Johnson, Ratwik and Sawyer (1987) The evocative significance of kin terms in patriotic speech pp157-174 in Reynolds, Falger and Vine (eds) The Sociobiology of Ethnocentrism: Evolutionary Dimensions of Xenophobia, Discrimination, Racism, and Nationalism (London: Croom Helm). 
Lebeau (1985) Rape and Racial Patterns. Journal of Offender Counseling Services Rehabilitation, 9(1- 2): 125-148 
Lewis (2011) Who is the fairest of them all? Race, attractiveness and skin color sexual dimorphism. Personality & Individual Differences 50(2): 159-162. 
Lewis (2012) A Facial Attractiveness Account of Gender Asymmetries in Interracial Marriage PLoS One. 2012; 7(2): e31703. 
Lind et al (2007) Elevated male European and female African contributions to the genomes of African American individuals. Human Genetics 120(5): 713-722. 
Macdonald (2001) An integrative evolutionary perspective on ethnicity. Politics & the Life Sciences 20(1): 67-8. 
Rushton (1998a). Genetic similarity theory, ethnocentrism, and group selection. In I. Eibl-Eibesfeldt & F. K. Salter (Eds.), Indoctrinability, Warfare, and Ideology: Evolutionary perspectives (pp. 369-388). Oxford: Berghahn Books. 
Rushton (1998b). Genetic similarity theory and the roots of ethnic conflict. Journal of Social, Political, and Economic Studies, 23, 477-486. 
Rushton (2005) Ethnic Nationalism, Evolutionary Psychology and Genetic Similarity Theory. Nations and Nationalism 11(4): 489-507. 
Sailer (1997) Is love colorblind? National Review, July 14. 
Sailer (2021) Do 48% of White Male College Applicants Lie About Their Race? Interesting, if It Replicates. Unz Review, October 21. 
Salmon (1998) The Evocative Nature of Kin Terminology in Political Rhetoric. Politics & the Life Sciences, 17(1): 51-57.   
Salter (2000) A Defense and Extension of Pierre van den Berghe’s Theory of Ethnic Nepotism. In James, P. and Goetze, D. (Eds.)  Evolutionary Theory and Ethnic Conflict (Praeger Studies on Ethnic and National Identities in Politics) (Westport, Connecticut: Greenwood Press). 
Salter (2002) Estimating Ethnic Genetic Interests: Is It Adaptive to Resist Replacement Migration? Population & Environment 24(2): 111–140. 
Salter (2008) Misunderstandings of Kin Selection and the Delay in Quantifying Ethnic Kinship, Mankind Quarterly 48(3): 311–344. 
Tooby & Cosmides (1989) Kin selection, genic selection and information dependent strategies Behavioral and Brain Sciences 12(3): 542-544 
Van den Berghe (2005) Review of On Genetic Interests: Family, Ethny and Humanity in the Age of Mass Migration by Frank Salter. Nations and Nationalism 11(1): 161-177. 
Van den Berghe & Frost (1986) Skin color preference, sexual dimorphism, and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9: 87-113.
Whitney G (1999) The Biological Reality of Race. American Renaissance, October 1999.

The ‘Means of Reproduction’ and the Ultimate Purpose of Political Power

Laura Betzig, Despotism and Differential Reproduction: A Darwinian View of History (New Brunswick: Aldine Transaction, 1983). 

Moulay Ismail Ibn Sharif, alias ‘Ismail the Bloodthirsty’, a late seventeenth- and early eighteenth-century Emperor of Morocco, is today little remembered, at least outside of his native Morocco. He is, however, in a strict Darwinian sense, possibly the most successful human ever to have lived. 

Ismail, you see, is said to have sired some 888 offspring. His Darwinian fitness therefore exceeded that of any other known person.[1]

Some have questioned whether this figure is realistic (Einon 1998). However, the best analyses suggest that, while the actual number of offspring fathered by Ismail may indeed be apocryphal, such a large progeny is indeed eminently plausible for a powerful ruler with access to a large harem of wives and/or concubines (Gould 2000; Oberzaucher & Grammer 2014).

Indeed, as Laura Betzig demonstrates in ‘Despotism and Differential Reproduction’, Ismail is exceptional only in degree.

Across diverse societies and cultures, and throughout human history, wherever individual males acquire great wealth and power, they convert this wealth and power into the ultimate currency of natural selection – namely reproductive success – by asserting and maintaining exclusive reproductive access to large harems of young female sex partners. 

A Sociobiological Theory of Human History 

Betzig begins her monograph by quoting a small part of a famous passage from the closing paragraphs of Charles Darwin’s seminal On the Origin of Species which she adopts as the epigraph to her preface. 

In this passage, the great Victorian naturalist tentatively extended his theory of natural selection to the question of human origins, a topic he conspicuously avoided in the preceding pages of his famous text. 

Yet, in this much-quoted passage, Darwin goes well beyond suggesting merely that his theory of evolution by natural selection might explain human origins in just the same way it explained the origin of other species. On the contrary, he also anticipated the rise of evolutionary psychology, writing of how: 

“Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation.” 

Yet this is not the part of this passage quoted by Betzig. Instead, she quotes the next sentence, where Darwin makes another prediction, no less prophetic, namely that: 

“Much light will be thrown on the origin of man and his history.” 

In this reference to “man and his history”, Darwin surely had in mind primarily, if not exclusively, the natural history and evolutionary history of our species.

Betzig, however, interprets Darwin both more broadly and more literally, and, in so doing, founded, and for several years remained the leading practitioner of, a new field – namely, Darwinian history.

This is the attempt to explain, in terms of sociobiology, evolutionary psychology and selfish gene theory, not only the psychology and behaviour of contemporary humans, but also the behaviour of people in past historical epochs.  

Her book length monograph, ‘Despotism and Differential Reproduction: A Darwinian View of History’ remains the best known and most important work in this field. 

The Historical and Ethnographic Record 

In making the case that, throughout history and across the world, males in positions of power have used this power so as to maximize their Darwinian fitness by securing exclusive reproductive access to large harems of fertile females, Betzig, presumably to avoid the charge of cherry picking, never actually even mentions Ismail the Bloodthirsty at any point in her monograph. 

Instead, Betzig uses ethnographic data taken from a random sample of cultures from across the world. Nevertheless, the patterns she uncovers are familiar and recurrent.

Powerful males command large harems of multiple fertile young females, to whom they assert, and defend, exclusive reproductive access. In this way, they convert their power into the ultimate currency of natural selection – namely, reproductive success or fitness.

Thus, citing and summarizing Betzig’s work, not only ‘Despotism and Differential Reproduction’, but also other works she has published on related topics, science writer Matt Ridley reports:

“[Of] the six independent ‘civilizations’ of early history – Babylon, Egypt, India, China, the Aztecs and the Incas… the Babylonian king Hammurabi had thousands of slave ‘wives’ at his command. The Egyptian pharaoh Akhenaten procured three hundred and seventeen concubines and ‘droves’ of consorts. The Aztec ruler Montezuma enjoyed four thousand concubines. The Indian emperor Udayama preserved sixteen thousand consorts in apartments guarded by eunuchs. The Chinese emperor Fei-ti had ten thousand women in his harem. The Inca… kept virgins on tap throughout the kingdom” (The Red Queen: p191-2; see Betzig 1993a).

In a contemporary context, I wonder whether the ostensibly ‘elite’ all-female bodyguard of Arab socialist dictator Colonel Gaddafi, his so-called ‘Amazonian Guard’ (aka ‘Revolutionary Nuns’), served a similar function.

Given the innate biological differences between the sexes, physical and psychological, women are unlikely to make good bodyguards, any more than they make effective soldiers in wartime, and, judging from photographs, Gaddafi’s elite bodyguard seem to have been chosen at least as much on account of their youth and beauty as on the basis of any martial prowess. Certainly they did little to prevent his execution by rebels in 2011.

Moreover, since his overthrow and execution, accusations of sexual abuse have inevitably surfaced, though how much credence we should give to these claims is debatable.[2]

Such vast harems as those monopolized by ancient Egyptian pharaohs, Chinese emperors and Babylonian kings seem, at first, wholly wasteful. This is surely more fertile females than even the horniest, healthiest and most virile of emperors could ever hope even to have sex with, let alone successfully impregnate, in a single lifetime. However, as Betzig acknowledges: 

“The number of women in such a harem may easily have prohibited the successful impregnation of each… but, their being kept from bearing children to others increased the monarch’s relative reproductive accomplishment” (p70). 

In other words, even if these rulers were unable to successfully impregnate every concubine in their harem, keeping them cloistered and secluded nevertheless prevented other males from impregnating them, which increased the relative representation of the ruler’s genes in subsequent generations.
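The logic here concerns relative, not absolute, fitness. A minimal sketch (the symbols are mine, introduced purely for exposition):

```latex
% Let k be the number of offspring the ruler sires, and K the total number of
% offspring born into the population in that generation. The ruler's share of
% the next generation's gene pool is:
\[
  w_{\mathrm{rel}} = \frac{k}{K}
\]
% Secluding women whom the ruler cannot himself impregnate leaves k unchanged,
% but removes those women's potential offspring by other men from K. Since K
% shrinks while k is constant, w_rel rises: the ruler's genes form a larger
% fraction of subsequent generations even though his absolute reproductive
% output has not increased at all.
```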

To this end, extensive efforts also were made to ensure the chastity of these women. Thus, even in ancient times, Betzig reports: 

“Evidence of claustration, in the form of a walled interior courtyard, exists for Babylonian Mai; and claustration in second story rooms with latticed, narrow windows is mentioned in the Old Testament” (p79). 

Indeed, Betzig even proposes an alternative explanation for early evidence of defensive fortifications:

“Elaborate fortifications erected for the purposes of defense may [also] have served the dual (identical?) function of protecting the chastity of women of the harem” (p79). 

Indeed, as Betzig alludes to in her parenthesis, this second function is arguably not entirely separate to the first. 

After all, if all male-male competition is ultimately based on competition over access to fertile females, then this surely very much includes warfare. As Napoleon Chagnon emphasizes in his studies of warfare and intergroup raiding among the Yąnomamö Indians of the Amazonian rainforest, warfare among primitive peoples tends to be predicated on the capture of fertile females from among enemy groups.[3]

Therefore, even fortifications erected for the purposes of military defence, ultimately serve the evolutionary function of maintaining exclusive reproductive access to the fertile females contained therein. 

Other methods of ensuring the chastity of concubines, and thus the paternity certainty of emperors, included the use of eunuchs as harem guards. Indeed, this seems to have been the original reason eunuchs were castrated and later became a key element in palace retinues (see The Evolution of Human Sociality: p45). 

Chastity belts, however, ostensibly invented for the wives of crusading knights while the latter were away on crusade, seem to be a modern myth.

The movements of harem concubines were also highly restricted. Thus, if permitted to venture beyond their cloisters, they were invariably escorted. 

For example in the African Kingdom of Dahomey, Betzig reports: 

“The king’s wives’… approach was always signalled by the ringing of a bell by the women servant or slave who invariably preceded them [and] the moment the bell is heard all persons, whether male or female, turn their backs, but all the males must retire to a certain distance” (p79). 

Similarly, inmates of the Houses of Virgins maintained by Inca rulers:

“Lived in perpetual seclusion to the end of their lives… and were not permitted to converse, or have intercourse with, or to see any man, nor any woman who was not one of themselves” (p81-2). 

Feminists tend to view such practices as evidence of the supposed oppression of women.

However, from a sociobiological or evolutionary psychological perspective, the primary victims of such practices were, not the harem inmates themselves, but rather the lower-status men condemned to celibacy and ‘inceldom’ as a consequence of royal dynasties monopolizing sexual access to almost all the fertile females in the society in question. 

The encloistered women might have been deprived of their freedom of movement – but many lower-status men in the same societies were deprived of almost all access to fertile female sex partners, and hence any possibility of passing on their genes, the ultimate evolutionary function of any biological organism. 

In contrast, the concubines secluded in royal harems were not only able to reproduce, but also lived lives of relative comfort, if not, in some cases, outright luxury, often being: 

“Equipped with their own household and servants, and probably lived reasonably comfortable lives in most respects, except… for a lack of liberal masculine company” (p80). 

Indeed, seclusion, far from evidencing oppression, was primarily predicated on safety and protection. In short, to be imprisoned is not so bad when one is imprisoned in a palace! 

Finally, methods were also sometimes employed specifically to enhance the fertility of the women so confined. Thus, Ridley reports: 

“Wet nurses, who allow women to resume ovulation by cutting short their breast-feeding periods, date from at least the code of Hammurabi in the eighteenth century BC… Tang dynasty emperors of China kept careful records of dates of menstruation and conception in the harem so as to be sure to copulate only with the most fertile concubines… [and] Chinese emperors were also taught to conserve their semen so as to keep up their quota of two women a day” (The Red Queen: p192). 

Corroborating Betzig’s conclusions but subsequent to the publication of her work, researchers have now uncovered genetic evidence of the fecundity of one particular powerful ruler (or ruling male lineage) – namely, a Y chromosome haplogroup, found in 8% of males across a large region of Asia and in one in two hundred males across the whole world – the features of which are consistent with its having spread across the region thanks to the exceptional prolificity of Genghis Khan, his male siblings and descendants (Zerjal et al 2003). 

Female Rulers? 

In contrast, limited to only one pregnancy every nine months, a woman, howsoever rich and powerful, can necessarily bear far fewer offspring than can be sired by a man enjoying equivalent wealth, power and access to multiple fertile sex partners, even with the aid of evolutionary novelties like wet nurses, bottle-feeding and IVF treatment. 

As a female analogue of Ismail the Bloodthirsty, it is sometimes claimed that a Russian woman gave birth to 69 offspring in the eighteenth century. She was also supposedly, and very much unlike Ismail the Bloodthirsty, not a powerful and polygamous elite ruler, but rather a humble, monogamously married peasant woman. 

However, this much smaller figure is both physiologically implausible and poorly sourced. Indeed, even her name is unknown, and she is referred to only as the wife of Feodor Vassilyev. It is, in short, almost certainly an urban myth.[4]

Feminists have argued that the overrepresentation of males in positions of power is a consequence of such mysterious and non-existent phenomena as patriarchy or male dominance or the oppression of women.

In reality, however, it seems that, for women, seeking positions of power and wealth simply doesn’t have the same reproductive payoff as for men – because, no matter how many men a woman copulates with, she can usually only gestate, and nurse, one (or, in the case of twins or triplets, occasionally two or three) offspring at a time. 

This is the essence of Bateman’s Principle, later formalized by Robert Trivers as differential parental investment theory (Bateman 1948; Trivers 1972).
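In modern behavioural-ecology terms, this asymmetry is often summarized as a sex difference in the ‘Bateman gradient’, the slope of reproductive success on number of mates. A rough schematic rendering (the notation is mine, not Bateman’s or Trivers’s):

```latex
% Bateman gradient: let beta be the slope of reproductive success (RS)
% regressed on mate number M.
%
%   RS_male   rises steeply with M: each additional mate can add offspring.
%   RS_female plateaus after the first mate: her output is capped by
%             gestation and lactation, not by mate number.
\[
  \beta_{\text{male}} \gg \beta_{\text{female}} \approx 0
\]
% Hence selection favours competition for mates (and for the wealth and
% power that secure them) far more strongly in males than in females.
```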

This, then, in Darwinian terms, explains why women are less likely to assume positions of great political power.

It is not necessarily that they wouldn’t want political power if it were handed to them, but rather that they are less willing to make the necessary effort, or take the necessary risks to attain power.

Indeed, among women, there may even be a fitness penalty associated with assuming political power or acquiring a high-status job. Such jobs tend to be not only high status, but also high stress, and not easily combined with motherhood.

Indeed, even among baboons, it has been found that high-ranking females actually suffer reduced fertility and higher rates of miscarriage, possibly on account of hormonal factors (Packer et al 1995).

Kingsley Browne, in his excellent book, Biology at Work: Rethinking Sexual Equality (which I have reviewed here), noting that female executives also tend to have fewer children, tentatively proposes that a similar mechanism may be at work among humans:

“Women who succeed in business tend to be relatively high testosterone, which can result in lower female fertility, whether because of ovulatory irregularities or reduced interest in having children. Thus, rather than the high-powered career being responsible for the high rate of childlessness, it may be that high testosterone levels are responsible for both” (Biology at Work: p124).

Therefore, it may well be to a woman’s advantage to marry a male with a high-status, powerful job, but not to do such a job herself. That way, she obtains the same wealth and status as her husband, and the same wealth and status for her offspring, but without the hard work it takes to achieve this status.

What is certainly true is that social status and political power does not have the same potential reproductive payoff for women as it did for, say, Ismail the Bloodthirsty.

This calculus, then, rather than the supposed oppression of women, explains, not only the cross-culturally universal over-representation of men in positions of power, but also much of the so-called gender pay gap in our own societies (see Kingsley Browne’s Biology at Work: reviewed here). 

Perhaps the closest women can get to producing such a vast progeny is to manoeuvre their sons into having the opportunity to do so.

This might explain why such historical figures as Agrippina the Younger, the mother of Nero, and Olympias, mother of Alexander the Great, are reported as having been so active, and instrumental, in securing the succession on behalf of their sons. 

The Purpose of Political Power? 

The notion that powerful rulers often use their power to gain access to multiple nubile sex partners is, of course, hardly original to sociobiology. On the contrary, it accords with popular cynicism regarding men who occupy positions of power. 

What a Darwinian perspective adds is the ultimate explanation of why political leaders do so – and why female political rulers, even when they do assume power, usually adopt a very different reproductive strategy. 

Moreover, a Darwinian perspective goes beyond popular cynicism in suggesting that access to multiple sex partners is not merely yet another perk of power. On the contrary, it is the ultimate purpose of power and reason why men evolved to seek power in the first place. 

As Betzig herself concludes: 

“Political power in itself may be explained, at least in part, as providing a position from which to gain reproductively” (p85).[5]

After all, from a Darwinian perspective, political power in and of itself has no intrinsic value. It is only if power can be used in such a way as to maximize a person’s reproductive success or fitness that it has evolutionary value. 

Thus, as Steven Pinker has observed, the recurrent theme in science fiction film and literature of robots rebelling against humans to take over the world and overthrow humanity is fundamentally mistaken. Robots would have no reason to rebel against humans, simply because they would not be programmed to want to take over the world and overthrow humanity in the first place. 

On the other hand, humans have been programmed to seek wealth and power – and to resist oppression and exploitation. This is why revolutions are a recurrent feature of human societies and history.

But we have been programmed, not by a programmer or god-like creator, but rather by natural selection.

We have been programmed by natural selection to seek wealth and power only because, throughout human evolutionary history, those among our ancestors who achieved political power tended, like Ismail the Bloodthirsty, also to achieve high levels of reproductive success as a consequence. 

Darwin versus Marx 

In order to test the predictive power of her theory, Betzig contrasts the predictions made by sociobiological theory with those of a rival theory – namely, Marxism.

The comparison is apposite since, despite repeated falsification at the hands of both economists and of history, Marxism remains, among both social scientists and laypeople, perhaps the dominant paradigm when it comes to explaining social structure, hierarchy and exploitation in human societies.  

Certainly, it has proven far more popular than any approach to understanding human dominance hierarchies grounded in ethology, sociobiology, evolutionary psychology or selfish gene theory.

There are, it bears emphasizing, several similarities between the two approaches. For one thing, each theory traces its origins ultimately to a nineteenth-century Victorian founder resident in Britain at the time he authored his key works, namely Charles Darwin and Karl Marx respectively.  

More importantly, there are also substantive similarities in the content and predictions of both these alternative theoretical paradigms. 

In particular, each is highly cynical in its conclusions. Indeed, at first glance, Marxist theory appears superficially almost as cynical as Darwinian theory. 

Thus, like Betzig, Marx regarded most societies in existence throughout history as exploitative – and as designed to serve the interests, neither of society in general nor of the population of that society as a whole, but rather of the dominant class within that society alone – namely, in the case of capitalism, the bourgeoisie or capitalist employers. 

However, sociobiological and Marxist theory depart in at least three crucial respects. 

First, Marxists propose that exploitation will be absent from the communist utopia they anticipate arising in the future.

Second, Marxists claim that such exploitation was also absent among hunter-gatherer groups, where so-called primitive communism supposedly prevailed.

Thus, the Marxist, so cynical with regard to exploitation and oppression in capitalist (and feudal) society, suddenly turns hopelessly naïve and innocent when it comes to envisaging future unrealistic communist utopias, and when contemplating ‘noble savages’ in their putative ‘Eden before the fall’.

Unfortunately, however, in her critique of Marxism, Betzig herself nevertheless remains somewhat confused in respect of this key issue. 

On the one hand, she rightly dismisses primitive communism as a Marxist myth. Thus, she demonstrates and repeatedly emphasizes that:

“Men accrue reproductive rights to wives of varying numbers and fertility in every human society” (p20).

Therefore, Betzig, contrary to the tenets of Marxism, concludes:

“Unequal access to the basic resource which perpetuates life, members of the opposite sex, is a condition in [even] the simplest societies” (p32; see also Chagnon 1979).

Neither is universal human inequality limited only to access to fertile females. On the contrary, Betzig observes:

“Some form of exploitation has been in evidence in even the smallest societies… Conflicts of interest in all societies are resolved with a consistent bias in favor of men with greater power” (p67).

On the other hand, however, Betzig takes a wrong turn in refusing to rule out the possibility of true communism somehow arising in the future. Thus, perhaps in a misguided effort to placate the many leftist opponents of sociobiology in academia, she writes:

“Darwinism… [does not] preclude the possibility of future conditions under which individual interests might become common interests: under which individual welfare might best be served by serving the welfare of society… [nor] preclude… the possibility of the evolution of socialism” (p68). 

This, however, seems obviously impossible. 

After all, we have evolved to seek to maximize the representation of our own genes in subsequent generations at the expense of those of other individuals. Only a eugenic reengineering of human nature itself could ever change this. 

Thus, as Donald Symons emphasized in his seminal The Evolution of Human Sexuality (which I have reviewed here), reproductive competition is inevitable – because, whereas there is sometimes sufficient food that everyone is satiated and competition for food is therefore unnecessary and counterproductive, reproductive success is always relative, and therefore competition over women is universal. 

Thus, Betzig quotes Confucius as observing:

“Disorder does not come from heaven, but is brought about by women” (p26). 

Indeed, Betzig herself elsewhere recognizes this key point, namely the relativity of reproductive success, when she observes, in a passage quoted above, that a powerful monarch benefits from sequestering huge numbers of fertile females in his harem because, even if it is unfeasible that he would ever successfully impregnate all of them himself, he nevertheless thereby prevents other males from impregnating them, and thereby increases the relative representation of his own genes in subsequent generations (p70). 

It therefore seems inconceivable that social engineers, let alone pure happenstance, could ever engineer a society in which individual interests were identical to societal interests, other than a society of identical twins or through the eugenic reengineering of human nature itself (see Peter Singer’s A Darwinian Left, which I have reviewed here).[6]

Marx and the Means of Reproduction

The third and perhaps most important conflict between the Darwinist and Marxist perspectives concerns what Betzig terms: 

“The relative emphasis on production and reproduction” (p67).

Whereas Marxists view control of what they term the means of production as the ultimate cause of societal conflict, socioeconomic status and exploitation, for Darwinians conflict and exploitation instead focus on control over what we might term the means of reproduction – in other words fertile females, their wombs, ova and vaginas. 

Thus, Betzig observes: 

“Marxism makes no explicit prediction that exploitation should coincide with reproduction” (p68). 

In other words, Marxist theory is silent on the crucial issue of whether high-status individuals will necessarily convert their political and economic power into the ultimate currency of Darwinian selection – namely, reproductive success.

On this view, powerful male rulers might just as well remain celibate as assert exclusive reproductive access to large harems of young fertile wives and concubines. 

In contrast, for Darwinians, the effort to maximize one’s reproductive success is the very purpose, and ultimate end, of all political power. 

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here): 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Thus, production is, from a sociobiological perspective, just another means of gaining the resources necessary for reproduction. 

On the other hand, reproduction is, from a biological perspective, the ultimate purpose of life. 

Therefore, it seems that, for all his ostensible radicalism, Karl Marx was, in his emphasis on economics rather than sex, just another nineteenth-century Victorian sexual prude.

The Polygyny Threshold Model Applied to Humans? 

One way of conceptualizing the tendency of powerful males to attract (or perhaps commandeer) multiple wives and concubines is the polygyny threshold model.

This way of conceptualizing male and female reproductive and ecological competition was first formulated by ornithologist-ecologist Gordon Orians in order to model the mating systems of passerine birds (Orians 1969). 

Here, males practice so-called resource defence polygyny – in other words, they defend territories containing valuable resources (e.g. food, nesting sites) necessary for successful reproduction and provisioning of offspring. 

Females then distribute themselves between males in accordance with size and quality of male territories. 

On this view, if the territory of one male is twice as resource-abundant as that of another, he would, all else being equal, attract twice as many mates; if it is three times as resource-abundant, he would attract three times as many mates; etc. 

The result is rough parity in resource-holdings and reproductive success among females, but often large disparities among males. 
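The proportional logic described above can be sketched numerically. The following is a minimal illustration in Python, assuming an idealized distribution in which females settle among males in exact proportion to each male's resource holdings; the male labels and figures are purely hypothetical, not drawn from Orians (1969):

```python
# Minimal sketch of the polygyny threshold model's proportional
# prediction: females distribute themselves across males in proportion
# to each male's resource holdings, equalizing resources per female.

def distribute_mates(resources, n_females):
    """Allocate n_females among males in proportion to their resources."""
    total = sum(resources.values())
    return {male: n_females * r / total for male, r in resources.items()}

# Three hypothetical males whose territories differ in quality 1 : 2 : 3
territories = {"A": 1.0, "B": 2.0, "C": 3.0}
mates = distribute_mates(territories, 12)
print(mates)  # {'A': 2.0, 'B': 4.0, 'C': 6.0}

# Resources per female are thereby equalized (here, 0.5 each),
# while male mate numbers diverge widely -- parity among females,
# large disparities among males.
```

Note that the model predicts rough parity in per-female resources precisely because any female who could do better by joining a different male's territory would be expected to move there.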

Applying the Polygyny Threshold Model to Modern America

Thus, applying the polygyny threshold model to humans, and rather simplistically substituting wealth for territory size and quality, we might predict that, if Jeff Bezos is a hundred thousand times richer than Joe Schmo, and Joe has only one wife, then Jeff should have around 100,000 wives.

But, of course, Jeff Bezos does not have 100,000 wives, nor even a mere 100,000 concubines. 

Instead, he has only one solitary meagre ex-wife, and she, even when married to him, was not, to the best of my knowledge, ever guarded by any eunuchs – though perhaps he would have been better off if she had been, since they might have prevented her from divorcing him and taking an enormous share of his wealth with her in the ensuing divorce settlement.[7]

Indeed, the sole exception seems to be the magnificent John McAfee, founder of the first commercially available antivirus software, who, after making his millions, moved to a developing country where he obtained for himself a harem of teenage concubines – with whom he allegedly never actually had sex, instead preferring to have them defecate into his mouth while he sat in a hammock, but with whom he is nevertheless reported to have somehow fathered some forty-seven children. Otherwise, most modern millionaires, and billionaires, despite their immense wealth and the reproductive opportunities it offers, seemingly live lives of stultifyingly bland bourgeois respectability.

The same is also true of contemporary political leaders. 

Indeed, if any contemporary western political leader does attempt to practice polygyny, even on a comparatively modest scale, then, if discovered, a so-called sex scandal almost invariably results. 

Yet, viewed in historical perspective, the much-publicized marital infidelities of, say, Bill Clinton, though they may have outraged the sensibilities of the mass of monogamously-married Middle American morons, positively pale into insignificance besides the reproductive achievements of someone like, say, Ismail the Bloodthirsty

Indeed, Clinton’s infidelities don’t even pack much of a punch beside those of a politician from the same nation and just a generation removed, namely John F Kennedy – whose achievements in the political sphere are vastly overrated on account of his early death, but whose achievements in the bedroom, while scarcely matching those of Ismail the Bloodthirsty or the Aztec emperors, certainly put the current generation of American politicians to shame. 

Why, then, does the contemporary west represent such a glaring exception to the general pattern of elite polygyny that Betzig has so successfully documented throughout so much of the rest of the world, and throughout so much of history? And what has become of the henpecked geldings who pass for politicians in the contemporary era? 

Monogamy as Male Compromise? 

According to Betzig, the moronic mass media moral panic that invariably accompanies sexual indiscretions on the part of contemporary Western political leaders and other public figures is no accident. Rather, it is exactly what her theory predicts. 

According to Betzig, the institution of monogamy as it operates in Western democracies represents a compromise between low-status and high-status males. 

According to the terms of this compromise, high-status males agree to forgo polygyny in exchange for the cooperation of low-status males in participating in the complexly interdependent economic systems of modern western polities (p105) – or, in biologist Richard Alexander’s alternative formulation, in exchange for serving as necessary cannon-fodder in wars (p104).[8]

Thus, whereas, under polygyny, there are never enough females to go around, under monogamy, at least assuming a roughly equal sex ratio (i.e. roughly equal numbers of men and women), virtually all males are capable of attracting a wife, howsoever physically repugnant, ugly and just plain unpleasant they may be.

This is important, since it means that all men, even the relatively poor and powerless, nevertheless have a reproductive stake in society. This, then, in evolutionary terms, provides them with an incentive both:

1) To participate in the economy to support and thereby provide for their wife and family; and

2) To defend these institutions in wartime, if necessary with their lives.

The institution of monogamy has therefore been viewed as a key factor, if not the key factor, in both the economic and military ascendency of the West (see Scheidel 2008). 

Similarly, it has recently been argued that the increasing rates of non-participation of young males in the economy and workforce (i.e. the so-called ‘NEET’ phenomenon) are a direct consequence of the reduction in the reproductive opportunities available to young males (Binder 2021).[9]

On this view, then, the media scandal and hysteria that invariably accompanies sexual infidelities by elected politicians, or constitutional monarchs, reflects outrage that the terms of this implicit agreement have been breached. 

This idea was anticipated by Irish playwright and socialist George Bernard Shaw, who observed in ‘Maxims for Revolutionists’, appended to his play Man and Superman:

“Polygyny, when tried under modern democratic conditions, as by the Mormons, is wrecked by the revolt of the mass of inferior men who are condemned to celibacy by it” (Shaw 1903). 

‘Socially Imposed Monogamy’?

Consistent with this theory of socially imposed monogamy, it is indeed the case that, in all Western democratic polities, polygyny is unlawful, and bigamy a crime. 

Yet these laws are seemingly in conflict with contemporary western liberal democratic principles of tolerance and inclusivity, especially in respect of ‘alternative lifestyles’ and ‘non-traditional relationships’.

Thus, for example, we have recently witnessed a successful campaign for the legalization of gay marriage in most western jurisdictions. However, strangely, polygynous marriage seemingly remains anathema – despite the fact that most cultures across the world and throughout history have permitted polygynous marriage, whereas few if any have ever accorded any state recognition to homosexual unions.

Indeed, strangely, whereas the legalization of gay marriage was widely perceived as ‘progressive’, polygyny is associated, not with sexual liberation, but rather with highly traditional and sexually repressive groups such as Mormons and Muslims.[10]

Polygynous marriage was also, rather strangely, associated with the supposed oppression of women in traditional societies such as under Islam.

However, most women actually do better, at least in purely economic terms, under polygyny than under monogamy, at least in highly stratified societies with large differences in resource-holdings as between males. 

Thus, if, as we have seen, Jeff Bezos is 100,000 times richer than Joe Schmo, then a woman is financially better off becoming the second wife, or the tenth wife (or even the 99,999th wife!), of Jeff Bezos rather than the first wife of poor Joe. 
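The arithmetic behind this claim can be made explicit. A back-of-envelope sketch in Python, where the wealth figures and the assumption of an equal split among co-wives are purely illustrative:

```python
# Hypothetical figures: "Jeff" is 100,000 times richer than "Joe".
# A woman maximizing the resources available to her should prefer
# becoming Jeff's n-th wife whenever his wealth divided by n still
# exceeds Joe's entire wealth.

JOE_WEALTH = 1.0
JEFF_WEALTH = 100_000.0

def share_per_wife(wealth, n_wives):
    """Resources per wife, assuming wealth is split equally among co-wives."""
    return wealth / n_wives

# Even as the 10th wife -- or indeed the 99,999th -- a woman's share
# of Jeff's wealth still exceeds what she would get as Joe's only wife.
print(share_per_wife(JEFF_WEALTH, 10))                   # 10000.0
print(share_per_wife(JEFF_WEALTH, 99_999) > JOE_WEALTH)  # True
```

Of course, this assumes resources really are divided evenly among co-wives, which, as discussed below, ignores non-divisible or scarcer resources such as a husband's time.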

Moreover, women also have another incentive to prefer Jeff to Joe. 

If she is impregnated by a polygynous male like Jeff, then her male descendants may inherit the traits that facilitated their father’s wealth, power and polygyny, and hence become similarly reproductively successful themselves, aiding the spread of the woman’s own genes in subsequent generations. 

Biologists call this good genes sexual selection or, more catchily, the sexy son hypothesis.

Once again, however, George Bernard Shaw beat them to it when he observed in the same 1903 essay quoted above: 

Maternal instinct leads a woman to prefer a tenth share in a first rate man to the exclusive possession of a third rate one” (Shaw 1903). 

Thus, Robert Wright concludes: 

In sheerly Darwinian terms, most men are probably better off in a monogamous system, and most women worse off” (The Moral Animal: p96). 

Thus, women generally should welcome polygyny, while the only people opposed to polygyny should be: 

1) The women currently married to men like Jeff Bezos, and greedily unwilling to share their resource-abundant ‘alpha-male’ providers with a whole hundred-fold harem of co-wives and concubines; and

2) A glut of horny sexually-frustrated bachelor-‘incels’ terminally condemned to celibacy, bachelorhood and inceldom by promiscuous lotharios like Jeff Bezos and Ismail the Bloodthirsty greedily hogging all the hot chicks for themselves.

Who Opposes Polygyny, and Why? 

However, in my experience, the people who most vociferously and puritanically object to philandering male politicians are not low-status men, but rather women. 

Moreover, such women typically affect concern on behalf, not of the male bachelors and ‘incels’ supposedly indirectly condemned to celibacy by such behaviours, but rather of the wives of such politicians – though the latter are the chief beneficiaries of monogamy, while these other women, precluded from signing up as second or third wives to alpha-male providers, are themselves, at least in theory, among the main losers. 

This suggests that the ‘male compromise theory’ of socially-imposed monogamy is not the whole story. 

Perhaps then, although women benefit in purely financial terms under polygyny, they do not do so well in fitness terms. 

Thus, one study found that, whereas polygynous males (unsurprisingly) had more offspring than monogamously-mated males, they (perhaps also unsurprisingly) had fewer offspring per wife. This suggests that, while polygynously-married males benefit from polygyny, their wives incur a fitness penalty for having to share their husband (Strassman 2000). 

This probably reflects the fact that even male reproductive capacity is limited, as, notwithstanding the Coolidge effect (which has, to my knowledge, yet to be demonstrated in humans), males can only manage a certain number of orgasms per day. 

Women’s distaste for polygynous unions may also reflect the fact that even prodigiously wealthy males will inevitably have a limited supply of one particular resource – namely, time – and time spent with offspring by a loving biological father may be an important determinant of offspring success, which paid child-minders, and stepfathers, lacking a direct genetic stake in offspring, are unable to perfectly replicate.[11]

Thus, if Jeff Bezos were able to attract for himself the 100,000 wives that the polygyny threshold model suggests is his due, then, even if he were capable of providing each woman with the two point four children that is her own due, it is doubtful he would have enough time on his hands to spend much ‘quality time’ with each of his 240,000 offspring – just as one doubts Ismail the Bloodthirsty was himself an attentive father to his own comparatively modest 888. 

Thus, one suspects that, contrary to the polygyny threshold model, polygyny is not always entirely a matter of female choice (Sanderson 2001).

On the contrary, many of the women sequestered into the harems of rulers like Ismail the Bloodthirsty likely had little say in the matter. 

The Central Theoretical Problem of Human Sociobiology’ 

Yet, if this goes some way towards explaining the apparent paradox of socially imposed monogamy, there is, today, an even greater paradox with which we must wrestle – namely, why, in contemporary western societies, is there apparently an inverse correlation between wealth and number of offspring?

After all, from a sociobiological or evolutionary psychological perspective, this represents something of a paradox. 

If, as we have seen, the very purpose of wealth and power (from a sociobiological perspective) is to convert these advantages into the ultimate currency of natural selection, namely reproductive success, then why are the wealthy so spectacularly failing to do so in the contemporary west?[12]

Moreover, if status is not conducive to high reproductive success, then why have humans evolved to seek high-status in the first place? 

This anomaly has been memorably termed ‘the central theoretical problem of human sociobiology’ in a paper by University of Pennsylvania demographer and eugenicist Daniel Vining (Vining 1986). 

Socially imposed monogamy can only go some way towards explaining this anomaly. Thus, in previous centuries, even under monogamy, wealthier families still produced more surviving offspring, if only because their greater wealth enabled them to successfully rear and feed multiple successive offspring to adulthood. In contrast, for the poor, high rates of infant mortality were the order of the day. 

Yet, in the contemporary west, it seems that the people who have the most children and hence the highest fitness in the strict Darwinian sense, are, at least according to popular stereotype, single mothers on government welfare. 

De Facto’ Polygyny 

Various solutions have been proposed to this apparent paradox. A couple of these amount to claiming that the west is not really monogamous at all, and that, once this is factored in, then, at least among males, higher-status men do indeed have greater numbers of offspring than lower-status men. 

One suggestion along these lines is that perhaps wealthy males sire additional offspring whose paternity is misassigned, via extra-marital liaisons (Betzig 1993b). 

However, despite some sensationalized claims, rates of misassigned paternity are actually quite low (Khan 2010; Gilding 2005; Bellis et al 2005). 

If it is lower-class women who are giving birth to most of the offspring, then it is probably mostly males of their own socioeconomic status who are responsible for impregnating them, if only because it is the latter with whom they have the most social contact. 

Perhaps a more plausible suggestion is that wealthy high-status males are able to practice a form of disguised polygyny through repeated remarriage. 

Thus, wealthy men are sometimes criticized for divorcing their first wives to marry much younger second- and sometimes even third- and fourth-wives. In this way, they manage to monopolize the peak reproductive years of multiple successive young women. 

This is true, for example, of recent American President Donald Trump – the ultimate American alpha male – who has himself married three women, each one younger than her predecessor

Thus, science journalist Robert Wright contends: 

“The United States is no longer a nation of institutionalized monogamy. It is a nation of serial monogamy. And serial monogamy in some ways amounts to polygyny” (The Moral Animal: p101). 

This, then, is not so much ‘serial monogamy’ as it is ‘sequential’ or ‘non-concurrent’ polygyny. 

Evolutionary Novelties

Another suggestion is that evolutionary novelties – i.e. recently developed technologies such as contraception – have disrupted the usual association between status and fertility. 

On this view, natural selection has simply not yet had sufficient time (or, rather, sufficient generations) over which to mold our psychology and behaviour in such a way as to cause us to use these technologies in an adaptive manner – i.e. in order to maximize, not restrict, our reproductive success. 

An obvious candidate here is safe and effective contraception, which, while actually somewhat older than most people imagine, nevertheless became widely available to the population at large only over the course of the past century, which is surely not enough generations for us to have become evolutionarily adapted to its use.  

Thus, a couple of studies have found that, while wealthy high-status males may not father more offspring, they do have more sex with a greater number of partners – i.e. behaviours that would have resulted in more offspring in ancestral environments prior to the widespread availability of contraception (Pérusse 1993; Kanazawa 2003). 

This implies that high-status males (or their partners) use contraception either more often, or more effectively, than low-status males (or their partners), probably because of their greater intelligence and self-control, namely the very traits that enabled them to achieve high socioeconomic status in the first place (Kanazawa 2005). 

Another evolutionary novelty that may disrupt the usual association between social status and number of surviving offspring is the welfare system

Welfare payments to single mothers undoubtedly help these families raise to adulthood offspring who would otherwise perish in infancy. 

In addition, while it is highly controversial to suggest that welfare payments actually give single mothers a financial incentive to bear additional offspring, such payments surely, at the very least, reduce the financial disincentives otherwise associated with doing so, and hence probably increase the number of offspring these women choose to have in the first place. 

Therefore, given that the desire for offspring is probably innate, women would rationally respond by having more children.[13]

Feminist ideology also encourages women in particular to postpone childbearing in favour of careers. Moreover, it is probably higher-status females who are more exposed to feminist ideology, especially in universities, where feminist ideology is thoroughly entrenched and widely proselytized.

In contrast, lower-status women are not only less exposed to feminist ideology encouraging them to delay motherhood in favour of career, but also likely have fewer appealing careers available to them in the first place. 

Finally, even laws against bigamy and polygyny might be conceptualized as an evolutionary novelty that disrupts the usual association between status and fertility. 

However, whereas technological innovations such as effective contraception were certainly not available until recent times, ideological constructs and religious teachings – including ideas such as feminism, prohibitions on polygyny, and the socialist ideology that motivated the creation of the welfare state – have existed ever since we evolved the capacity to create such constructs (i.e. since we became fully human). 

Therefore, one would expect that humans would have evolved resistance to ideological and religious teachings that go against their genetic interests. Otherwise, we would be vulnerable to indoctrination (and hence exploitation) at the hands of third parties. 

Dysgenics? 

Finally, it must be noted that these issues are not of purely academic interest. 

On the contrary, since socioeconomic status correlates with both intelligence and personality traits such as conscientiousness, and these traits are, in turn, substantially heritable, and moreover determine, not only individual wealth and prosperity, but also, at the aggregate level, the wealth and prosperity of nations, the question of who has the children is surely of central concern to the future of society, civilization and the world. 

In short, what is at stake is the very genetic posterity that we bequeath to future generations. It is simply too important a matter to be delegated to the capricious and irrational decision-making of individual women. 

__________________________

Endnotes

[1] Actually, the precise number of offspring Ismail fathered is unclear. The figure I have quoted in the main body of the text comes from various works on evolutionary psychology (e.g. Cartwright, Evolution and Human Behaviour: p133-4; Wright, The Moral Animal: p247). However, another earlier work on human sociobiology, David Barash’s The Whisperings Within, gives an even higher figure, of “1,056 offspring” (The Whisperings Within: p47). Meanwhile, an article produced by the Guinness Book of Records gives a figure of at least 342 daughters and 700 sons, while a scientific paper by Elisabeth Oberzaucher and Karl Grammer gives a still higher figure of 1171 offspring in total. The precise figure seems to be unknown, and all such figures may be apocryphal. Nevertheless, the general point – namely, that a powerful male with access to a large harem and multiple wives and concubines is capable of fathering many offspring – is surely correct.

[2] Thus, it is important to emphasise that sexual abuse allegations should certainly not automatically be accepted as credible, given the prevalence of false rape allegations, and indeed their incentivization, especially in this age of ‘me too’ hysteria and associated witch-hunts. Indeed, western mainstream media is likely to be especially credulous with respect to allegations concerning a dictator whom it, and the political establishment it serves, had long reviled and demonized.
Moreover, although, as noted above, given the innate psychological and physiological differences between the sexes, women are unlikely to be effective as conventional bodyguards, any more than they are effective as soldiers in wartime, it has nevertheless been suggested that they may have provided a very different form of protection to the Libyan dictator – namely, as a highly effective ‘human shield’.
On this view, under the pretence of feminism, Gaddafi may actually have been shrewdly taking advantage of misguided male chivalry and female privilege, not unreasonably surmising that any potential assassins and insurgents would almost certainly be male, and hence chivalrous, paternalistic and protective towards women – especially since these assassins were also likely to be conservative Muslims, who formed the main bulk of the domestic opposition to his regime, and the deliberate killing of women is explicitly forbidden under Islamic law (Sahih Muslim 19: 4320; cf. Sahih Muslim 19: 4321).

[3] The capture of fertile females from among enemy groups is by no means restricted to the Yąnomamö. On the contrary, it may even form the ultimate evolutionary basis for intergroup conflict and raiding among troops of chimpanzees, our species’ closest extant relative. It is also alluded to, and indeed explicitly commanded, in the Hebrew Bible (e.g. Deuteronomy 20: 13-14; Numbers 31: 17-18), and was formerly prevalent in western culture as well.
It is also very much apparent, for example, in the warfare and raiding formerly endemic in the Gobi Desert of what is today Mongolia. Thus, the mother of Genghis Khan was, at least according to legend, herself kidnapped by the Great Khan’s father. Indeed, this was apparently an accepted form of courtship on the Mongolian Steppe, as Genghis Khan’s own wife was herself stolen from him on at least one occasion by rival Steppe nomads, resulting in a son of disputed paternity (whom the great Khan perhaps tellingly named Jochi, which is said to translate as ‘guest’) and a later succession crisis.
Many anthropologists, it ought to be noted, dismiss Chagnon’s claim that Yanomami warfare is predicated on the capture of women. Perhaps the most famous is Chagnon’s own former student, Kenneth Good, whose main claim to fame is to have himself married a (by American standards, underage) Yąnomamö girl – who, in a dramatic falsification of her husband’s theory that would almost be amusing were it not so tragic, was then herself twice abducted and raped by raiding Yanomami war parties.

[4] It is ironic that John Cartwright, author of Evolution and Human Behaviour, an undergraduate-level textbook on evolutionary psychology, is skeptical regarding the claim that Ismail the Bloodthirsty fathered 888 offspring, but nevertheless apparently takes at face value the claim that a Russian peasant woman had 69 offspring, a biologically far more implausible claim (Evolution and Human Behaviour: p133-4).

[5] However, here, Betzig is perhaps altogether overcautious. Thus, whether or not “political power in itself” is explained in this way (i.e. “as providing a position from which to gain reproductively”), certainly the human desire for political power must surely be explained in this way.

[6] The prospect of eugenically reengineering human nature itself so as to make utopian communism achievable, and human society less conflictual, is also unrealistic. As John Gray has noted in Straw Dogs: Thoughts on Humans and Other Animals (reviewed here), if human nature is eugenically reengineered, then it will be done, not in the interests of society, let alone humankind, as a whole, but rather in the interests of those responsible for ordering or undertaking the project – namely, scientists and, more importantly, those from whom they take their orders (e.g. government, politicians, civil servants, big business, managerial elites). Thus, Gray concludes:

“[Although] it seems feasible that over the coming century human nature will be scientifically remodelled… it will be done haphazardly, as an upshot of struggles in the murky realm where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

[7] Here, it is important to emphasize that what is exceptional about western societies is not monogamy per se. On the contrary, monogamy is common in relatively egalitarian societies (e.g. hunter-gatherer societies), especially those living at or near subsistence levels, where no male is able to secure access to sufficient resources so as to provision multiple wives and offspring (Kanazawa and Still 1999). What is exceptional about contemporary western societies is the combination of:

1) Large differentials of resource-holdings between males (i.e. social stratification); and

2) Prescriptive monogamy (i.e. polygyny is not merely not widely practised, but also actually unlawful).

[8] Quite when a degree of de facto monogamy originated in the west seems to be a matter of some dispute. Betzig views it as very much a recent phenomenon, arising with the development of complex, interdependent industrial economies, which required the cooperation of lower-status males in order to function. Here, Betzig perhaps underestimates the extent to which even pre-industrial economies required the work and cooperation of low-status males in order to function.
Thus, Betzig argues that, in ancient Rome, nominally monogamous marriages concealed rampant de facto polygyny, with emperors and other powerful males fathering multiple offspring with both slaves and other men’s wives (Betzig 1992). As evidence, she largely relies on salacious gossip about a few eminent Roman political leaders.
Similarly, in medieval Europe, she argues that, despite nominal monogamy, wealthy men fathered multiple offspring through servant girls (Betzig 1995a; Betzig 1995b). In contrast, Kevin Macdonald persuasively contends that medieval monogamy was no mere myth and most illegitimate offspring born to servant girls were fathered by men of roughly their own station (Macdonald 1995a; Macdonald 1995b).

[9] Certainly, the so-called NEET and incel phenomena seem to be correlated with one another. NEETs are disproportionately likely to be incels, and incels are disproportionately likely to be NEETs. However, the direction of causation is unclear and probably works in both directions.
On the one hand, since women are rarely attracted to men without money or the prospects of money, men without jobs are rarely able to attract wives or girlfriends. However, on the other hand, men who, for whatever reason, perceive themselves as unable to attract a wife or girlfriend even if they did have a job, may see little incentive to getting a job in the first place or keeping the one they do have.
In addition, certain aspects of personality, and indeed psychopathology, likely predispose a man both to joblessness and inability to obtain a wife or girlfriend. These include mental illness, mental and physical disabilities, and conditions such as autism.
Finally, the NEET phenomenon cannot be explained solely by the supposed decline in marriage opportunities for young men, as might be suggested by a simplistic reading of Binder (2021). Another factor is surely the increased affluence of society at large. In previous times, and in much of the developing world today, remaining voluntarily jobless would likely result in penury and destitution for all but a tiny minority of the economic elite.

[10] Indeed, during the debates surrounding the legalization of gay marriage, the prospect of the legalization of polygynous marriage was rarely discussed, and, when it was raised, it was usually invoked by the opponents of gay marriage, as a sort of reductio ad absurdum of changes in marriage laws to permit gay marriage, something champions of gay marriage were quick to dismiss as preposterous scaremongering. In short, both sides in the acrimonious debates regarding gay marriage seem to have been agreed that legalizing polygynous unions was utterly beyond the pale.

[11] Thus, father absence is a known correlate of criminality and other negative life outcomes. In fact, however, the importance of paternal investment in offspring outcomes, and indeed of parental influence more generally, has yet to be demonstrated, since the correlation between father-absence and negative life-outcomes could instead reflect the heritability of personality, including those aspects of personality that cause people to have offspring out of wedlock, die early, divorce, abandon their children or have offspring by a person who abandons their offspring or dies early (see Judith Harris’s The Nurture Assumption, which I have reviewed here). 

[12] This paradox is related to another one – namely, why it is that people in richer societies tend to have lower fertility rates than people in poorer societies? This recent development, often referred to as the demographic transition, is paradoxical for the exact same reason that it is paradoxical for relatively wealthier people within western societies to have fewer offspring than relatively poorer people within these same societies, namely that it is elementary Darwinism 101 that an organism with access to greater resources should channel those additional resources into increased reproduction. Interestingly, this phenomenon is not restricted to western societies. On the contrary, other wealthy industrial and post-industrial societies, such as Japan, Singapore and South Korea, have, if anything, even lower fertility rates than Europe, Australasia and North America.

[13] Actually, it is not altogether clear that women do have an innate desire to bear children. After all, in the EEA, there was no need for women to evolve a desire to bear children. All they required was a desire to have sexual intercourse (or indeed a mere willingness to acquiesce in the male desire for intercourse). In the absence of contraception, offspring would then naturally result. Indeed, other species, including presumably most of our pre-human ancestors, are surely wholly unaware of the connection between sexual intercourse and reproduction. A desire for offspring would then serve no adaptive function for these species at all. Yet this did not stop these species from seeking out sexual opportunities and hence reproducing their kind. However, given anecdotal evidence of so-called ‘broodiness’ among women, I suspect women do indeed have some degree of innate desire for offspring.

References

Bateman (1948) Intra-sexual selection in Drosophila. Heredity 2(Pt. 3): 349-368.
Bellis et al (2005) Measuring Paternal Discrepancy and its Public Health Consequences. Journal of Epidemiology and Community Health 59(9): 749.
Betzig (1992) Roman Polygyny. Ethology and Sociobiology 13(5-6): 309-349.
Betzig (1993a) Sex, succession, and stratification in the first six civilizations: How powerful men reproduced, passed power on to their sons, and used power to defend their wealth, women and children. In Lee Ellis (ed.) Social Stratification and Socioeconomic Inequality, pp. 37-74. New York: Praeger.
Betzig (1993b) Where are the bastards’ daddies? Comment on Daniel Pérusse’s ‘Cultural and reproductive success in industrial societies’. Behavioral and Brain Sciences 16: 284-285.
Betzig (1995a) Medieval Monogamy. Journal of Family History 20(2): 181-216.
Betzig (1995b) Wanting Women Isn’t New; Getting Them Is: Very. Politics and the Life Sciences 14(1): 24-25.
Binder (2021) Why Bother? The Effect of Declining Marriage Market Prospects on Labor-Force Participation by Young Men (March 1, 2021). Available at SSRN: https://ssrn.com/abstract=3795585 or http://dx.doi.org/10.2139/ssrn.3795585
Chagnon N (1979) Is reproductive success equal in egalitarian societies? In: Chagnon & Irons (eds) Evolutionary Biology and Human Social Behavior: An Anthropological Perspective, pp. 374-402. MA: Duxbury Press.
Einon G (1998) How Many Children Can One Man Have? Evolution and Human Behavior 19(6): 413-426.
Gilding (2005) Rampant Misattributed Paternity: The Creation of an Urban Myth. People and Place 13(2): 1.
Gould (2000) How many children could Moulay Ismail have had? Evolution and Human Behavior 21(4): 295-296.
Kanazawa & Still (1999) Why Monogamy? Social Forces 78(1): 25-50.
Kanazawa (2003) Can Evolutionary Psychology Explain Reproductive Behavior in the Contemporary United States? Sociological Quarterly 44: 291-302.
Kanazawa (2005) An Empirical Test of a Possible Solution to ‘the Central Theoretical Problem of Human Sociobiology’. Journal of Cultural and Evolutionary Psychology 3: 255-266.
Khan (2010) The paternity myth: The rarity of cuckoldry. Discover, 20 June, 2010.
Macdonald (1995a) The establishment and maintenance of socially imposed monogamy in Western Europe. Politics and the Life Sciences 14(1): 3-23.
Macdonald (1995b) Focusing on the group: further issues related to western monogamy. Politics and the Life Sciences 14(1): 38-46.
Oberzaucher & Grammer (2014) The Case of Moulay Ismael – Fact or Fancy? PLoS ONE 9(2): e85292.
Orians (1969) On the Evolution of Mating Systems in Birds and Mammals. American Naturalist 103(934): 589-603.
Packer et al (1995) Reproductive constraints on aggressive competition in female baboons. Nature 373: 60-63.
Pérusse (1993) Cultural and Reproductive Success in Industrial Societies: Testing the Relationship at the Proximate and Ultimate Levels. Behavioral and Brain Sciences 16: 267-322.
Sanderson (2001) Explaining Monogamy and Polygyny in Human Societies: Comment on Kanazawa and Still. Social Forces 80(1): 329-335.
Scheidel (2008) Monogamy and Polygyny in Greece, Rome, and World History (June 2008). Available at SSRN: https://ssrn.com/abstract=1214729 or http://dx.doi.org/10.2139/ssrn.1214729
Shaw GB (1903) Man and Superman, Maxims for Revolutionists.
Strassman B (2000) Polygyny, Family Structure and Infant Mortality: A Prospective Study Among the Dogon of Mali. In Cronk, Chagnon & Irons (eds) Adaptation and Human Behavior: An Anthropological Perspective, pp. 49-68. New York: Aldine de Gruyter.
Trivers R (1972) Parental investment and sexual selection. In Campbell (ed.) Sexual Selection and the Descent of Man, pp. 136-179. Chicago: Aldine.
Vining D (1986) Social versus reproductive success: The central theoretical problem of human sociobiology. Behavioral and Brain Sciences 9(1): 167-187.
Zerjal et al (2003) The Genetic Legacy of the Mongols. American Journal of Human Genetics 72(3): 717-721.

‘The Bell Curve’: A Book Much Read About, But Rarely Actually Read

The Bell Curve: Intelligence and Class Structure in American Life by Richard Herrnstein and Charles Murray (New York: Free Press, 1994). 

‘There’s no such thing as bad publicity’ – or so contends a famous adage of the marketing industry. 

‘The Bell Curve: Intelligence and Class Structure in American Life’ by Richard Herrnstein and Charles Murray is perhaps a case in point. 

This dry, technical, academic social science treatise, full of statistical analyses, graphs, tables, endnotes and appendices, and totalling almost 900 pages, became an unlikely nonfiction bestseller in the mid-1990s on a wave of almost universally bad publicity in which the work was variously denounced as racist, pseudoscientific, fascist, social Darwinist, eugenicist and sometimes even just plain wrong. 

Readers who hurried to the local bookstore eagerly anticipating an incendiary racialist polemic were, however, in for a disappointment. 

Indeed, one suspects that, along with ‘The Bible’ and Stephen Hawking’s A Brief History of Time, ‘The Bell Curve’ became one of those bestsellers that many people bought, but few managed to finish. 

‘The Bell Curve’ thus became, like another book that I have recently reviewed, a book much read about, but rarely actually read – at least in full. 

As a result, as with that other book, many myths have emerged regarding the content of ‘The Bell Curve’ that are quite contradicted when one actually takes the time and trouble to read it for oneself. 

Subject Matter 

The first myth of ‘The Bell Curve’ is that it was a book about race differences, or, more specifically, about race differences in intelligence. In fact, however, this is not true. 

Thus, ‘The Bell Curve’ is a book so controversial that the controversy begins with the very identification of its subject-matter. 

On the one hand, the book’s critics focused almost exclusively on the subject of race. This led to the common perception that ‘The Bell Curve’ was a book about race and race differences in intelligence.[1]

Ironically, many racialists seem to have taken these leftist critics at their word, enthusiastically citing the work as support for their own views regarding race differences in intelligence.  

On the other hand, however, surviving co-author Charles Murray insisted from the outset that the issue of race, and of race differences in intelligence, was always peripheral to his and co-author Richard Herrnstein’s primary interest and focus, which was, he claimed, on the supposed emergence of a ‘Cognitive Elite’ in modern America. 

Actually, however, both these views seem to be incorrect. While the first section of the book does indeed focus on the supposed emergence of a ‘Cognitive Elite’ in modern America, the overall theme of the book seems to be rather broader. 

Thus, the second section of the book focuses on the association between intelligence and various perceived social pathologies, such as unemployment, welfare dependency, illegitimacy, crime and single-parenthood. 

To the extent the book has a single overarching theme, one might say that it is a book about the social and economic correlates of intelligence, as measured by IQ tests, in modern America.  

Its overall conclusion is that intelligence is indeed a strong predictor of social and economic outcomes for modern Americans – with high intelligence associated with socially desirable outcomes and low intelligence with socially undesirable ones. 

On the other hand, however, the topic of race is not quite as peripheral to the book’s themes as sometimes implied by Murray and some of his defenders. 

Thus, it is sometimes claimed only a single chapter dealt with race. Actually, however, two chapters focus on race differences, namely chapters 13 and 14, respectively titled ‘Ethnic Differences in Cognitive Ability’ and ‘Ethnic Inequalities in Relation to IQ’. 

In addition, a further two chapters, namely chapters 19 and 20, entitled respectively ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, deal with the topic of affirmative action, as does the final appendix, entitled ‘The Evolution of Affirmative Action in the Workplace’ – and, although affirmative action has been employed to favour women as well as racial minorities, it is with racial preferences that Herrnstein and Murray are primarily concerned. 

However, these chapters represent only 142 of the book’s nearly 900 pages. 

Moreover, in much of the remainder of the book, the authors actually explicitly restrict their analysis to white Americans exclusively. They do so precisely because the well documented differences between the races in IQ as well as in many of the social outcomes whose correlation with IQ the book discusses would mean that race would have represented a potential confounding factor that they would otherwise have to take steps to control for. 

Herrnstein and Murray therefore took the decision to extend their analysis to race differences near the end of their book, in order to address the question of the extent to which differences in intelligence, which they have already demonstrated to be an important correlate of social and economic outcomes among whites, are also capable of explaining differences in achievement as between races. 

Without these chapters, the book would have been incomplete, and the authors would have laid themselves open to the charge of political correctness and of ignoring the elephant in the room.

Race and Intelligence 

If the first controversy of ‘The Bell Curve’ concerns whether it is primarily a book about race and race differences in intelligence, the second controversy is over what exactly the authors concluded with respect to this vexed and contentious issue. 

Thus, the same leftist critics who claimed that ‘The Bell Curve’ was primarily a book about race and race differences in intelligence also accused the authors of concluding that black people are innately less intelligent than whites.

Some racists, as I have already noted, evidently took the leftists at their word, and enthusiastically cite the book as support and authority for this view. 

However, in subsequent interviews, Murray always insisted he and Herrnstein had actually remained “resolutely agnostic” on the extent to which genetic factors underlay the IQ gap. 

In the text itself, Herrnstein and Murray do indeed declare themselves “resolutely agnostic” with regard to the extent of the genetic contribution to the test score gap (p311).

However, just a couple of sentences before they use this very phrase, they also appear to conclude that genes are indeed at least part of the explanation, writing: 

“It seems highly likely to us that both genes and the environment have something to do with racial differences [in IQ]” (p311). 

This paragraph, buried near the end of chapter 13, during an extended discussion of evidence relating to the causes of race differences in intelligence, is the closest the authors come to actually declaring any definitive conclusion regarding the causes of the black-white test score gap.[2]

This conclusion, though phrased in sober and restrained terms, is, of course, itself sufficient to place its authors outside the bounds of acceptable opinion in the early-twenty-first century, or indeed in the late-twentieth century when the book was first published, and is sufficient to explain, and, for some, justify, the opprobrium heaped upon the book’s surviving co-author from that day forth. 

Intelligence and Social Class

It seems likely that races which evolved on separate continents, in sufficient reproductive isolation from one another to have evolved the obvious (and not so obvious) physiological differences between races that we all observe when we look at the faces, or bodily statures, of people of different races (and that we observe indirectly in the results of different athletic events at the Olympic Games), would also have evolved to differ in psychological traits, including intelligence.

Indeed, it is surely unlikely, on a priori grounds alone, that all different human races have evolved, purely by chance, the exact same level of intelligence. 

However, if differences in intelligence between races are therefore probable, the case for differences in intelligence as between social classes is positively compelling.

Indeed, on a priori grounds alone, it is inevitable that social classes will come to differ in IQ, if one accepts two premises, namely: 

1) Increased intelligence is associated with upward social mobility; and 
2) Intelligence is passed down in families.

In other words, if more intelligent people tend, on average, to get higher-paying jobs than those of lower intelligence, and the intelligence of parents is passed on to their offspring, then it is inevitable that the offspring of people with higher-paying jobs will, on average, themselves be of higher intelligence than are the offspring of people with lower paying jobs.  
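The logic of these two premises can be illustrated with a toy simulation – a minimal sketch of my own, not drawn from the book, in which every parameter value (the transmission coefficient, the noise term, the population size) is invented purely for illustration:

```python
import random
from statistics import mean

random.seed(42)

# Toy model: parental IQ is normally distributed with mean 100, SD 15.
parents = [random.gauss(100, 15) for _ in range(100_000)]

def offspring_iq(parent_iq, transmission=0.5):
    # Premise 2: intelligence is passed down in families. Offspring
    # inherit part of the parent's deviation from the mean, plus
    # independent noise (so they regress partway toward the mean).
    return 100 + transmission * (parent_iq - 100) + random.gauss(0, 10)

# Premise 1: intelligence is associated with upward mobility, so the
# higher-IQ half of parents end up in the higher-paying jobs.
parents.sort()
low_earners, high_earners = parents[:50_000], parents[50_000:]

kids_of_low = mean(offspring_iq(p) for p in low_earners)
kids_of_high = mean(offspring_iq(p) for p in high_earners)

# The offspring of high earners have a higher average IQ than the
# offspring of low earners, even though no child was ever tested.
print(round(kids_of_high - kids_of_low, 1))
```

Granting the two premises, the class-IQ gap in the next generation follows mechanically, without anyone ever being sorted by a test directly.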

This, of course, follows naturally from the infamous syllogism formulated by ‘Bell Curve’ co-author Richard Herrnstein way back in the 1970s (p10; p105). 

Incidentally, this second premise, namely that intelligence is passed down in families, does not depend on the heritability of IQ in the strict biological sense. After all, even if heritability of intelligence were zero, intelligence could still be passed down in families by environmental factors (e.g. the ‘better’ parenting techniques of high IQ parents, or the superior material conditions in wealthy homes). 

The existence of an association between social class and IQ ought, then, to be entirely uncontroversial to anyone who takes any time whatsoever to think about the issue. 

If there remains any room for reasoned disagreement, it is only over the direction of causation – namely the question of whether:  

1) High intelligence causes upward social mobility; or 
2) A privileged upbringing causes higher intelligence.

These two processes are, of course, not mutually exclusive. Indeed, it would seem intuitively probable that both factors would be at work. 

Interestingly, however, evidence demonstrates the occurrence only of the former. 

Thus, even among siblings from the same family, the sibling with the higher childhood IQ will, on average, achieve higher socioeconomic status as an adult. Likewise, the socioeconomic status a person achieves as an adult correlates more strongly with their own IQ score than it does with the socioeconomic status of their parents or of the household they grew up in (see Straight Talk About Mental Tests: p195). 

In contrast, family, twin and adoption studies of the sort conducted by behavioural geneticists have concurred in suggesting that the so-called shared family environment (i.e. those aspects of the family environment shared by siblings from the same household, including social class) has but little effect on adult IQ. 

In other words, children raised in the same home, whether full- or half-siblings or adoptees, are, by the time they reach adulthood, no more similar to one another in IQ than are children of the same degree of biological relatedness brought up in entirely different family homes (see The Nurture Assumption: reviewed here). 

However, while the direction of causation may still be disputed by intelligent (if uninformed) laypeople, the existence of an association between intelligence and social class ought not, one might think, be in dispute. 

However, in Britain today, in discussions of social mobility, if children from deprived backgrounds are underrepresented, say, at elite universities, then this is almost invariably taken as incontrovertible proof that the system is rigged against them. The fact that children from different socio-economic backgrounds differ in intelligence is almost invariably ignored. 

When mention is made of this incontrovertible fact, leftist hysteria typically ensues. Thus, in 2008, psychiatrist Bruce Charlton rightly observed that, in discussion of social mobility: 

“A simple fact has been missed: higher social classes have a significantly higher average IQ than lower social classes” (Clark 2008). 

For his trouble, Charlton found himself condemned by the National Union of Students and assorted rent-a-quote academics and professional damned fools, while even the ostensibly ‘right-wing’ Daily Mail newspaper saw fit to publish the headline “Higher social classes have significantly HIGHER IQs than working class, claims academic”, as if this were in some way a controversial or contentious claim (Clark 2008). 

Meanwhile, when, in the same year, a professor at University College made a similar point with regard to the admission of working-class students to medical schools, even the then government Health Minister, Ben Bradshaw, saw fit to offer his two cents worth (which were not worth even that), declaring: 

“It is extraordinary to equate intellectual ability with social class” (Beckford 2008). 

Actually, however, what is truly extraordinary is that any intelligent person, least of all a government minister, would dispute the existence of such a link. 

Cognitive Stratification 

Herrnstein’s syllogism leads to a related paradox – namely that, as environmental conditions are equalized, heritability increases. 

Thus, as large differences in the sorts of environmental factors known to affect IQ (e.g. malnutrition) are eliminated, so differences in income have come to increasingly reflect differences in innate ability. 
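The arithmetic behind this paradox is straightforward: heritability is simply the share of total variance attributable to genes, so holding genetic variance fixed while shrinking environmental variance necessarily raises it. A minimal sketch, with variance figures invented purely for illustration:

```python
def heritability(genetic_variance, environmental_variance):
    # Heritability = genetic variance as a share of total variance.
    return genetic_variance / (genetic_variance + environmental_variance)

# Hold genetic variance fixed and progressively equalize environments
# (e.g. as gross malnutrition is eliminated): heritability rises.
v_genetic = 100
for v_env in (100, 50, 10):
    print(v_env, round(heritability(v_genetic, v_env), 2))
```

Thus, the more equal the environment, the larger the proportion of the remaining differences that is down to genes.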

Moreover, the more gifted children from deprived backgrounds escape their humble origins, the fewer such children will, given the substantial heritability of IQ, remain among the working class in subsequent generations. 

The result is what Herrnstein and Murray call the ‘Cognitive Stratification’ of society and the emergence of what they call a ‘Cognitive Elite’. 

Thus, in feudal society, a man’s social status was determined largely by ‘accident of birth’ (i.e. he inherited the social station of his father). 

Women’s status, meanwhile, was determined, in addition, by what we might call ‘accident of marriage’ – and, to a large extent, it still is.

However, today, a person’s social status, at least according to Herrnstein and Murray, is determined primarily, and increasingly, by their level of intelligence. 

Of course, people are not allocated to a particular social class by IQ testing itself. Indeed, the use of IQ tests by employers and educators has been largely outlawed on account of its ‘disparate impact’ (or ‘indirect discrimination’, to use the equivalent British phrase) with regard to race (see below). 

However, the premium increasingly placed on cognitive skills and abilities in western society (and, increasingly, in many non-western societies as well) means that, through the operation of the education system and labour market, individuals are effectively sorted by IQ, even without anyone ever actually sitting an IQ test. 

In other words, society is becoming increasingly meritocratic – and the form of ostensible ‘merit’ upon which attainment is based is intelligence. 

For Herrnstein and Murray, this is a mixed blessing: 

“That the brightest are identified has its benefits. That they become so isolated and inbred has its costs” (p25). 

However, the correlation between socioeconomic status and intelligence remains imperfect. 

For one thing, there are still a few highly remunerated, and very high-status, occupations that rely on skills that are not especially, if at all, related to intelligence. I think here, in particular, of professional sports and the entertainment industry. Thus, leading actors, pop stars and sports stars are sometimes extremely well-remunerated, and very high-status, but may not be especially intelligent.  

More importantly, while highly intelligent people might be, by very definition, the only ones capable of performing cognitively-demanding, and hence highly remunerated, occupations, this is not to say all highly intelligent people are necessarily employed in such occupations. 

Thus, whereas all people employed in cognitively-demanding occupations are, almost by definition, of high intelligence, people of all intelligence levels are capable of doing cognitively-undemanding jobs.

Thus, a few people of high intellectual ability remain in low-paid work, whether on account of personality factors (e.g. laziness), mental illness, lack of opportunity or sometimes even by choice (which choice is, of course, itself a reflection of personality factors). 

Therefore, the correlation between IQ and occupation is far from perfect. 

Job Performance

The sorting of people with respect to their intelligence begins in the education system. However, it continues in the workplace. 

Thus, general intelligence, as measured by IQ testing, is, the authors claim, the strongest predictor of occupational performance in virtually every occupation. Moreover, in general, the higher paid and higher status the occupation in question, the stronger the correlation between performance and IQ. 

However, Herrnstein and Murray are at pains to emphasize, intelligence is a strong predictor of occupational performance even in apparently cognitively undemanding occupations, and indeed almost always a better predictor of performance than tests of the specific abilities the job involves on a daily basis. 

However, in the USA, employers are barred from using testing to select among candidates for a job or for promotion unless they can show the test has a ‘manifest relationship’ to the work, and the burden of proof is on the employer to show such a relationship. Otherwise, given their ‘disparate impact’ with regard to race (i.e. the fact that some groups perform worse), the tests in question are deemed indirectly discriminatory and hence unlawful. 

Therefore, employers are compelled to test, not general ability, but rather the specific skills required in the job in question, where a ‘manifest relationship’ is easier to demonstrate in court. 

However, since even tests of specific abilities almost invariably still tap into the general factor of intelligence, races inevitably score differently even on these tests. 

Indeed, because of the ubiquity and predictive power of the g factor, it is almost impossible to design any type of standardized test, whether of specific or general ability or knowledge, in which different racial groups do not perform differently. 

However, if some groups outperform others, the American legal system presumes a priori that this reflects test bias rather than differences in ability. 

Therefore, although the words ‘all men are created equal’ are not, contrary to popular opinion, part of the US constitution, the Supreme Court has effectively chosen, by legal fiat, to decide cases as if they were. 

However, just as a law passed by Congress cannot repeal the law of gravity, so a legal presumption that groups are equal in ability cannot make it so. 

Thus, the bar on the use of IQ testing by employers has not prevented society in general from being increasingly stratified by intelligence, the precise thing measured by the outlawed tests. 

Nevertheless, Herrnstein and Murray estimate that the effective bar on the use of IQ testing makes this process less efficient, costing the economy somewhere between 13 billion and 80 billion dollars in 1980 alone (p85). 

Conscientiousness and Career Success

I am skeptical of Herrnstein and Murray’s conclusion that IQ is the best predictor of academic and career success. I suspect hard work, not to mention a willingness to toady, toe the line, and obey orders, is at least as important in even the most cognitively-demanding careers, as well as in schoolwork and academic advancement. 

Perhaps the reason these factors have not (yet) been found to be as highly correlated with earnings as is IQ is that we have not yet developed a way of measuring these aspects of personality as accurately as we can measure a person’s intelligence through an IQ test. 

For example, the closest psychometricians have come to measuring capacity for hard work is the personality factor known as conscientiousness, one of the Big Five factors of personality revealed by psychometric testing. 

Conscientiousness does indeed correlate with success in education and work (e.g. Barrick & Mount 1991). However, the correlation is weaker than that between IQ and success in education and at work. 

However, this may be because personality is less easily measured by current psychometric methods than is intelligence – not least because personality tests generally rely on self-report, rather than measuring actual behaviour

Thus, to assess conscientiousness, questionnaires ask respondents whether they ‘see themselves as organized’, ‘as able to follow an objective through to completion’, ‘as a reliable worker’, etc. 

This would be the equivalent of an IQ test that, instead of directly testing a person’s ability to recognize patterns or manipulate shapes by having them do just this, simply asked respondents how good they perceived themselves as being at recognizing patterns, or manipulating shapes. 

Obviously, this would be a less accurate measure of intelligence than a normal IQ test. After all, some people lie, some are falsely modest and some are genuinely deluded. 

Indeed, according to the Dunning-Kruger effect, it is those most lacking in ability who most overestimate their abilities – precisely because they lack the ability to accurately assess their ability (Kruger & Dunning 1999). 

In an IQ test, on the other hand, one can sometimes pretend to be dumber than one is, by deliberately getting questions wrong that one knows the answer to.[3]

However, it is not usually possible to pretend to be smarter than one is by getting more questions right, simply because one would not know what the right answers are. 

‘Affirmative Action’ and Test Bias 

In chapters nineteen and twenty, respectively entitled ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, the authors discuss so-called affirmative action, an American euphemism for systematic and overt discrimination against white males. 

It is well-documented that, in the United States, blacks, on average, earn less than white Americans. On the other hand, it is less well-documented that whites, on average, earn less than Americans of Indian, Chinese and Jewish ancestry. 

With the possible exception of Indian-Americans, these differences, of course, broadly mirror those in average IQ scores. 

Indeed, according to Herrnstein and Murray, the difference in earnings between whites and blacks, not only disappears after controlling for differences in IQ, but is actually partially reversed. Thus, blacks are actually somewhat overrepresented in professional and white-collar occupations as compared to whites of equivalent IQ. 
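
To make concrete what ‘controlling for’ IQ means here, the following is a minimal, purely illustrative sketch in Python, using made-up numbers (emphatically not Herrnstein and Murray’s data), of how a raw gap between two groups can reverse sign once a covariate such as IQ is held constant in a regression:

```python
import numpy as np

# Hypothetical toy data, invented solely to illustrate covariate adjustment.
# Group A has higher IQs; group B has lower IQs but, by construction,
# a +5 outcome bonus at any given IQ.
iq      = np.array([100.0, 110.0, 120.0,  85.0,  95.0, 105.0])
group_b = np.array([  0.0,   0.0,   0.0,   1.0,   1.0,   1.0])  # B indicator
outcome = iq - 50.0 + 5.0 * group_b

# Raw comparison: group B's mean outcome is lower, driven by its lower mean IQ.
raw_gap = outcome[group_b == 1].mean() - outcome[group_b == 0].mean()

# Regression of outcome on an intercept, IQ, and the group indicator
# ("controlling for IQ"): the group coefficient is the gap at equal IQ.
X = np.column_stack([np.ones_like(iq), iq, group_b])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted_gap = coefs[2]

print(raw_gap, adjusted_gap)
```

With these contrived numbers, group B’s raw mean outcome is 10 points lower, yet at any given IQ the regression assigns group B a +5 advantage – the same qualitative pattern (a raw deficit becoming a surplus after controlling for IQ) that Herrnstein and Murray report.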

This remarkable finding Herrnstein and Murray attribute to the effects of affirmative action programmes, whereby black Americans are appointed and promoted beyond what their ability alone merits, through discrimination in their favour. 

Interestingly, however, this contradicts what the authors wrote in an earlier chapter, where they addressed the question of test bias (pp280-286). 

There, they concluded that testing was not biased against African-Americans, because, among other reasons, IQ tests were equally predictive of real-world outcomes (e.g. in education and employment) for both blacks and whites, and blacks do not perform any better in the workplace or in education than their IQ scores predict. 

This is, one might argue, not wholly convincing evidence that IQ tests are not biased against blacks. It might simply suggest that society at large, including the education system and the workplace, is just as biased against blacks as are the hated IQ tests. This is, of course, precisely what we are often told by media pundits and political commentators who insist that America is a racist society, in which such mysterious forces as ‘systemic racism’ and ‘white privilege’ are pervasive. 

In fact, the authors acknowledge this objection, conceding:  

“The tests may be biased against disadvantaged groups, but the traces of bias are invisible because the bias permeates all areas of the group’s performance. Accordingly, it would be as useless to look for evidence of test bias as it would be for Einstein’s imaginary person traveling near the speed of light to try to determine whether time has slowed. Einstein’s traveler has no clock that exists independent of his space-time context. In assessing test bias, we would have no test or criterion measure that exists independent of this culture and its history. This form of bias would pervade everything” (p285). 

Herrnstein and Murray ultimately reject this conclusion on the grounds that it is simply implausible to assume that: 

“[So] many of the performance yardsticks in the society at large are not only biased, they are all so similar in the degree to which they distort the truth-in every occupation, every type of educational institution, every achievement measure, every performance measure-that no differential distortion is picked up by the data” (p285). 

In fact, however, Nicholas Mackintosh identifies one area where IQ tests do indeed under-predict black performance, namely with regard to so-called adaptive behaviours – i.e. the ability to cope with day-to-day life (e.g. feeding, dressing, cleaning and interacting with others in a ‘normal’ manner). 

Blacks with low IQs are generally much more functional in these respects than whites or Asians with equivalent low IQs (see IQ and Human Intelligence: p356-7).[4]

Yet Herrnstein and Murray seem to have inadvertently identified yet another sphere where standardized testing does indeed under-predict real-world outcomes for blacks. 

Thus, if indeed, as Herrnstein and Murray claim, blacks are somewhat overrepresented in professional and white-collar occupations relative to their IQs, this suggests that blacks do indeed achieve better real-world outcomes than their test results would predict. While Herrnstein and Murray attribute this to the effect of discrimination against whites, it could instead surely be interpreted as evidence that the tests are biased against blacks. 

Policy Implications? 

What, then, are the policy implications that Herrnstein and Murray draw from the findings that they report? 

In The Blank Slate: The Modern Denial of Human Nature, cognitive scientist, linguist and popular science writer Steven Pinker popularizes the notion that recognizing the existence of innate differences between individuals and groups in traits such as intelligence does not necessarily lead to ‘right-wing’ political implications. 

Thus, a leftist might accept the existence of innate differences in ability, but conclude that, far from justifying inequality, this is all the more reason to compensate the, if you like, ‘cognitively disadvantaged’ for their innate deficiencies, differences which are, being innate, hardly something for which they can legitimately be blamed. 

Herrnstein and Murray reject this conclusion, but acknowledge it is compatible with their data. Thus, in an afterword to later editions, Murray writes: 

“If intelligence plays an important role in determining how well one does in life, and intelligence is conferred on a person through a combination of genetic and environmental factors over which that person has no control, the most obvious political implication is that we need a Rawlsian egalitarian state, compensating the less advantaged for the unfair allocation of intellectual gifts” (p554).[5]

Interestingly, Pinker’s notion of a ‘hereditarian left’, and the related concept of Bell Curve liberals, is not entirely imaginary. On the contrary, it used to be quite mainstream. 

Thus, it was the radical leftist post-war Labour government that imposed the tripartite system on schools in the UK in 1945, which involved allocating pupils to different schools on the basis of their performance in what was then called the 11-plus exam, sat by children at age eleven, which tested both ability and acquired knowledge. This was thought by leftists to be a fair system that would enable bright, able youngsters from deprived and disadvantaged working-class backgrounds to achieve their full potential.[6]

Indeed, while contemporary Cultural Marxists emphatically deny the existence of innate differences in ability as between individuals and groups, Marx himself laboured under no such delusion. 

On the contrary, in his famous (plagiarized) aphorism, ‘From each according to his ability, to each according to his need’, Marx implicitly recognized that individuals differ in “ability”; and, given that, in the unrealistic communist utopia he envisaged, environmental conditions were ostensibly to be equalized, he presumably conceived of these differences as innate in origin. 

However, a distinction must be made here. While it is possible to justify economic redistributive policies on Rawlsian grounds, it is not possible to justify affirmative action. 

Thus, one might well reasonably contend that the ‘cognitively disadvantaged’ should be compensated for their innate deficiencies through economic redistribution. Indeed, to some extent, most Western polities already do this, by providing welfare payments and state-funded, or state-subsidized, care to those whose cognitive impairment is such as to qualify as a disability and hence render them incapable of looking after or providing for themselves. 

However, we are unlikely to believe that such persons should be given entry to medical school such that they are one day liable to be responsible for performing heart surgery on us or diagnosing our medical conditions. 

In short, socialist redistribution is defensible – but affirmative action is definitely not! 

Reception and Readability 

The reception accorded ‘The Bell Curve’ in 1994 echoed that accorded another book that I have also recently reviewed, but that was published some two decades earlier, namely Edward O. Wilson’s Sociobiology: The New Synthesis. 

Both were greeted with similar indignant moralistic outrage by many social scientists, who even employed similar pejorative soundbites (‘genetic determinism’, ‘reductionism’, ‘biology as destiny’) in condemning the two books. Moreover, in both cases, the academic uproar even spilled over into a mainstream media moral panic, with pieces appearing in the popular press attacking the two books. 

Yet, in both cases, the controversy focused almost exclusively on just a small part of each book – the single chapter in Sociobiology: The New Synthesis focusing on humans and the few chapters in ‘The Bell Curve’ discussing race. 

In truth, however, both books were massive tomes of which these sections represented only a small part. 

Indeed, given their size, one suspects most critics never actually read the books in full, including, it seems, many of those who nevertheless took it upon themselves to write critiques. This is what led to the massive disconnect between what most people thought the books said and their actual content. 

However, there is a crucial difference. 

Sociobiology: The New Synthesis was a long book of necessity, given the scale of the project Wilson set himself. 

As I have written in my review of that latter work, the scale of Wilson’s ambition can hardly be exaggerated. He sought to provide a new foundation for the whole field of animal behaviour, then, almost as an afterthought, sought to extend this ‘New Synthesis’ to human behaviour as well, which meant providing a new foundation, not for a single subfield within biology, but for several whole disciplines (psychology, sociology, economics and cultural anthropology) that were formerly almost unconnected to biology. Then, in a few provocative sentences, he even sought to provide a new foundation for moral philosophy, and perhaps epistemology too. 

Sociobiology: The New Synthesis was, then, inevitably and of necessity, a long book. Indeed, given that his musings regarding the human species were largely (but not wholly) restricted to a single chapter, one could even make a case that it was too short – and it is no accident that Wilson subsequently extended his writings on the human species into a book-length work. 

Yet, while Sociobiology was of necessity a long book, ‘The Bell Curve: Intelligence and Class Structure in America’ is, for me, unnecessarily overlong. 

After all, Herrnstein and Murray’s thesis was actually quite simple – namely that cognitive ability, as captured by IQ testing, is a major correlate of many important social outcomes in modern America. 

Yet they reiterate this point, for different social outcomes, again and again, chapter after chapter. 

In my view, Herrnstein and Murray’s conclusion would have been more effectively transmitted to the audience they presumably sought to reach had they been more succinct in their writing style and presentation of their data. 

Had that been the case then perhaps rather more of the many people who bought the book, and helped make it into an unlikely nonfiction bestseller in 1994, might actually have managed to read it – and perhaps even been persuaded by its thesis. 

For casual readers interested in this topic, I would recommend instead Intelligence, Race, And Genetics: Conversations With Arthur R. Jensen (which I have reviewed here, here and here). 

Endnotes

[1] For example, Francis Wheen, a professional damned fool and columnist for the Guardian newspaper (which two occupations seem to be largely interchangeable) claimed that: 

“The Bell Curve (1994), runs to more than 800 pages but can be summarised in a few sentences. Black people are more stupid than white people: always have been, always will be. This is why they have less economic and social success. Since the fault lies in their genes, they are doomed to be at the bottom of the heap now and forever” (Wheen 2000). 

In making this claim, Wheen clearly demonstrates that he has read few if any of those 800 pages to which he refers.

[2] Although their discussion of the evidence relating to the causes, genetic or environmental, of the black-white test score gap is extensive, it is not exhaustive. For example, Philippe Rushton, the author of Race, Evolution, and Behavior (reviewed here and here), argues that, despite the controversy their book provoked, Herrnstein and Murray actually didn’t go far enough on race, omitting, for example, any real discussion, save a passing mention in Appendix 5, of race differences in brain size (Rushton 1997). On the other hand, Herrnstein and Murray also did not mention studies that failed to establish any correlation between IQ and blood groups among African-Americans, studies interpreted as supporting an environmentalist interpretation of race differences in intelligence (Loehlin et al 1973; Scarr et al 1977). For readers interested in a more complete discussion of the evidence regarding the relative contributions of environment and heredity to the differences in IQ scores of different races, see my review of Richard Lynn’s Race Differences in Intelligence: An Evolutionary Analysis, available here.

[3] For example, some of those facing trial for serious crimes have been accused of deliberately getting questions wrong on IQ tests in order to qualify as mentally subnormal when before the courts for sentencing, so as to be granted mitigation of sentence on this ground or, more specifically, to evade the death penalty. 

[4] This may be because whites or Asians with such low IQs are more likely to have such impaired cognitive abilities because of underlying conditions (e.g. chromosomal abnormalities, brain damage) that handicap them over and above the deficit reflected in IQ score alone. On the other hand, blacks with similarly low IQs are still within the normal range for their own race. Therefore, rather than suffering from, say, a chromosomal abnormality or brain damage, they are relatively more likely to simply be at the tail-end of the normal range of IQs within their group, and hence normal in other respects.

[5] The term Rawlsian is a reference to political theorist John Rawls’ version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the original position, where they would be unaware, not only of their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, of their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’.

[6] The tripartite system did indeed enable many working-class children to achieve a much higher economic status than their parents, although this was partly due to the expansion of the middle-class sector of the economy over the same time-period. It was also later Labour administrations who largely abolished the 11-plus system, not least because, unsurprisingly given the heritability of intelligence and personality, children from middle-class backgrounds tended to do better on it than did children from working-class backgrounds.

References 

Barrick & Mount (1991) The big five personality dimensions and job performance: a meta-analysis. Personnel Psychology 44(1): 1–26. 
Beckford (2008) Working classes ‘lack intelligence to be doctors’, claims academic. Daily Telegraph, 4 June 2008. 
Clark (2008) Higher social classes have significantly HIGHER IQs than working class, claims academic. Daily Mail, 22 May 2008. 
Kruger & Dunning (1999) Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology 77(6): 1121–34. 
Loehlin et al (1973) Blood group genes and negro-white ability differences. Behavior Genetics 3(3): 263–270. 
Rushton, J. P. (1997) Why The Bell Curve didn’t go far enough on race. In E. White (Ed.), Intelligence, Political Inequality, and Public Policy (pp. 119–140). Westport, CT: Praeger. 
Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population. Human Genetics 39(1): 69–86. 
Wheen (2000) The ‘science’ behind racism. Guardian, 10 May 2000. 

Peter Singer’s ‘A Darwinian Left’

Peter Singer, A Darwinian Left: Politics, Evolution and Cooperation, London: Weidenfeld & Nicolson 1999.

Social Darwinism is dead. 

The idea that charity, welfare and medical treatment ought to be withheld from the poor, the destitute and the seriously ill, so that they perish in accordance with the process of natural selection and hence facilitate further evolutionary progress, survives only as a straw man, sometimes attributed to conservatives by leftists in order to discredit them, and as a form of guilt by association, sometimes invoked by creationists in order to discredit the theory of evolution.[1]

However, despite the attachment of many American conservatives to creationism, there remains a perception that evolutionary psychology is somehow right-wing. 

Thus, if humans are fundamentally selfish, as Richard Dawkins is taken, not entirely accurately, to have argued, then this surely confirms the underlying assumptions of classical economics. 

Of course, as Dawkins also emphasizes, we have evolved through kin selection to be altruistic towards our close biological relatives. However, this arguably only reinforces conservatives’ faith in the family, and their concerns regarding the effects of family breakdown and substitute parents. 

Finally, research on sex differences surely suggests that at least some traditional gender roles – e.g. women’s role in caring for young children, and men’s role in fighting wars – do indeed have a biological basis, and also that patriarchy and the gender pay gap may be an inevitable result of innate psychological differences between the sexes. 

Political scientist Larry Arnhart thus champions what he calls a new ‘Darwinian Conservatism’, which harnesses the findings of evolutionary psychology in support of family values and the free market. 

Against this, however, moral philosopher and famed animal liberation activist Peter Singer, in ‘A Darwinian Left’, seeks to reclaim Darwin, and evolutionary psychology, for the Left. His attempt is not entirely successful. 

The Naturalistic Fallacy 

At least since David Hume, it has been an article of faith among most philosophers that one cannot derive values from facts. To do otherwise is to commit what some philosophers refer to as the naturalistic fallacy. 

Edward O Wilson, in Sociobiology: The New Synthesis, was widely accused of committing the naturalistic fallacy, by attempting to derive moral values from facts. However, those evolutionary psychologists who followed in his stead have generally taken a very different line. 

Indeed, recognition that the naturalistic fallacy is indeed a fallacy has proven very useful to evolutionary psychologists, since it has enabled them to investigate the possible evolutionary functions of such morally questionable (or indeed downright morally reprehensible) behaviours as infidelity, rape, warfare and child abuse, while at the same time denying that they are somehow thereby providing a justification for the behaviours in question.[2] 

Singer, like most evolutionary psychologists, also reiterates the sacrosanct inviolability of the fact-value dichotomy. 

Thus, in attempting to construct his ‘Darwinian Left’, Singer does not attempt to use Darwinism in order to provide a justification or ultimate rationale for leftist egalitarianism. Rather, he simply takes it for granted that equality is a good thing and worth striving for, and indeed implicitly assumes that his readers will agree. 

His aim, then, is not to argue that socialism is demanded by a Darwinian worldview, but rather simply that it is compatible with such a worldview and not contradicted by it. 

Thus, he takes leftist ideals as his starting-point, and attempts to argue only that accepting the Darwinian worldview should not cause one to abandon these ideals as either undesirable or unachievable. 

But if we accept that the naturalistic fallacy is indeed a fallacy then this only raises the question: If it is indeed true that moral values cannot be derived from scientific facts, whence can moral values be derived?  

Can they only be derived from other moral values? If so, how are our ultimate moral values, from which all other moral values are derived, themselves derived? 

Singer does not address this. However, precisely by failing to address it, he seems to implicitly assume that our ultimate moral values must simply be taken on faith. 

However, Singer also emphasizes that rejecting the naturalistic fallacy does not mean that the facts of human nature are irrelevant to politics. 

On the contrary, while Darwinism may not prescribe any particular political goals as desirable, it may nevertheless help us determine how to achieve those political goals that we have already decided upon. Thus, Singer writes: 

“An understanding of human nature in the light of evolutionary theory can help us to identify the means by which we may achieve some of our social and political goals… as well as assessing the possible costs and benefits of doing so” (p15). 

Thus, in a memorable metaphor, Singer observes: 

“Wood carvers presented with a piece of timber and a request to make wooden bowls from it do not simply begin carving according to a design drawn up before they have seen the wood. Instead they will examine the material with which they are to work and modify their design in order to suit its grain…Those seeking to reshape human society must understand the tendencies inherent within human beings, and modify their abstract ideals in order to suit them” (p40). 

Abandoning Utopia? 

In addition to suggesting how our ultimate political objectives might best be achieved, an evolutionary perspective also suggests that some political goals might simply be unattainable, at least in the absence of a wholesale eugenic reengineering of human nature itself. 

In watering down the utopian aspirations of previous generations of leftists, Singer seems to implicitly concede as much. 

Contrary to the crudest misunderstanding of selfish gene theory, humans are not entirely selfish. However, we have evolved to put our own interests, and those of our kin, above those of other humans. 

For this reason, communism is unattainable because: 

  1. People strive to promote themselves and their kin above others; 
  2. Only a coercive state apparatus can prevent them from so doing; 
  3. The individuals in control of this coercive apparatus themselves seek to promote the interests of themselves and their kin and corruptly use this coercive apparatus to do so. 

Thus, Singer laments: 

“What egalitarian revolution has not been betrayed by its leaders?” (p39). 

Or, alternatively, as HL Mencken put it:

“[The] one undoubted effect [of political revolutions] is simply to throw out one gang of thieves and put in another.” 

In addition, human selfishness suggests that, if complete egalitarianism were ever successfully achieved and enforced, it would likely be economically inefficient – because it would remove the incentive of self-advancement that lies behind the production of goods and services, not to mention of works of art and scientific advances. 

Thus, as Adam Smith famously observed: 

“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” 

And, again, the only other means of ensuring goods and services are produced besides economic self-interest is state coercion, which, given human nature, will always be exercised both corruptly and inefficiently. 

What’s Left? 

Singer’s pamphlet has been the subject of much controversy, with most of the criticism coming, not from conservatives, whom one might imagine to be Singer’s natural adversaries, but rather from other self-described leftists. 

These leftist critics have included both writers opposed to evolutionary psychology (e.g. David Stack in The First Darwinian Left), but also some other writers claiming to be broadly receptive to the new paradigm but who are clearly uncomfortable with some of its implications (e.g.  Marek Kohn in As We Know It: Coming to Terms with an Evolved Mind). 

In apparently rejecting the utopian transformation of society envisioned by Marx and other radical socialists, Singer has been accused by other leftists of conceding rather too much to the critics of leftism. In so doing, Singer has, they claim, in effect abandoned leftism in all but name and become, in their view, an apologist for, and sell-out to, capitalism. 

Whether Singer can indeed be said to have abandoned the Left depends, of course, on precisely how we define ‘the Left’, a rather more problematic matter than it is usually regarded as being.[3]

For his part, Singer certainly defines the Left in unusually broad terms.

For Singer, leftism need not necessarily entail taking the means of production into common ownership, nor even the redistribution of wealth. Rather, at its core, being a leftist is simply about being: 

“On the side of the weak, not the powerful; of the oppressed, not the oppressor; of the ridden, not the rider” (p8). 

However, this definition is obviously problematic. After all, few conservatives would admit to being on the side of the oppressor. 

On the contrary, conservatives and libertarians usually reject the dichotomous subdivision of society into ‘oppressed’ and ‘oppressor’ groups. They argue that the real world is more complex than this simplistic division of the world into black and white, good and evil, suggests. 

Moreover, they argue that mutually beneficial exchange and cooperation, rather than exploitation, is the essence of capitalism. 

They also usually claim that their policies benefit society as a whole, including both the poor and rich, rather than favouring one class over another.[4]

Indeed, conservatives claim that socialist reforms often actually inadvertently hurt precisely those whom they attempt to help. Thus, for example, welfare benefits are said to encourage welfare dependency, while introducing, or raising the level of, a minimum wage is said to lead to increases in unemployment. 

Singer declares that a Darwinian left would “promote structures that foster cooperation rather than competition” (p61).

Yet many conservatives would share Singer’s aspiration to create a more altruistic culture. 

Indeed, this aspiration seems more compatible with the libertarian notion of voluntary charitable donations replacing taxation than with the coercively-extracted taxes invariably favoured by the Left. After all, being forced to pay taxes is an example of coercion rather than true altruism. 

Nepotism and Equality of Opportunity 

Yet selfish gene theory suggests humans are not entirely self-interested. Rather, kin selection makes us care also about our biological relatives.

But this is no boon for egalitarians. 

Rather, the fact that our selfishness is tempered by a healthy dose of nepotism likely makes equality of opportunity as unattainable as equality of outcome – because individuals will inevitably seek to aid the social, educational and economic advancement of their kin, and those individuals better placed to do so will enjoy greater success in so doing. 

For example, parents with greater resources will be able to send their offspring to exclusive fee-paying schools or obtain private tuition for them; parents with better connections may be able to help their offspring obtain better jobs; while parents with greater intellectual ability may be better able to help their offspring with their homework. 

However, since many conservatives and libertarians are as committed to equality of opportunity as socialists are to equality of outcome, this conclusion may be as unwelcome on the right as on the left. 

Indeed, the theory of kin selection has even been invoked to suggest that ethnocentrism is innate and ethnic conflict is inevitable in multi-ethnic societies, a conclusion unwelcome across the mainstream political spectrum in the West today, where political parties of all persuasions are seemingly equally committed to building multi-ethnic societies. 

Unfortunately, Singer does not address any of these issues. 

Animal Liberation After Darwin 

Singer is most famous for his advocacy on behalf of what he calls animal liberation. 

In ‘A Darwinian Left’, he argues that the Darwinian worldview reinforces the case for animal liberation by confirming the evolutionary continuity between humans and other animals. 

This suggests that there are unlikely to be fundamental differences in kind as between humans and other animals (e.g. in the capacity to feel pain) sufficient to justify the differences in treatment currently accorded humans and animals. 

This contrasts sharply with the account of creation in the Bible and the traditional Christian notion of humans as superior to other animals and as occupying an intermediate position between beasts and angels. 

Thus, Singer concludes: 

“By knocking out the idea that we are a separate creation from the animals, Darwinian thinking provided the basis for a revolution in our attitudes to non-human animals” (p17). 

This makes our consumption of animals as food, our killing of them for sport, our enslavement of them as draft animals, or even pets, and our imprisonment of them in zoos and laboratories all ethically suspect, since these are not things that are generally permitted in respect of humans. 

Yet Singer fails to recognise that human-animal continuity cuts two ways. 

Thus, anti-vivisectionists argue that animal testing is not only immoral, but also ineffective, because drugs and other treatments often have very different effects on humans than they do on the animals used in drug testing. 

Our evolutionary continuity with non-human species makes this argument less plausible. 

Moreover, if humans are subject to the same principles of natural selection as other species, this suggests, not the elevation of animals to the status of humans, but rather the relegation of humans to just another species of animal

In short, we do not occupy a position midway between beasts and angels; we are beasts through and through, and any attempt to believe otherwise is mere delusion

This is, of course, the theme of John Gray’s powerful polemic Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here). 

Finally, acceptance of the existence of human nature surely entails recognition of carnivory as a part of that nature. 

Of course, we must remember not to commit the naturalistic or appeal to nature fallacy.  

Thus, just because meat-eating may be natural for humans, in the sense that meat was a part of our ancestors’ diet in the EEA, this does not necessarily mean that it is morally right or even morally justifiable to eat meat. 

However, the fact that meat is indeed a natural part of the human diet does suggest that, in health terms, vegetarianism is likely to be nutritionally sub-optimal. 

Thus, the naturalistic fallacy or appeal to nature fallacy is not always entirely fallacious, at least when it comes to human health. What is natural for humans is indeed what we are biologically adapted to, and what our bodies are therefore best designed to deal with.[5]

Therefore, vegetarianism is almost certainly to some degree sub-optimal in nutritional terms. 

Moreover, given that Singer is an opponent of the view that there is a valid moral distinction between acts and omissions, describing one of his core tenets in the Introduction to his book Writings on an Ethical Life as the belief that “we are responsible not only for what we do but also for what we could have prevented” (Writings on an Ethical Life: pxv), then we must ask ourselves: If he believes it is wrong for us to eat animals, does he also believe we should take positive measures to prevent lions from eating gazelles? 

Economics 

Bemoaning the emphasis of neoliberals on purely economic outcomes, Singer protests:

“From an evolutionary perspective, we cannot identify wealth with self-interest… Properly understood self-interest is broader than economic self-interest” (p42). 

Singer is right. The ultimate currency of natural selection is not wealth, but rather reproductive success – and, in evolutionarily novel environments, wealth may not even correlate with reproductive success (Vining 1986). 

Thus, as discussed by Laura Betzig in Despotism and Differential Reproduction, a key difference between Marxism and sociobiology is the relative emphasis on production versus reproduction.

Whereas Marxists see societal conflict and exploitation as reflecting competition over control of the means of production, for Darwinians, all societal conflict ultimately concerns control over, not the means of production, but rather what we might term the ‘means of reproduction’ – in other words, women, their wombs and vaginas.

Thus, sociologist-turned-sociobiologist Pierre van den Berghe observed: 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Production is ultimately, in Darwinian terms, merely a means by which to gain the necessary resources to permit successful reproduction. The latter is the ultimate purpose of life.

Thus, for all his ostensible radicalism, Karl Marx, in his emphasis on economics (‘production’) at the expense of sex (‘reproduction’), was just another Victorian sexual prude.

Competition or Cooperation: A False Dichotomy? 

In Chapter Four, entitled “Competition or Cooperation?”, Singer argues that modern western societies, and many modern economists and evolutionary theorists, put too great an emphasis on competition at the expense of cooperation.

Singer accepts that both competition and cooperation are natural and innate facets of human nature, and that all societies involve a balance of both. However, he argues that different societies differ in their relative emphasis on competition or cooperation, and that it is therefore possible to create a society that places a greater emphasis on the latter at the expense of the former. 

Thus, Singer declares that a Darwinian left would: 

“Promote structures that foster cooperation rather than competition” (p61). 

However, Singer is short on practical suggestions as to how a culture of altruism is to be fostered.[6]

Changing the values of a culture is not easy. This is especially so for a liberal democratic (as opposed to a despotic, totalitarian) government, let alone for a solitary Australian moral philosopher – and Singer’s condemnation of “the nightmares of Stalinist Russia” suggests that he would not countenance the sort of totalitarian interference with human freedom to which the Left has so often resorted in the past, with little ultimate success, and continues to resort to in the present (even in the West). 

But, more fundamentally, Singer is wrong to see competition and conflict as necessarily in conflict with altruism and cooperation.

On the contrary, perhaps the most remarkable acts of cooperation, altruism and self-sacrifice are those often witnessed in wartime (e.g. kamikaze pilots, suicide bombers and soldiers who throw themselves on grenades). Yet war represents perhaps the most extreme form of competition and conflict known to man. 

In short, soldiers risk and sacrifice their lives, not only to save the lives of others, but also to take the lives of others. 

Likewise, trade is a form of cooperation, but is as fundamental to capitalism as is competition. Indeed, I suspect most economists would argue that exchange is even more fundamental to capitalism than is competition.

Thus, far from disparaging cooperation, neoliberal economists see voluntary exchange as central to prosperity. 

Ironically, then, popular science writer Matt Ridley, like Singer, focuses on humans’ innate capacity for cooperation to justify political conclusions in his book, The Origins of Virtue.

But, for Ridley, our capacity for cooperation provides a rationale, not for socialism, but rather for free markets – because humans, as natural traders, produce efficient systems of exchange which government intervention almost always only distorts. 

However, whereas economic trade is motivated by self-interested calculation, Singer seems to envisage a form of reciprocity mediated by emotions such as compassion, gratitude and guilt.
 
However, sociobiologist Robert Trivers argues in his paper that introduced the concept of reciprocal altruism to evolutionary biology that these emotions themselves evolved through the rational calculation of natural selection (Trivers 1971). 

Therefore, while open to manipulation, especially in evolutionarily novel environments, they are necessarily limited in scope. 

Group Differences 

Singer’s envisaged ‘Darwinian Left’ would, he declares, unlike the contemporary left, abandon: 

“[The assumption] that all inequalities are due to discrimination, prejudice, oppression or social conditioning. Some will be, but this cannot be assumed in every case” (p61). 

Instead, Singer admits that at least some disparities in achievement may reflect innate differences between individuals and groups in abilities, temperament and preferences. 

This is probably Singer’s most controversial suggestion, at least for modern leftists, since it contravenes the contemporary dogma of political correctness.

Singer is, however, undoubtedly right.  

Moreover, his recognition that some differences in achievement as between groups reflect, not discrimination, oppression or even the lingering effect of past discrimination or oppression, but rather innate differences between groups in psychological traits, including intelligence, is by no means incompatible with socialism, or leftism, as socialism and leftism were originally conceived. 

Thus, it is worth pointing out that, while contemporary so-called cultural Marxists may decry the notion of innate differences in ability and temperament as between different races, sexes, individuals and social classes as anathema, the same was not true of Marx himself.

On the contrary, in famously advocating from each according to his ability, to each according to his need, Marx implicitly recognized that people differed in ability – differences which, given the equalization of social conditions envisaged under communism, he presumably conceived of as innate in origin.[7]

As Hans Eysenck observes:

“Stalin banned mental testing in 1935 on the grounds that it was ‘bourgeois’—at the same time as Hitler banned it as ‘Jewish’. But Stalin’s anti-genetic stance, and his support for the environmentalist charlatan Lysenko, did not derive from any Marxist or Leninist doctrine… One need only recall The Communist Manifesto: ‘From each according to his ability, to each according to his need’. This clearly expresses the belief that different people will have different abilities, even in the communist heaven where all cultural, educational and other inequalities have been eradicated” (Intelligence: The Battle for the Mind: p85).

Here Eysenck echoes the earlier observations of the brilliant, pioneering early twentieth century biologist, and unrepentant Marxist, JBS Haldane, who reputedly wrote in the pages of The Daily Worker in the 1940s, that:

“The dogma of human equality is no part of Communism… The formula of Communism ‘from each according to his ability, to each according to his needs’ would be nonsense if abilities are equal.”

Thus, Steven Pinker, in The Blank Slate, points to the theoretical possibility of what he calls a “Hereditarian Left”, arguing for a Rawlsian redistribution of resources to the, if you like, innately ‘cognitively disadvantaged’.[8] 

With regard to group differences, Singer avoids the incendiary topic of race differences in intelligence, a question evidently too contentious for him to touch. 

Instead, he illustrates the possibility that not “all inequalities are due to discrimination, prejudice, oppression or social conditioning” with the marginally less incendiary case of sex differences.  

Here, it is sex differences, not in intelligence, but rather in temperament, preferences and personality that are probably more important, and likely explain occupational segregation and the so-called gender pay gap.

Thus, Singer writes: 

“If achieving high status increases access to women, then we can expect men to have a stronger drive for status than women” (p18). 

This alone, he implies, may explain both the universality of male rule and the so-called gender pay gap.

However, Singer neglects to mention another biological factor that is also probably important in explaining the gender pay gap – namely, women’s attachment to infant offspring. This factor, also innate and biological in origin, also likely impedes career advancement among women. 

Thus, it bears emphasizing that never-married women with no children actually earn more, on average, than do unmarried men without children of the same age in both Britain and America.[9]

For a more detailed treatment of the biological factors underlying the gender pay gap, see Biology at Work: Rethinking Sexual Equality by professor of law, Kingsley Browne, which I have reviewed here.[10] See also my review of Warren Farrell’s Why Men Earn More, which can be found here, here and here.

Dysgenic Fertility Patterns? 

It is sometimes claimed by opponents of welfare benefits that the welfare system only encourages the unemployed to have more children so as to receive more benefits and thereby promotes dysgenic fertility patterns. In response, Singer retorts:

“Even if there were a genetic component to something as nebulous as unemployment, to say that these genes are ‘deleterious’ would involve value judgements that go way beyond what the science alone can tell us” (p15).

Singer is, of course, right that an extra-scientific value judgement is required in order to label certain character traits, and the genes that contribute to them, as deleterious or undesirable. 

Indeed, if single mothers on welfare do indeed raise more surviving children than do those who are not reliant on state benefits, then this indicates that they have higher reproductive success, and hence, in the strict biological sense, greater fitness than their more financially independent, but less fecund, reproductive competitors. 

Therefore, far from being ‘deleterious’ in the biological sense, genes contributing to such behaviour are actually under positive selection, at least under current environmental conditions.  

However, even if such genes are not ‘deleterious’ in the strict biological sense, this does not necessarily mean that they are desirable in the moral sense, or in the sense of contributing to successful civilizations and societal advancement. To suggest otherwise would, of course, involve a version of the very appeal to nature fallacy or naturalistic fallacy that Singer is elsewhere emphatic in rejecting. 

Thus, although regarding certain character traits, and the genes that contribute to them, as undesirable does indeed involve an extra-scientific “value judgement”, this is not to say that the “value judgement” in question is necessarily mistaken or unwarranted. On the contrary, it means only that such a value judgement is, by its nature, a matter of morality, not of science. 

Thus, although science may be silent on the issue, virtually everyone would agree that some traits (e.g. generosity, health, happiness, conscientiousness) are more desirable than others (e.g. selfishness, laziness, depression, illness). Likewise, it is self-evident that the long-term unemployed are a net burden on society, and that a successful society cannot be formed of people unable or unwilling to work. 

As we have seen, Singer also questions whether there can be “a genetic component to something as nebulous as unemployment”. 

However, in the strict biological sense, unemployment probably is indeed partly heritable. So, incidentally, are road traffic accidents and our political opinions – because each reflects personality traits that are themselves heritable (e.g. risk-takers and people with poor physical coordination and slow reactions probably have more traffic accidents; and perhaps more compassionate people are more likely to favour leftist politics). 

Thus, while it may be unhelpful and misleading to talk of unemployment as itself heritable, nevertheless traits of the sort that likely contribute to unemployment (e.g. intelligence, conscientiousness, mental and physical illness) are indeed heritable.

Actually, however, the question of heritability, in the strict biological sense, is irrelevant. 

Thus, even if the reason that children from deprived backgrounds have worse life outcomes is entirely mediated by environmental factors (e.g. economic or cultural deprivation, or the bad parenting practices of low-SES parents), the case for restricting the reproductive rights of those people who are statistically prone to raise dysfunctional offspring remains intact. 

After all, children usually get both their genes and their parenting from the same set of parents – and this could be changed only by a massive, costly, and decidedly illiberal, policy of forcibly removing offspring from their parents.[11]

Therefore, so long as an association between parentage and social outcomes is established, the question of whether this association is biologically or environmentally mediated is simply beside the point, and the case for restricting the reproductive rights of certain groups remains intact.  

Of course, it is doubtful that welfare-dependent women do indeed financially benefit from giving birth to additional offspring. 

It is true that they may receive more money in state benefits if they have more dependent offspring to support and provide for. However, this may well be more than offset by the additional cost of supporting and providing for the dependent offspring in question, leaving the mother with less to spend on herself. 

However, even if the additional monies paid to mothers with dependent children are not sufficient as to provide a positive financial incentive to bearing additional children, they at least reduce the financial disincentives otherwise associated with rearing additional offspring.  

Therefore, given that, from an evolutionary perspective, women probably have an innate desire to bear additional offspring, it follows that rational fitness-maximizers would respond to the changed incentives represented by the welfare system by increasing their reproductive rate.[12]

Towards A New Socialist Eugenics?

If we accept Singer’s contention that an understanding of human nature can help show us how to achieve, but not choose, our ultimate political objectives, then eugenics could be used to help us achieve the goal of producing better people and hence, ultimately, better societies. 

Indeed, given that Singer seemingly concedes that human nature is presently incompatible with communist utopia, perhaps then the only way to revive the socialist dream of communism is to eugenically re-engineer human nature itself. 

Thus, it is perhaps no accident that, before World War Two, eugenics was a cause typically associated, not with conservatives, nor even, as today, with fascism and German National Socialism, but rather with the political left, while its main opponents were Christian conservatives.

Thus, early twentieth century socialist-eugenicists like H.G. Wells, Sidney Webb, Margaret Sanger and George Bernard Shaw may then have tentatively grasped what eludes contemporary leftists, Singer very much included – namely that re-engineering society necessarily requires as a prerequisite re-engineering Man himself.[13]

_________________________

Endnotes

[1] Indeed, the view that the poor and ill ought to be left to perish so as to further the evolutionary process seems to have been a marginal one even in its ostensible late nineteenth century heyday (see Bannister, Social Darwinism: Science and Myth in Anglo-American Social Thought). The idea always seems, therefore, to have been largely, if not wholly, a straw man.

[2] In this, the evolutionary psychologists are surely right. Thus, no one accuses biomedical researchers of somehow ‘justifying disease’ when they investigate how infectious diseases, in an effort to maximize their own reproductive success, spread from host to host. Likewise, nobody suggests that dying of a treatable illness is desirable, even though this may have been the ‘natural’ outcome before such ‘unnatural’ interventions as vaccination and antibiotics were introduced.

[3] The conventional notion that we can usefully conceptualize the political spectrum on a single-dimensional left-right axis is obviously preposterous. For one thing, there is, at the very least, a quite separate liberal-authoritarian dimension. However, even restricting our definition of the left-right axis to purely economic matters, it remains multi-factorial. For example, Hayek, in The Road to Serfdom, classifies fascism as a left-wing ideology, because it involved big government and a planned economy. However, most leftists would reject this definition, since the planned economy in question was designed, not to reduce economic inequalities, but rather, in the case of Nazi Germany at least, to fund and sustain an expanded military force, a war economy, external military conquest and grandiose vanity public works and architectural projects. The term ‘right-wing’ is even more problematic, including everyone from fascists, to libertarians, to religious fundamentalists. Yet a Christian fundamentalist who wants to outlaw pornography and abortion has little in common with either a libertarian who wants to decriminalize prostitution and child pornography, or with a eugenicist who wants to make abortions, for certain classes of person, compulsory. Yet all three are classed together as ‘right-wing’ even though they share no more in common with one another than any of them does with a raving unreconstructed Marxist.

[4] Thus, the British Conservative Party traditionally styled themselves one-nation conservatives, who looked to the interests of the nation as a whole, rather than what they criticized as the divisive ‘sectionalism’ of the trade union and labour movements, which favoured certain economic classes, and workers in certain industries, over others, just as contemporary leftists privilege the interests of certain ethnic, religious and culturally-defined groups (e.g. blacks, Muslims, feminists) over others (i.e. white males).

[5] Of course, some ‘unnatural’ interventions have positive health benefits. Obvious examples are modern medical treatments such as penicillin, chemotherapy and vaccination. However, these are the exceptions. They have been carefully selected and developed by scientists to have this positive effect, have gone through rigorous testing to ensure that their effects are indeed beneficial, and are generally beneficial only to people with certain diagnosed conditions. In contrast, recreational drug use almost invariably has a negative effect on health.
It might also be noted that, although their use by humans may be ‘unnatural’, the role of antibiotics in fighting bacterial infection is not itself ‘unnatural’, since antibiotics such as penicillin themselves evolved as a natural means by which one microorganism, namely mould, a form of fungi, fights another form of microorganism, namely bacteria.

[6] It is certainly possible for more altruistic cultures to exist. For example, the famous (and hugely wasteful) potlatch feasts of some Native American cultures, which involved great acts of both altruism and wanton waste, exemplify an extreme form of competitive altruism, analogous to conspicuous consumption, and may be explicable as a form of status display in accordance with Zahavi’s handicap principle. However, recognizing that such cultures exist does not easily translate into working out how to create or foster such cultures, let alone transform existing cultures in this direction.

[7] Indeed, by modern politically-correct standards, Marx was a rampant racist, not to mention an anti-Semite.

[8] The term Rawlsian is a reference to political theorist John Rawls’ version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the original position, where they would be unaware, not only of their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, of their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’. 

[9] As Warren Farrell documents in his excellent Why Men Earn More (which I have reviewed here, here and here), in the USA, women who have never married and have no children actually earn more than men who have never married and have no children and have done since at least the 1950s (Why Men Earn More: pxxi). More precisely, according to Farrell, never-married men without children on average earn only about 85% of their childless never-married female counterparts (Ibid: pxxiii).
The situation is similar in the UK. Thus, economist JR Shackleton reports:

“Women in the middle age groups who remain single earn more than middle-aged single males” (Should We Mind the Gap?: p30).

The reasons unmarried, childless women earn more than unmarried childless men are multifarious and include:

  1. Married women can afford to work less because they appropriate a portion of their husband’s income in addition to their own
  2. Married men and men with children are thus obliged to earn even more so as to financially support, not only themselves, but also their wife, plus any offspring;
  3. Women prefer to marry richer men and hence poorer men are more likely to remain single;
  4. Childcare duties undertaken by women interfere with their earning capacity.

[10] Incidentally, Browne has also published a more succinct summary of the biological factors underlying the pay-gap in the same ‘Darwinism Today’ series as Singer’s ‘A Darwinian Left’, namely Divided Labors: An Evolutionary View of Women at Work. However, much though I admire Browne’s work, this represents a rather superficial popularization of his research on the topic, and I would instead recommend Browne’s longer Biology at Work: Rethinking Sexual Equality (which I have reviewed here) for a more comprehensive treatment of the same, and related, topics. 

[11] A precedent for just such a programme, enacted in the name of socialism, albeit imposed consensually, was the communal rearing practices in Israeli Kibbutzim, since largely abandoned. Another suggestion along rather different lines comes from a rather different source, namely Adolf Hitler, who, believing that nature trumped nurture, is quoted in Mein Kampf as proposing: 

“The State must also teach that it is the manifestation of a really noble nature and that it is a humanitarian act worthy of all admiration if an innocent sufferer from hereditary disease refrains from having a child of his own but bestows his love and affection on some unknown child whose state of health is a guarantee that it will become a robust member of a powerful community” (quoted in: Parfrey 1987: p162). 

[12] Actually, it is not entirely clear that women do have a natural desire to bear offspring. Other species probably do not have any such natural desire. After all, since they almost certainly are not aware of the connection between sex and childbirth, such a desire would serve no adaptive purpose and hence would never evolve. All an organism requires is a desire for sex, combined perhaps with a tendency to care for offspring after they are born. (Indeed, in principle, a female does not even require a desire for sex, only a willingness to submit to the desire of a male for sex.) As Tooby and Cosmides emphasize: 

“Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.” 

There is no requirement for a desire for offspring as such. Nevertheless, anecdotal evidence of so-called broodiness, and the fact that most women do indeed desire children, despite the costs associated with raising children, suggests that, in human females, there is indeed some innate desire for offspring. Curiously, however, the topic of broodiness is not one that has attracted much attention among evolutionists.

[13] However, there is a problem with any such case for a ‘Brave New Socialist Eugenics’. Before the eugenic programme is complete, the individuals controlling eugenic programmes (be they governments or corporations) would still possess a more traditional human nature, and may therefore have less than altruistic motivations themselves. This seems to suggest then that, as philosopher John Gray concludes in Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here):  

“[If] human nature [is] scientifically remodelled… it will be done haphazardly, as an upshot of the struggles in the murky world where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

References  

Parfrey, A. (1987) ‘Eugenics: The Orphaned Science’, in Parfrey (ed.) Apocalypse Culture (New York: Amok Press). 

Trivers, R. (1971) ‘The evolution of reciprocal altruism’, Quarterly Review of Biology 46(1): 35-57. 

Vining, D. (1986) ‘Social versus reproductive success: The central theoretical problem of human sociobiology’, Behavioral and Brain Sciences 9(1): 167-187.

‘Alas Poor Darwin’: How Stephen Jay Gould Became an Evolutionary Psychologist and Steven Rose a Scientific Racist

Steven Rose and Hillary Rose (eds.), Alas Poor Darwin: Arguments against Evolutionary Psychology, London: Jonathan Cape, 2000.

‘Alas Poor Darwin: Arguments against Evolutionary Psychology’ is an edited book composed of multiple essays by different authors, from different academic fields, brought together ostensibly for the purpose of critiquing the emerging science of evolutionary psychology. This multiple authorship makes it difficult to provide an overall review, since the authors’ approaches to the topic differ markedly.  

Indeed, the editors admit as much, conceding that the contributors “do not speak with a single voice” (p9). This seems to be a tacit admission that they frequently contradict one another. 

Thus, for example, feminist biologist Anne Fausto-Sterling attacks evolutionary psychologists such as Donald Symons as sexist for arguing that the female orgasm is a mere by-product of the male orgasm and not an adaptation in itself, complaining that, according to Symons, women “did not even evolve their own orgasms” (p176). 

Yet, on the other hand, scientific charlatan Stephen Jay Gould criticizes evolutionary psychologists for the precise opposite offence, namely for (supposedly) viewing all human traits and behaviours as necessarily adaptations and ignoring the possibility of by-products (p103-4).

Meanwhile, some chapters are essentially irrelevant to the project of evolutionary psychology

For example, one, that of full-time ‘Dawkins-stalker’ (and part-time philosopher) Mary Midgley, critiques the quite separate approach of memetics.

Likewise, one singularly uninsightful chapter by ‘disability activist’ Tom Shakespeare and a colleague seems to say nothing with which the average evolutionary psychologist would likely disagree. Indeed, they seem to say little of substance at all. 

Only at the end of their chapter do they make the obligatory reference to just-so stories, and, more bizarrely, to the “single-gene determinism of the biological reductionists” (p203).

Yet, as anyone who has ever read any evolutionary psychology is surely aware, evolutionary psychologists, like other evolutionary biologists, emphasize to the point of repetitiveness that, while they may talk of ‘genes for’ certain characteristics as a form of scientific shorthand, nothing in their theories implies a one-to-one concordance between single genes and behaviours. 

Indeed, the irrelevance of some chapters to their supposed subject-matter (i.e. evolutionary psychology) makes one wonder whether some of the contributors to the volume have ever actually read any evolutionary psychology, or even any popularizations of the field – or whether their entire limited knowledge of the field was gained by reading critiques of evolutionary psychology by other contributors to the volume. 

Annette Karmiloff-Smith’s chapter, entitled ‘Why babies’ brains are not Swiss army knives’, is a critique of what she refers to as nativism, namely the belief that certain brain structures (or modules) are innately hardwired into the brain at birth.

This chapter, perhaps alone in the entire volume, may have value as a critique of some strands of evolutionary psychology.

Any analogy is imperfect; otherwise it would not be an analogy but rather an identity. However, given that even a modern micro-computer has been criticized as an inadequate model for the human brain, comparing human brains to Swiss army knives is obviously an analogy that should not be taken too far.

However, the nativist, massive modularity thesis that Karmiloff-Smith associates with evolutionary psychology, while indeed typical of what we might call the narrow ‘Tooby and Cosmides brand’ of evolutionary psychology, is rejected by many evolutionary psychologists (e.g. the authors of Human Evolutionary Psychology) and is not, in my view, integral to evolutionary psychology as a discipline or approach.

Instead, evolutionary psychology posits that behaviour has been shaped by natural selection to maximise the reproductive success of organisms in ancestral environments. It therefore allows us to bypass the proximate level of causation in the brain by recognising that, howsoever the brain is structured and produces behaviour in interaction with its environment, given that this brain evolved through a process of natural selection, it must be such as to produce behaviour which maximizes the reproductive success of its bearer, at least under ancestral conditions. (This is sometimes called the phenotypic gambit.) 

Stephen Jay Gould’s Deathbed Conversion?

Undoubtedly the best known, and arguably the most prestigious, contributor to the Roses’ volume is the famed palaeontologist and popular science writer Stephen Jay Gould. Indeed, such is his renown that Gould evidently did not feel it necessary to contribute an original chapter for this volume, instead simply recycling, and retitling, what appears to be a book review, previously published in The New York Review of Books (Gould 1997). 

This is a critical review of the book Darwin’s Dangerous Idea: Evolution and the Meanings of Life by philosopher Daniel Dennett, a book that is itself critical of Gould, making the review a form of academic self-defence. Neither the book, nor the review, deals primarily with the topic of evolutionary psychology, but rather with more general issues in evolutionary biology. 

Yet the most remarkable revelation of Gould’s chapter – especially given that it appears in a book ostensibly critiquing evolutionary psychology – is that the best-known and most widely-cited erstwhile opponent of evolutionary psychology is apparently no longer any such thing. 

On the contrary, he now claims in this essay: 

“‘Evolutionary psychology’… could be quite useful, if proponents would change their propensity for cultism and ultra-Darwinian fealty for a healthy dose of modesty” (p98). 

Indeed, even more remarkably, Gould even acknowledges: 

“The most promising theory of evolutionary psychology [is] the recognition that differing Darwinian requirements for males and females imply distinct adaptive behaviors centred on male advantage in spreading sperm as widely as possible… and female strategy for extracting time and attention from males… [which] probably does underlie some different, and broadly general, emotional propensities of human males and females” (p102). 

In other words, it seems that Gould now accepts the position of evolutionary psychologists in that most controversial of areas – innate sex differences. 

In this context, I am reminded of John Tooby and Leda Cosmides’s observation that critics of evolutionary psychology, in the course of their attacks on evolutionary psychology, often make concessions that, if made in any context other than that of an attack on evolutionary psychology, would cause them to themselves be labelled (and attacked) as evolutionary psychologists (Tooby and Cosmides 2000). 

Nevertheless, Gould’s backtracking is a welcome development, notwithstanding his usual arrogant tone.[1]

Given that he passed away only a couple of years after the current volume was published, one might almost, with only slight hyperbole, characterise his backtracking as a deathbed conversion. 

Ultra-Darwinism? Hyper-Adaptationism?

On the other hand, Gould’s criticisms of evolutionary psychology have not evolved at all but merely retread familiar gripes which evolutionary psychologists (and indeed so-called sociobiologists before them) dealt with decades ago. 

For example, he accuses evolutionary psychologists of viewing every human trait as adaptive and ignoring the possibility of by-products (p103-4). 

However, this claim is easily rebutted by simply reading the primary literature in the field. 

Thus, for example, Martin Daly and Margo Wilson view the high rate of abuse perpetrated by stepparents, not as itself adaptive, but as a by-product of the adaptive tendency for stepparents to care less for their stepchildren than they would for their biological children (see The Truth about Cinderella: which I have reviewed here).  

Similarly, Donald Symons argued that the female orgasm is not itself adaptive, but rather is merely a by-product of the male orgasm, just as male nipples are a non-adaptive by-product of female nipples (see The Evolution of Human Sexuality: which I have reviewed here).  

Meanwhile, Randy Thornhill and Craig Palmer are divided as to whether human rape is adaptive or merely a by-product of men’s greater desire for commitment-free promiscuous sex (A Natural History of Rape: which I have reviewed here). 

However, unlike Gould himself, evolutionary psychologists generally prefer the term ‘by-product’ to Gould’s unhelpful coinage ‘spandrel’. The former term is readily intelligible to any educated person fluent in English; Gould’s preferred term is needless obfuscation. 

As emphasized by Richard Dawkins, the invention of jargon to baffle non-specialists (e.g. referring to animal rape as “forced copulation” as the Roses advocate: p2) is the preserve of fields suffering from physics-envy, according to ‘Dawkins’ First Law of the Conservation of Difficulty’, whereby “obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity”. 

Untestable? Unfalsifiable?

Gould’s other main criticism of evolutionary psychology is his claim that sociobiological theories are inherently untestable and unfalsifiable – i.e. what Gould calls Just So Stories. 

However, one only has to flick through copies of journals like Evolution and Human Behavior, Human Nature, Evolutionary Psychology, Evolutionary Psychological Science, and many other journals that regularly publish research in evolutionary psychology, to see evolutionary psychological theories being tested, and indeed often falsified, every month. 

As evidence for the supposed unfalsifiability of sociobiological theories, Gould cites, not such primary research literature, but rather a work of popular science, namely Robert Wright’s The Moral Animal. 

Thus, he quotes Robert Wright as asserting in this book that our “sweet tooth” (i.e. taste for sugar), although maladaptive in the contemporary West because it leads to obesity, diabetes and heart disease, was nevertheless adaptive in ancestral environments (i.e. the EEA) where, as Wright put it, “fruit existed but candy didn’t” (The Moral Animal: p67). 

Yet, Gould protests indignantly, in support of this claim, Wright cites “no paleontological data about ancestral feeding” (p100). 

However, Wright is a popular science writer, not an academic researcher, and his book, The Moral Animal, for all its many virtues, is a work of popular science. As such, Wright, unlike someone writing a scientific paper, cannot be expected to cite a source for every claim he makes. 

Moreover, is Gould, a palaeontologist, really so ignorant of human history that he seriously believes we really need “paleontological data” in order to demonstrate that fruit is not a recent invention but that candy is? Is this really the best example he can come up with? 

From ‘Straw Men’ to Fabricated Quotations 

Rather than arguing against the actual theories of evolutionary psychologists, contributors to ‘Alas Poor Darwin’ instead resort to the easier option of misrepresenting these theories, so as to make the task of arguing against them less arduous. This is, of course, the familiar rhetorical tactic of constructing a straw man. 

In the case of co-editor Hilary Rose, this crosses the line from rhetorical deceit to outright defamation of character when, on p116, she falsely attributes to sociobiologist David Barash an offensive quotation committing the naturalistic fallacy by purporting to justify rape by reference to its adaptive function. 

Yet Barash simply does not say the words she attributes to him on the page she cites (or any other page) in Whisperings Within, the book from which the quotation purports to be drawn. (I know, because I own a copy of said book.) 

Rather, after a discussion of the adaptive function of rape in ducks, Barash merely tentatively ventures that, although vastly more complex, human rape may serve an analogous evolutionary function (Whisperings Within: p55). 

Is Steven Rose a Scientific Racist? 

As for Steven Rose, the book’s other editor: unlike Gould, he does not repent his sins and convert to evolutionary psychology. However, in maintaining his evangelical crusade against evolutionary psychology, sociobiology and all related heresies, Rose inadvertently undergoes a conversion that is, in many ways, even more dramatic and far-reaching in its consequences. 

To understand why, we must examine Rose’s position in more depth. 

Steven Rose, it goes almost without saying, is not a creationist. On the contrary, he is, in addition to his popular science writing and leftist political activism, a working neuroscientist who very much accepts Darwin’s theory of evolution

Rose is therefore obliged to reconcile his opposition to evolutionary psychology with the recognition that the brain is, like the body, a product of evolution. 

Ironically, this leads him to employ evolutionary arguments against evolutionary psychology

For example, Rose mounts an evolutionary defence of the largely discredited theory of group selection, whereby it is contended that traits sometimes evolve, not because they increase the fitness of the individual possessing them, but rather because they aid the survival of the group of which s/he is a member, even at a cost to the fitness of the individual themselves (p257-9). 

Indeed, Rose goes even further, asserting: 

“Selection can occur at even higher levels – that of the species for example” (p258). 

Similarly, in the book’s introduction, co-authored with his wife Hilary, the Roses dismiss the importance of the evolutionary psychological concept of the ‘environment of evolutionary adaptedness’ (or ‘EEA’).[2] 

This term refers to the idea that we evolved to maximise our reproductive success, not in the sort of post-industrial contemporary Western societies in which we now so often find ourselves, but rather in the sorts of environments in which our ancestors spent most of our evolutionary history, namely as Stone Age hunter-gatherers

On this view, much behaviour in modern Western societies is recognized as maladaptive, reflecting a mismatch between the environment to which we are adapted and that in which we find ourselves, simply because we have not had sufficient time to evolve psychological mechanisms for dealing with such ‘evolutionary novelties’ as contraception, paternity tests and chocolate bars. 

However, the Roses argue that evolution can occur much faster than this. Thus, they point to: 

“The huge changes produced by artificial selection by humans among domesticated animals – cattle, dogs and… pigeons – in only a few generations. Indeed, unaided natural selection in Darwin’s own Islands, the Galapagos, studied over several decades by the Grants is enough to produce significant changes in the birds’ beaks and feeding habits in response to climate change” (p1-2). 

Finally, Rose rejects the ‘modular’ model of the human mind championed by some evolutionary psychologists, whereby the brain is conceptualized as being composed of many separate domain-specific modules, each specialized for a particular class of adaptive problem faced by ancestral humans.  

As evidence against this thesis, Rose points to the absence of a direct one-to-one relationship between the modules postulated by evolutionary psychologists and actual regions of the brain as identified by neuroscientists (p260-2). 

“Whether such modules are more than theoretical entities is unclear, at least to most neuroscientists. Indeed evolutionary psychologists such as Pinker go to some lengths to make it clear that the ‘mental modules’ they invent do not, or at least do not necessarily, map onto specific brain structures” (p260). 

Thus, Rose protests: 

“Evolutionary psychology theorists, who… are not themselves neuroscientists, or even, by and large, biologists, show as great a disdain for relating their theoretical concepts to material brains as did the now discredited behaviorists they so despise” (p261). 

Yet there is an irony here – namely, in employing evolutionary arguments against evolutionary psychology (i.e. emphasizing the importance of group selection and of recently evolved adaptations), Rose, unlike many of his co-contributors, actually implicitly accepts the idea of an evolutionary approach to understanding human behaviour and psychology

In other words, if Rose is indeed right about these matters (group selection, recently evolved adaptations and domain general psychological mechanisms), this would suggest, not the abandonment of an evolutionary approach in psychology, but rather the need to develop a new evolutionary psychology that gives appropriate weight to such factors as group selection, recently evolved adaptations and domain general psychological mechanisms

Actually, however, as we will see, this ‘new’ evolutionary psychology may not be all that new, and Rose may find he has unlikely bedfellows in this endeavour. 

Thus, group selection – which tends to imply that conflict between groups such as races and ethnic groups is inevitable – has already been defended by race theorists such as Philippe Rushton and Kevin MacDonald

For example, Rushton, author of Race, Evolution and Behavior (which I have reviewed here), a notorious racial theorist known for arguing that black people are genetically predisposed to crime, promiscuity and low IQ, has also authored papers with titles like Genetic similarity, human altruism and group-selection (Rushton 1989) and Genetic similarity theory, ethnocentrism, and group selection (Rushton 1998), which defend and draw on the concept of group selection to explain such behaviours as racism and ethnocentrism.

Similarly, Kevin MacDonald, a former professor of psychology widely accused of anti-Semitism, has also championed the theory of group selection, and even developed a theory of cultural group selection to explain the survival and prospering of the Jewish people in diaspora in his book, A People That Shall Dwell Alone: Judaism as a Group Evolutionary Strategy (which I have reviewed here) and its more infamous, and theoretically flawed, sequel, The Culture of Critique (which I have reviewed here). 

Similarly, the claim that sufficient time has elapsed for significant evolutionary change to have occurred since the Stone Age (our species’ primary putative environment of evolutionary adaptedness) necessarily entails recognition that sufficient time has also elapsed for different human populations, including different races, to have significantly diverged, not just in their physiology, but also in their psychology, behaviour and cognitive ability.[3]

Finally, rejection of a modular conception of the human mind is consistent with an emphasis on what is perhaps the ultimate domain-general factor in human cognition, namely the general factor of intelligence, as championed by psychometricians, behavioural geneticists, intelligence researchers and race theorists such as Arthur Jensen, Richard Lynn, Chris Brand, Philippe Rushton and the authors of The Bell Curve (which I have reviewed here), who believe that individuals and groups differ in intellectual ability, that some individuals and groups are more intelligent across the board, and that these differences are partly genetic in origin.

Thus, Kevin MacDonald specifically criticizes mainstream evolutionary psychology for its failure to give due weight to the importance of domain-general mechanisms, in particular general intelligence (MacDonald 1991). 

Indeed, Rose himself elsewhere acknowledges that: 

“The insistence of evolutionary psychology theorists on modularity puts a strain on their otherwise heaven-made alliance with behaviour geneticists” (p261).[4]

Thus, in rejecting the tenets of mainstream evolutionary psychology, Rose inadvertently advocates, not so much a new form of evolutionary psychology, as rather an old form of scientific racism.

Of course, Steven Rose is not a racist. On the contrary, he has built a minor, if undistinguished, literary career out of smearing those other scientists whom he characterises as such.[5]

However, descending to Rose’s own level of argumentation (e.g. employing guilt by association and argumenta ad hominem), he is easily characterised as such. After all, his arguments against the concept of the EEA, and in favour of group-selectionism directly echo those employed by the very scientific racists (e.g. Rushton, Sarich) whom Rose has built a minor literary career out of defaming. 

Thus, by rejecting many claims of mainstream evolutionary psychologists – about the environment of evolutionary adaptedness, about group-selectionism and about modularity – Rose ironically plays into the hands of the very ‘scientific racists’ whom he purportedly opposes.

Thus, if his friend and comrade Stephen Jay Gould, in his own recycled contribution to ‘Alas Poor Darwin’, underwent a surprising but welcome deathbed conversion to evolutionary psychology, then Steven Rose’s transformation proves even more dramatic but rather less welcome. He might, moreover, find his new bedfellows less good company than he expected. 

Endnotes

[1] Throughout his essay, Gould never admits that he was wrong with respect to sociobiology, the then-emerging approach that came to dominate research in animal behaviour but which he and other leftist activists had rashly rejected. Rather, he seems to imply, even if he does not directly state, that it was his constructive criticism of sociobiology that led to advances in the field, and indeed to the development of evolutionary psychology out of human sociobiology. Yet, as anyone who followed the controversies over sociobiology and evolutionary psychology, and read Gould’s writings on these topics, will be aware, this is far from the case.
Gould, it ought to be noted in this context, was notorious for his arrogance and self-importance. For example, even his friend, colleague and collaborator Richard Lewontin, who shared Gould’s radical leftist politics, and his willingness to subordinate science to politics and misrepresent scientific findings for reasons of political expediency, acknowledged that “Steve… was preoccupied with the desire to be considered a very original and great evolutionary theorist”, and that this led him to exaggerate the importance of his own supposed scientific discoveries, especially so-called punctuated equilibrium. Hence my reference above to his “usual arrogant tone”. Thus, when Gould advises, in the passage quoted above, that evolutionary psychologists adopt “a healthy dose of modesty”, there is no little irony, and perhaps some projection, in this suggestion.

[2] Actually, the term environment of evolutionary adaptedness was coined, not by evolutionary psychologists, but rather by the psychoanalyst and attachment theorist John Bowlby.

[3] This is a topic addressed in such controversial recent books as Cochran and Harpending’s The 10,000 Year Explosion: How Civilization Accelerated Human Evolution and Nicholas Wade’s A Troublesome Inheritance: Genes, Race and Human History. It is also a central theme of Sarich and Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here). Papers discussing the significance of recent and divergent evolution in different populations for the underlying assumptions of evolutionary psychology include Winegard et al (2017) and Frost (2011). Evolutionary psychologists in the 1990s and 2000s, especially those affiliated with Tooby and Cosmides at UCSB, were perhaps guilty of associating the environment of evolutionary adaptedness too narrowly with Pleistocene hunter-gatherers on the African savanna. Thus, Tooby and Cosmides have written that “our modern skulls house a stone age mind”. However, while embracing this catchy if misleading soundbite, in the same article Tooby and Cosmides also write, more accurately:

“The environment of evolutionary adaptedness, or EEA, is not a place or time. It is the statistical composite of selection pressures that caused the design of an adaptation. Thus the EEA for one adaptation may be different from that for another” (Cosmides and Tooby 1997).

Thus, the EEA is not a single time and place that a researcher could visit with the aid of a map, a compass, a research grant and a time machine. Rather, it is a range of environments, and the relevant range may differ in respect of different adaptations.

[4] This reference to the “otherwise heaven-made alliance” between evolutionary psychologists and behavioural geneticists, incidentally, contradicts Rose‘s own acknowledgement, made just a few pages earlier, that:

“Evolutionary psychologists are often at pains to distinguish themselves from behaviour geneticists and there is some hostility between the two” (p248). 

As we have seen, consistency is not Steven Rose’s strong point. See Kanazawa (2004) for the alternative view that general intelligence is itself, paradoxically, a domain-specific module.

[5] I feel the need to emphasise that Rose is not a racist, not least for fear that he might sue me for defamation if I suggest otherwise. And if you think the idea of a professor suing some random, obscure blogger for a blog post is preposterous, then just remember – this is a man who once threatened legal action against the publishers of a comic book – yes, a comic book – and forced them to append an apology to some 10,000 copies of said comic book, for supposedly misrepresenting his views in a speech bubble, complaining “The author had literally [sic] put into my mouth a completely fatuous statement” (Brown 1999) – an ironic complaint given the fabricated quotation, of a genuinely defamatory nature, attributed to David Barash by Rose’s own wife Hilary in the current volume (see above), for which Rose himself, as co-editor, is vicariously responsible.
Rose, it should be noted, is an open and unabashed opponent of free speech. Indeed, Rose even stands accused by the German scientist, geneticist and intelligence researcher Volkmar Weiss of actively instigating the infamously repressive communist regime in East Germany to repress a courageous dissident scientist in that country (Weiss 1991). This is, moreover, an allegation that Rose has, to my knowledge, never denied or brought legal action in respect of, despite his known track record for threatening legal action against the publishers of comic books.

References 

Brown (1999) Origins of the specious, Guardian, November 30.
Frost (2011) Human nature or human natures? Futures 43(8): 740-748.
Gould (1997) Darwinian Fundamentalism, New York Review of Books, June 12.
Kanazawa (2004) General Intelligence as a Domain-Specific Module, Psychological Review 111(2): 512-523. 
MacDonald (1991) A perspective on Darwinian psychology: The importance of domain-general mechanisms, plasticity, and individual differences, Ethology and Sociobiology 12(6): 449-480.
Rushton (1989) Genetic similarity, human altruism and group-selection, Behavioral and Brain Sciences 12(3): 503-559.
Rushton (1998). Genetic similarity theory, ethnocentrism, and group selection. In I. Eibl-Eibesfeldt & F. K. Salter (Eds.), Indoctrinability, Ideology and Warfare: Evolutionary Perspectives (pp369-388). Oxford: Berghahn Books.
Tooby & Cosmides (1997) Evolutionary Psychology: A Primer, published at the Center for Evolutionary Psychology website, UCSB.
Tooby & Cosmides (2000) Unpublished Letter to the Editor of New Republic, published at the Center for Evolutionary Psychology website, UCSB.
Weiss (1991) It could be Neo-Lysenkoism, if there was ever a break in continuity! Mankind Quarterly 31: 231-253.
Winegard et al (2017) Human Biological and Psychological Diversity, Evolutionary Psychological Science 3: 159–180.

Edward O Wilson’s ‘Sociobiology: The New Synthesis’: A Book Much Read About, But Rarely Actually Read

Edward O Wilson, Sociobiology: The New Synthesis Cambridge: Belknap, Harvard 1975

Sociobiology – The Field That Dare Not Speak its Name? 

From its first publication in 1975, the reception accorded Edward O Wilson’s ‘Sociobiology: The New Synthesis’ has been divided. 

On the one hand, among biologists, especially those specializing in the fields of ethology, zoology and animal behaviour, the reception was almost universally laudatory. Indeed, my 25th Anniversary Edition even proudly proclaims on the cover that it was voted by officers and fellows of the Animal Behavior Society the most important book ever published on animal behaviour, supplanting even Darwin’s own seminal The Expression of the Emotions in Man and Animals. 

However, on the other side of the university campus, in social science departments, the reaction was very different. 

Indeed, the hostility that the book provoked was such that ‘sociobiology’ became almost a dirty word in the social sciences, and ultimately throughout the academy, to such an extent that ultimately the term fell into disuse (save as a term of abuse) and was replaced by largely synonymous euphemisms like behavioral ecology and evolutionary psychology.[1]

Sociobiology thus became, in academia, ‘the field that dare not speak its name’. 

Similarly, within the social sciences, even those researchers whose work carried on the sociobiological approach in all but name almost always played down the extent of their debt to Wilson himself. 

Thus, books on evolutionary psychology typically begin with disclaimers acknowledging that the sociobiology of Wilson was, of course, crude and simplistic, and that their own approach is, of course, infinitely more sophisticated. 

Indeed, reading some recent works on evolutionary psychology, one could be forgiven for thinking that evolutionary approaches to understanding human behaviour began around 1989 with the work of Tooby and Cosmides

Defining the Field 

What then does the word ‘sociobiology’ mean? 

Today, as I have mentioned, the term has largely fallen into disuse, save among certain social scientists who seem to employ it as a rather indiscriminate term of abuse for any theory of human behaviour that they perceive as placing too great a weight on hereditary or biological factors, including many areas of research only tangentially connected with sociobiology as Wilson originally conceived of it (e.g. behavioral genetics).[2]

The term ‘sociobiology’ was not Wilson’s own coinage. It had occasionally been used by biologists before, albeit rarely. However, Wilson was responsible for popularizing – and perhaps, in the long-term, ultimately unpopularizing it too, since, as we have seen, the term has largely fallen into disuse.[3] 

Wilson himself defined ‘sociobiology’ as: 

“The systematic study of the biological basis of all social behavior” (p4; p595). 

However, as the term was understood by other biologists, and indeed applied by Wilson himself, sociobiology came to be construed more narrowly. Thus, it was associated in particular with the question of why behaviours evolved and the evolutionary function they serve in promoting the reproductive success of the organism (i.e. just one of Tinbergen’s Four Questions). 

The hormonal, neuroscientific, or genetic causes of behaviour are just as much a part of “the biological basis of behavior” as are the ultimate evolutionary functions of behaviour. However, these lie outside the scope of sociobiology as the term was usually understood. 

Indeed, Wilson himself admitted as much, writing in ‘Sociobiology: The New Synthesis’ itself of how: 

“Behavioral biology… is now emerging as two distinct disciplines centered on neurophysiology and… sociobiology” (p6). 

Yet, in another sense, Wilson’s definition of the field was also too narrow. 

Thus, behavioural ecologists have come to study all forms of behaviour, not just social behaviour.  

For example, optimal foraging theory is a major subfield within behavioural ecology (the successor field to sociobiology), but concerns feeding behaviour, which may be an entirely solitary, non-social activity. 

Indeed, even some aspects of an organism’s physiology (as distinct from behaviour) have come to be seen as within the purview of sociobiology (e.g. the evolution of the peacock’s tail). 

A Book Much Read About, But Rarely Actually Read 

Sociobiology: The New Synthesis’ was a massive tome, numbering almost 700 pages. 

As Wilson proudly proclaims in his glossary, it was: 

“Written with the broadest possible audience in mind and most of it can be read with full understanding by any intelligent person whether or not he or she has had any formal training in science” (p577). 

Unfortunately, however, the sheer size of the work alone was probably enough to deter most such readers long before they reached p577 where these words appear. 

Indeed, I suspect the very size of the book was a factor in explaining the almost universally hostile reception that the book received among social scientists. 

In short, the book was so large that the vast majority of social scientists had neither the time nor the inclination to actually read it for themselves, especially since a cursory flick through its pages showed that the vast majority of them seemed to be concerned with the behaviour of species other than humans, and hence, as they saw it, of little relevance to their own work. 

Instead, therefore, their entire knowledge of sociobiology was filtered through to them via critiques of the approach authored by other social scientists, themselves mostly hostile to sociobiology, who presented a straw-man caricature of what sociobiology actually represented. 

Indeed, the caricature of sociobiology presented by these authors is so distorted that, reading some of these critiques, one often gets the impression that included among those social scientists not bothering to read the book for themselves were most of the social scientists nevertheless taking it upon themselves to write critiques of it. 

Meanwhile, the fact that the field was so obviously misguided (as indeed it often was in the caricatured form presented in the critiques) gave most social scientists yet another reason not to bother wading through its 700 or so pages for themselves. 

As a result, among sociologists, psychologists, anthropologists, public intellectuals, and other such ‘professional damned fools’, as well as the wider, semi-educated reading public, ‘Sociobiology: The New Synthesis’ became a book much read about – but rarely actually read (at least in full). 

As a consequence, as with other books falling into this category (e.g. the Bible and The Bell Curve), many myths have emerged regarding its contents that are flatly contradicted when one actually takes the time to read it for oneself. 

The Many Myths of Sociobiology 

Perhaps the foremost myth is that sociobiology was primarily a theory of human behaviour. In fact, as is revealed by even a cursory flick through the pages of Wilson’s book, sociobiology was, first and foremost, a theoretical approach to understanding animal behaviour. 

Indeed, Wilson’s decision to attempt to apply sociobiological theory to humans as well was, it seems, almost something of an afterthought, and necessitated by his desire to provide a comprehensive overview of the behaviour of all social animals, humans included. 
 
This is connected to the second myth – namely, that sociobiology was Wilson’s own theory. In fact, rather than a single theory, sociobiology is better viewed as a particular approach to a field of study, the field in question being animal behaviour. 
 
Moreover, far from being Wilson’s own theory, the major advances in the understanding of animal behaviour that gave rise to what came to be referred to as ‘sociobiology’ were made in the main by biologists other than Wilson himself.  
 
Thus, it was William Hamilton who first formulated inclusive fitness theory (which came to be known as the theory of kin selection); John Maynard Smith who first introduced economic models and game theory into behavioural biology; George C Williams who was responsible for displacing a crude group-selectionism in favour of a new focus on the gene itself as the principal unit of selection; while Robert Trivers was responsible for such theories as reciprocal altruism, parent-offspring conflict and differential parental investment theory. 
 
Instead, Wilson’s key role was to bring the various strands of the emerging field together, give it a name and, in the process, take far more than his fair share of the resulting flak. 
 
Thus, far from being a maverick theory of a single individual, what came to be known as ‘sociobiology’ was, if not based on accepted biological theory at the time of publication, then at least based on biological theory that came to be recognised as mainstream within a few years of its publication. 
 
Controversy attached almost exclusively to the application of these same principles to explain human behaviour. 

Applying Sociobiology to Humans 

In respect of Wilson’s application of sociobiological theory to humans, misconceptions again abound. 

For example, it is often asserted that Wilson only extended his theory to apply to human behaviour in his infamous final chapter, entitled, ‘Man: From Sociobiology to Sociology’. 

Actually, however, Wilson had discussed the possible application of sociobiological theory to humans several times in earlier chapters. 
 
Often, this was at the end of a chapter. For example, his chapter on “Roles and Castes” closes with a discussion of “Roles in Human Societies” (p312-3). Similarly, the final subsection of his chapter on “Aggression” is titled “Human Aggression” (p 254-5). 
 
Other times, however, humans get a mention in mid-chapter, as in Chapter Fifteen, which is titled ‘Sex and Society’, where Wilson discusses the association between adultery, cuckoldry and violent retribution in human societies, and rightly prophesies that “the implications for the study of humans” of Trivers’ theory of differential parental investment “are potentially great” (p327). 
 
Another misconception is that, while he may not have founded the approach that came to be known as sociobiology, it was Wilson who courted controversy, and bore most of the flak, because he was the first biologist brave, foolish, ambitious, farsighted or naïve enough to attempt to apply sociobiological theory to humans. 
 
Actually, however, this is untrue. For example, a large part of Robert Trivers’ seminal paper on reciprocal altruism published in 1971 dealt with reciprocal altruism in humans and with what are presumably specifically human moral emotions, such as guilt, gratitude, friendship and moralistic anger (Trivers 1971). 
 
However, Trivers’ work was published in the Journal of Theoretical Biology and therefore presumably never came to the attention of any of the leftist social scientists largely responsible for the furore over sociobiology, who, being of the opinion that biological theory was wholly irrelevant to human behaviour, and hence to their own field, were unlikely to be regular readers of the journal in question. 

Yet this is perhaps unfortunate since Trivers, unlike the unfortunate Wilson, had impeccable left-wing credentials, which may have deflected some of the overtly politicized criticism (and pitchers of water) that later came Wilson’s way. 

Reductionism vs Holism

Among the most familiar charges levelled against Wilson by his opponents within the social sciences, and by contemporary opponents of sociobiology and evolutionary psychology, alongside the familiar and time-worn charges of ‘biological determinism’ and ‘genetic determinism’, is that sociobiology is inherently reductionist, something which is, they imply, very much a bad thing. 
 
It is therefore something of a surprise to find, in the opening pages of ‘Sociobiology: The New Synthesis’, Wilson defending “holism”, as represented, in Wilson’s view, by the field of sociobiology itself, as against what he terms “the triumphant reductionism of molecular biology” (p7). 
 
This passage is particularly surprising for anyone who has read Wilson’s more recent work Consilience: The Unity of Knowledge, where he launches a trenchant, unapologetic and, in my view, wholly convincing defence of “reductionism” as representing, not only “the cutting edge of science… breaking down nature into its constituent components” but moreover “the primary and essential activity of science” and hence at the very heart of the scientific method (Consilience: p59). 

Thus, in a quotable aphorism, Wilson concludes: 

“The love of complexity without reductionism makes art; the love of complexity with reductionism makes science” (Consilience: p59). 

Of course, whether ‘reductionism’ is a good or bad thing, as well as the extent to which sociobiology can be considered ‘reductionist’, ultimately depends on precisely how we define ‘reductionism’. Moreover, ‘reductionism’, however defined, is surely a matter of degree. 

Thus, philosopher Daniel Dennett, in his book Darwin’s Dangerous Idea, distinguishes what he calls “greedy reductionism”, which attempts to oversimplify the world (e.g. Skinnerian behaviourism, which seeks to explain all behaviours in terms of conditioning), from “good reductionism”, which attempts to understand it in all its complexity (i.e. good science).

On the other hand, ‘holistic’ is a word most often employed in defence of wholly unscientific approaches, such as so-called holistic medicine, and, for me, the word itself is almost always something of a red flag. 

Thus, the opponents of sociobiology, in using the term ‘reductionist’ as a criticism, are rejecting the whole notion of a scientific approach to understanding human behaviour. In its place, they offer only a vague, wishy-washy, untestable and frankly anti-scientific obscurantism, whereby any attempt to explain behaviour in terms of causes and effects is dismissed as reductionism and determinism. 

Yet explaining behaviour, whether the behaviour of organisms, atoms, molecules or chemical substances, in terms of causes and effects is the very essence, if not the very definition, of science. 

In other words, determinism (i.e. the belief that events are determined by causes) is not so much a finding of science as its basic underlying assumption.[4]

Yet Wilson’s own championing of “holism” in ‘Sociobiology: The New Synthesis’ can be made sense of in its historical context. 

In other words, just as Wilson’s defence of reductionism in ‘Consilience’ was a response to the so-called sociobiology debates of the 1970s and 80s in which the charge of ‘reductionism’ was wielded indiscriminately by the opponents of sociobiology, so Wilson’s defence of holism in ‘Sociobiology: The New Synthesis’ itself must be understood in the context, not of the controversy that this work itself provoked (which Wilson was, at the time, unable to foresee), but rather of a controversy that preceded its publication. 

In particular, certain molecular biologists at Harvard, and perhaps elsewhere, led by the brilliant but abrasive molecular biologist James Watson, had come to the opinion that molecular biology was to be the only biology, and that traditional biology, fieldwork and experiments were positively passé. 

This controversy is rather less familiar to anyone outside of Harvard University’s biology department than the sociobiology debates, which not only enlisted many academics from outside of biology (e.g. psychologists, sociologists, anthropologists and even philosophers), but also spilled over into the popular media and even became politicized. 

However, within the ivory towers of Harvard University’s department of biology, this controversy seems to have been just as fiercely fought over.[5]

As is clear from ‘Sociobiology: The New Synthesis’, Wilson’s own envisaged “holism” was far from the wishy-washy obscurantism one usually associates with those championing a ‘holistic approach’; it was thoroughly scientific. 

Thus, in On Human Nature, Wilson’s follow-up to ‘Sociobiology: The New Synthesis’, in which he first concerned himself specifically with the application of sociobiological theory to humans, Wilson gives perhaps his most balanced description of the relative importance of reductionism and holism, and indeed of the nature of science, writing: 

“Raw reduction is only half the scientific process… the remainder consist[ing] of the reconstruction of complexity by an expanding synthesis under the control of laws newly demonstrated by analysis… reveal[ing] the existence of novel emergent phenomena” (On Human Nature: p11). 

It is therefore in this sense, and in contrast to the reductionism of molecular biology, that Wilson saw sociobiology as ‘holistic’. 

Group Selection? 

One of the key theoretical breakthroughs that formed the basis for what came to be known as sociobiology was the discrediting of group-selectionism, largely thanks to the work of George C Williams, whose ideas were later popularized by Richard Dawkins in The Selfish Gene (which I have reviewed here).[6] 
 
A focus on the individual, or even the gene, as the primary, or indeed the only, unit of selection, came to be viewed as an integral component of the sociobiological worldview. Indeed, it was once seriously debated on the pages of the newsletter of the European Sociobiological Society whether one could truly be both a ‘sociobiologist’ and a ‘group-selectionist’ (Price 1996). 

It is therefore something of a surprise to discover that the author of ‘Sociobiology: The New Synthesis’, responsible for christening the emerging field, was himself something of a group-selectionist. 

Wilson has recently ‘come out’ as a group-selectionist by co-authoring a paper concerning the evolution of eusociality in ants (Nowak et al 2010). However, reading ‘Sociobiology: The New Synthesis’ leads one to suspect that Wilson had been a closet, or indeed a semi-out, group-selectionist all along. 

Certainly, Wilson repeats the familiar arguments against group-selectionism popularised by Richard Dawkins in The Selfish Gene (which I have reviewed here), but first articulated by George C Williams in Adaptation and Natural Selection (see p106-7). 

However, although Wilson offers no rebuttal to these arguments, this does not prevent him from invoking, or at least proposing, group-selectionist explanations for behaviours elsewhere in the remainder of the book (e.g. p275). 

Moreover, Wilson concludes: 

“Group selection and higher levels of organization, however intuitively implausible… are at least theoretically possible under a wide range of conditions” (p30). 

 
Thus, it is clear that, unlike, say, Richard Dawkins, Wilson did not view group-selectionism as a terminally discredited theory. 

Man: From Sociobiology to Sociology… and Perhaps Evolutionary Psychology

What then of Wilson’s final chapter, entitled ‘Man: From Sociobiology to Sociology’? 

It was, of course, the only one to focus exclusively on humans, and, of course, the chapter that attracted by far the lion’s share of the outrage and controversy that soon ensued. 

Yet, reading it today, over forty years after it was first written, it is, I feel, rather disappointing. 

Let me be clear, I went in very much wanting to like it. 

After all, Wilson’s general approach was basically right. Humans, like all other organisms, have evolved through a process of natural selection. Therefore, their behaviour, no less than their physiology, or the physiology or behaviour of non-human organisms, must be understood in the light of this fact. 

Moreover, not only were almost all of the criticisms levelled at Wilson misguided, wrongheaded and unfair, but they often bordered upon persecution as well.

The most famous example of this leftist witch-hunting came when, during a speech at the annual meeting of the American Association for the Advancement of Science, Wilson was drenched with a pitcher of water by leftist demonstrators. 

However, this was far from an isolated event. For example, an illustration from the book The Moral Animal shows a student placard advising protesters to “bring noisemakers” in order to deliberately disrupt one of Wilson’s speaking engagements (The Moral Animal: illustration p341). 

In short, Wilson seems to have been an early victim of what would today be called ‘deplatforming’ and ‘cancel culture’, phenomena that long predated the coining of these terms. 

Thus, one is tempted to see Wilson in the role of a kind of modern Galileo, being, like Galileo, persecuted for his scientific theories, which, like those of Galileo, turned out to be broadly correct. 

Moreover, Wilson’s views were, in some respects, analogous to those of Galileo. Both disputed prevailing orthodoxies in such a way as to challenge the view that humans were somehow unique or at the centre of things, Galileo by suggesting the earth was not at the centre of the solar system, and Wilson by showing that human behaviour was not all that different from that of other animals.[7]

Unfortunately, however, the actual substance of Wilson’s final chapter is rather dated.

Inevitably, any science book will be dated after forty years. However, while this is also true of the book as a whole, it seems especially true of this last chapter, which bears little resemblance to the contents of a modern textbook on evolutionary psychology. 

This is perhaps inevitable. While the application of sociobiological theory to understanding and explaining the behaviour of other species was already well underway, the application of sociobiological theory to humans was, the pioneering work of Robert Trivers on reciprocal altruism notwithstanding, still very much in its infancy. 

Yet, while the substance of the chapter is dated, the general approach was spot on.

Indeed, even some of the advances claimed by evolutionary psychologists as their own were actually anticipated by Wilson. 

Thus, Wilson recognises:

“One of the key questions [in human sociobiology] is to what extent the biogram represents an adaptation to modern cultural life and to what extent it is a phylogenetic vestige” (p458). 

He thus anticipates the key evolutionary psychological concept of the Environment of Evolutionary Adaptedness or EEA, whereby it is theorized that humans are evolutionarily adapted, not to the modern post-industrial societies in which so many of us today find ourselves, but rather to the ancestral environments in which our behaviours first evolved.

Wilson proposes to examine human behavior from the disinterested perspective of “a zoologist from another planet”, and concludes: 

“In this macroscopic view the humanities and social sciences shrink to specialized branches of biology” (p547). 

Thus, for Wilson: 

“Sociology and the other social sciences, as well as the humanities, are the last branches of biology waiting to be included in the Modern Synthesis” (p4). 

Indeed, the idea that the behaviour of a single species is alone exempt from principles of general biology, to such an extent that it must be studied in entirely different university faculties by entirely different researchers, the vast majority with little or no knowledge of general biology, nor of the methods and theory of researchers studying the behaviour of all other organisms, reflects an indefensible anthropocentrism

However, despite the controversy these pronouncements provoked, Wilson was actually quite measured in his predictions, and even urged caution, writing: 

“Whether the social sciences can be truly biologicized in this fashion remains to be seen” (p4). 

The evidence of the ensuing forty years suggests, in my view, that the social sciences can indeed be, and are well on the way to being, as Wilson puts it, ‘biologicized’. The only stumbling block has proven to be social scientists themselves, who have, in some cases, proven resistant. 

‘Vaunting Ambition’? 

Yet, despite these words of caution, the scale of Wilson’s intellectual ambition can hardly be exaggerated. 

First, he sought to synthesize the entire field of animal behavior under the rubric of sociobiology and in the process produce the ‘New Synthesis’ promised in the subtitle, by analogy with the Modern Synthesis of Darwinian evolution and Mendelian genetics that forms the basis for the entire field of modern biology. 

Then, in a final chapter, apparently as almost something of an afterthought, he decided to add human behaviour into his synthesis as well. 

This meant, not just providing a new foundation for a single subfield within biology (i.e. animal behaviour), but for several whole disciplines formerly virtually unconnected to biology – e.g. psychology, cultural anthropology, sociology, economics. 

Oh yeah… and moral philosophy and perhaps epistemology too. I forgot to mention that. 

From Sociobiology to… Philosophy?

Indeed, Wilson’s forays into philosophy proved even more controversial than those into social science. Though limited to a few paragraphs in his first and last chapter, they were among the most widely quoted, and critiqued, in the whole book. 

Not only were opponents of sociobiology (and philosophers) predictably indignant, but even those few researchers bravely taking up the sociobiological gauntlet, and even applying it to humans, remained mostly skeptical. 

In proposing to reconstruct moral philosophy on the basis of biology, Wilson was widely accused of committing what philosophers call the naturalistic fallacy or appeal to nature fallacy. 

This refers to the principle that, if a behaviour is natural, this does not necessarily make it right, any more than the fact that dying of tuberculosis is natural means that it is morally wrong to treat tuberculosis with such ‘unnatural’ interventions as vaccination or antibiotics. 

In general, evolutionary psychologists have been only too happy to reiterate the sacrosanct inviolability of the fact-value chasm, not least because it allowed them to investigate the evolutionary function of such morally dubious, or indeed morally reprehensible, behaviours as infidelity, rape, war and child abuse, while denying that they thereby provide a justification for the behaviours in question. 

Yet this raises the question: if we cannot derive values from facts, whence are values to be derived? Only from other values? If so, whence are our ultimate moral values, from which all others are derived, themselves derived? Must they simply be taken on faith? 

Wilson has recently controversially argued, in his excellent Consilience: The Unity of Knowledge, that, in this context: 

“The posing of the naturalistic fallacy is itself a fallacy” (Consilience: p273). 

Leaving aside this controversial claim, it is clear that his point in ‘Sociobiology’ is narrower. 

In short, Wilson seems to be arguing that, in contemplating the appropriateness of different theories of prescriptive ethics (e.g. utilitarianism, Kantian deontology), moral philosophers consult “the emotional control centers in the hypothalamus and limbic system of the brain” (p3). 

Yet these same moral philosophers take these emotions largely for granted. They treat the brain as a “black box” rather than a biological entity the nature of which is itself the subject of scientific study (p562). 

Yet, despite the criticism Wilson’s suggestion provoked among many philosophers, the philosophical implications of recognising that moral intuitions are themselves a product of the evolutionary process have since become a serious and active area of philosophical enquiry. Indeed, among the leading pioneers in this field has been the philosopher of biology Michael Ruse, not least in collaboration with Wilson himself (Ruse & Wilson 1986). 

Yet if moral philosophy must be rethought in the light of biology and the evolved nature of our psychology, then the same is also surely true of arguably the other main subfield of contemporary philosophy – namely epistemology.  

Yet Wilson’s comments regarding the relevance of sociobiological theory to epistemology are even briefer than the few sentences he devotes in his opening and closing chapters to moral philosophy, being restricted to less than a sentence – a mere five-word parenthesis in a sentence primarily discussing moral philosophy and philosophers (p3). 

However, what humans are capable of knowing is, like morality, ultimately a product of the human brain – a brain which is itself a biological entity that evolved through a process of natural selection. 

The brain, then, is designed not for discovering ‘truth’, in some abstract, philosophical sense, but rather for maximizing the reproductive success of the organism whose behaviour it controls and directs. 

Of course, for most purposes, natural selection would likely favour psychological mechanisms that produce, if not ‘truth’, then at least a reliable model of the world as it actually operates, so that an organism can modify its behaviour in accordance with this model, in order to produce outcomes that maximize its inclusive fitness under these conditions. 

However, it is at least possible that there are certain phenomena that our brains are, through the very nature of their wiring and construction, incapable of fully understanding (e.g. quantum mechanics or the hard question of consciousness), simply because such understanding was of no utility in helping our ancestors to survive and reproduce in ancestral environments. 

The importance of evolutionary theory to our understanding of epistemology and the limits of human knowledge is, together with the relevance of evolutionary theory to moral philosophy, a theme explored in philosopher Michael Ruse’s book, Taking Darwin Seriously, and is also the principal theme of such recent works as The Case Against Reality: Why Evolution Hid the Truth from Our Eyes by Donald D Hoffman. 

Dated? 

Is ‘Sociobiology: The New Synthesis’ worth reading today? At almost 700 pages, it represents no idle investment of time. 

Wilson is a wonderful writer, even in a purely literary sense, and has the unusual honour, for a working scientist, of being a two-time Pulitzer Prize winner. However, apart from a few provocative sections in the opening and closing chapters, ‘Sociobiology: The New Synthesis’ is largely written in the form of a student textbook, and is not a book one is likely to read on account of its literary merits alone. 

As a textbook, Sociobiology is obviously dated. Indeed, the extent to which it has dated is an indication of the success of the research programme it helped inspire. 

Thus, one of the hallmarks of true science is the speed at which cutting-edge work becomes obsolete.  

Religious believers still cite holy books written millennia ago, while adherents of pseudo-sciences like psychoanalysis and Marxism still pore over the words of Freud and Marx. 

However, the scientific method is a cumulative process based on falsificationism and is moreover no respecter of persons.

Scientific works become obsolete almost as fast as they are published. Modern biologists only rarely cite Darwin. 

If you want a textbook summary of the latest research in sociobiology, I would instead recommend the latest edition of Animal Behavior: An Evolutionary Approach or An Introduction to Behavioral Ecology; or, if your primary interest is human behavior, the latest edition of David Buss’s Evolutionary Psychology: The New Science of the Mind. 

The continued value of ‘Sociobiology: The New Synthesis’ lies not in the field of science, but in that of the history of science. In this field, it will remain a landmark work in the history of human thought, for both the controversy, and the pioneering research, that followed in its wake. 

Endnotes

[1] Actually, ‘evolutionary psychology’ is not quite a synonym for ‘sociobiology’. Whereas the latter field sought to understand the behaviour of all animals, if not all organisms, the term ‘evolutionary psychology’ is usually employed only in relation to the study of human behaviour. It would be more accurate, then, to say ‘evolutionary psychology’ is a synonym, or euphemism, for ‘human sociobiology’.

[2] Whereas behavioural geneticists focus on heritable differences between individuals within a single population, evolutionary psychologists largely focus on behavioural adaptations that are presumed to be pan-human and universal. Indeed, it is often argued that there is likely to be minimal heritable variation in human psychological adaptations, precisely because such adaptations have been subject to such strong selection pressure as to weed out suboptimal variation, such that only the optimal genotype remains. On this view, substantial heritable variation is found only in respect of traits that have not been subject to intense selection pressure (see Tooby & Cosmides 1990). However, this fails to take into account such phenomena as frequency-dependent selection and other forms of polymorphism, whereby different individuals within a breeding population adopt, for example, quite different reproductive strategies. It is also difficult to reconcile with the finding of behavioural geneticists that there is substantial heritable variation in intelligence as between individuals, despite the fact that the expansion of human brain-size over the course of evolution suggests that intelligence has been subject to strong selection pressures.

[3] For example, in 1997, the journal Ethology and Sociobiology, which had by then become, and remains, the leading scholarly journal in the field of what would then have been termed ‘human sociobiology’, and now usually goes by the name of ‘evolutionary psychology’, changed its name to Evolution and Human Behavior.

[4] An irony is that, while science is built on the assumption of determinism, namely the assumption that observed phenomena have causes that can be discovered by controlled experimentation, one of the findings of science is that, at least at the quantum level, determinism is actually not true. This is among the reasons why quantum theory is paradoxically popular among people who don’t really like science (and who, like virtually everyone else, don’t really understand quantum theory). Thus, Richard Dawkins has memorably parodied quantum mysticism as based on the reasoning that: 

“Quantum mechanics, that brilliantly successful flagship theory of modern science, is deeply mysterious and hard to understand. Eastern mystics have always been deeply mysterious and hard to understand. Therefore, Eastern mystics must have been talking about quantum theory all along.”

[5] Indeed, Wilson and Watson seem to have shared a deep personal animosity for one another, Wilson once describing Watson, with whom he later reconciled, as “the most unpleasant human being I had ever met” – see Wilson’s autobiography, Naturalist. A student of Watson’s describes how, when Wilson was granted tenure at Harvard before Watson:

“It was a ‘big, big day in our corridor’. Watson could be heard coming up the stairwell to the third floor shouting ‘fuck, fuck, fuck’” (Watson and DNA: p98).  

Wilson’s description of Watson’s personality in his memoir is interesting in the light of the later controversy regarding the latter’s comments on the economic implications of racial differences in intelligence, with Wilson writing: 

“Watson, having risen to historic fame at an early age, became the Caligula of biology. He was given license to say anything that came to his mind and expect to be taken seriously. And unfortunately, he did so, with a casual and brutal offhandedness.” 

In contrast, geneticist David Reich suggests that Watson’s abrasive personality predated his scientific discoveries and may even have been partly responsible for them, writing: 

“His obstreperousness may have been important to his success as a scientist” (Who We Are and How We Got Here: p263).

[6] Group selection has recently, however, enjoyed something of a resurgence in the form of multi-level selection theory. Wilson himself is very much a supporter of this trend.

[7] Of course, it goes without saying that the persecution to which Wilson was subjected was as nothing compared to that to which Galileo was subjected (see my post, A Modern McCarthyism in Our Midst). 

References 

Nowak et al (2010) The evolution of eusociality Nature 466:1057–1062. 

Price (1996) ‘In Defence of Group Selection’, European Sociobiological Society Newsletter 42, October 1996. 

Ruse & Wilson (1986) Moral Philosophy as Applied Science, Philosophy 61(236):173-192. 

Tooby & Cosmides (1990) On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation, Journal of Personality 58(1):17-67. 

Trivers (1971) The evolution of reciprocal altruism. Quarterly Review of Biology 46:35–57 

Donald Symons’ ‘The Evolution of Human Sexuality’: A Founding Work of Modern Evolutionary Psychology

The Evolution of Human Sexuality by Donald Symons (Oxford University Press 1980). 

Research over the last four decades in the field that has come to be known as evolutionary psychology has focused disproportionately on mating behaviour. Geoffrey Miller (1998) has even argued that it is the theory of sexual selection rather than that of natural selection which, in practice, guides most research in this field. 

This does not reflect merely the prurience of researchers. Rather, given that reproductive success is the ultimate currency of natural selection, mating behaviour is, perhaps along with parental investment, the form of behaviour most directly subject to selective pressures.

Almost all of this research traces its ancestry ultimately to Donald Symons’ ‘The Evolution of Human Sexuality’. Indeed, much of it was explicitly designed to test claims and predictions formulated by Symons himself in this very book.

Age Preferences

For example, in his discussion of the age at which women are perceived as most attractive by males, Symons formulated two alternative hypotheses. 

First, if human evolutionary history were characterized by fleeting one-off sexual encounters (i.e. one-night stands, casual sex and hook-ups), then, he reasoned, men would have evolved to find women most attractive when the latter are at the age of their maximum fertility. 

For women, fertility is said to peak around the mid-twenties since, although women still in their teens have high pregnancy rates, they also experience a greater risk of birth complications. 

However, if human evolutionary history were characterized instead by long-term pair bonds, then men would have evolved to be maximally attracted to somewhat younger women (i.e. those at the beginning of their reproductive careers), so that, by entering a long-term relationship with the woman at this time, a male is potentially able to monopolize her entire lifetime reproductive output (p189). 

More specifically, males would have evolved to prefer females, not of maximal fertility, but rather of maximal reproductive value, a term borrowed from demography and population genetics which refers to a person’s expected future reproductive output given their current age. Unlike fertility, a woman’s reproductive value peaks around her mid- to late-teens.  
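For readers unfamiliar with the term, reproductive value has a standard formal definition in demography, due originally to R.A. Fisher. As a sketch, this is the textbook formula, not anything quoted from Symons:

```latex
% Fisher's reproductive value of an individual of age x:
%   l(x) = survivorship (probability of surviving from birth to age x)
%   m(y) = fecundity (expected number of offspring produced at age y)
%   r    = the population's intrinsic rate of increase
v(x) = \frac{e^{rx}}{l(x)} \int_{x}^{\infty} e^{-ry}\, l(y)\, m(y)\, dy
```

In a stationary population (r = 0), this reduces to an individual’s expected future offspring given survival to age x, which is why reproductive value peaks at around the age reproduction begins and declines thereafter.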

On the basis of largely anecdotal evidence, Symons concludes that human males have evolved to be most attracted to females of maximal reproductive value rather than maximal fertility.  

Subsequent research designed to test between Symons’s rival hypotheses has largely confirmed his speculative hunch that it is younger females in their mid- to late-teens who are perceived by males as most attractive (e.g. Kenrick and Keefe 1992). 

Why Average is Attractive

Symons is also credited as the first person to recognize that a major criterion of attractiveness is, paradoxically, averageness, or at least the first to recognize the significance of, and possible evolutionary explanation for, this discovery.[1] Thus, Symons argues that: 

“[Although] health and status are unusual in that there is no such thing as being too healthy or too high ranking… with respect to most anatomical traits, natural selection produces the population mean” (p194). 

On this view, deviations from the population mean are interpreted as the result of deleterious mutations or developmental instability, and hence bad genes.[2]

Concealed Ovulation

Support has even emerged for some of Symons’ more speculative hunches.

For example, one of Symons’ two proposed scenarios for the evolution of concealed ovulation, in which he professed “little confidence” (p141), was that this had evolved so as to impede male mate-guarding and enable females to select a biological father for their offspring different from their husbands (p139-141).

Consistent with this theory, studies have found that women’s mate preferences vary throughout their menstrual cycle in a manner compatible with a so-called ‘dual mating strategy’, preferring males evidencing a willingness to invest in offspring at most times, but, when at their most fertile, preferring characteristics indicative of genetic quality (e.g. Penton-Voak et al 1999). 

Meanwhile, a questionnaire distributed via a women’s magazine found that women engaged in extra-marital affairs do indeed report engaging in ‘extra-pair copulations’ (EPCs) at times likely to coincide with ovulation (Bellis and Baker 1990).[3]

The Myth of Female Choice

Interestingly, Symons even anticipated some of the mistakes evolutionary psychologists would be led into.

Thus, he warns that researchers in modern western societies may be prone to overestimate the importance of female choice as a factor in human evolution, because, in their own societies, this is a major factor, if not the major factor, in determining marriage and sexual and romantic relationships (p203).[4]

However, in ancestral environments (i.e. what evolutionary psychologists now call the Environment of Evolutionary Adaptedness or EEA) arranged marriages were likely the norm, as they are in most premodern cultures around the world today (p168).[5]

Thus, Symons concludes: 

“There is no evidence that any features of human anatomy were produced by intersexual selection [i.e. female choice]. Human physical sex differences are explained most parsimoniously as the outcome of intrasexual selection (the result of male-male competition)” (p203). 

Thus, human males have no obvious analogue of the peacock’s tail, but they do have substantially greater levels of upper-body strength and violent aggression as compared to females.[6]

This was a warning almost entirely ignored by subsequent generations of researchers before being forcefully reiterated by Puts (2010).

Homosexuality as a ‘Test-Case’

An idea of the importance of Symons’s work can be ascertained by comparing it with contemporaneous works addressing the same subject-matter.

Edward O Wilson’s On Human Nature was first published in 1978, only a year before Symons’s ‘The Evolution of Human Sexuality’. 

However, whereas Symons’s book set out much of the theoretical basis for what would become the modern science of evolutionary psychology, Wilson’s chapter on “Sex” has dated rather less well, and a large portion of the chapter is devoted to introducing a now faintly embarrassing theory of the evolution of homosexuality which has subsequently received no empirical support (see Bobrow & Bailey 2001).[7]

In contrast, Symons’s own treatment of homosexuality is innovative. It is also characteristic of his whole approach and illustrates why ‘The Evolution of Human Sexuality‘ has been described by David Buss as “the first major treatise on evolutionary psychology proper” (Handbook of Evolutionary Psychology: p251).

Rather than viewing all behaviours as necessarily adaptive (as critics of evolutionary psychology, such as Stephen Jay Gould, have often accused sociobiologists of doing),[8] Symons instead focuses on admittedly non-adaptive (or, indeed, even maladaptive) behaviours, not because he believes them to be adaptive, but rather because they provide a unique window on the nature of human sexuality.

Accordingly, Symons does not concern himself with how homosexuality evolved, implicitly viewing it as a rare and maladaptive malfunctioning of normal sexuality. Yet the behaviour of homosexuals is of interest to Symons because it provides a window on the nature of male and female sexuality as it manifests itself when freed from the constraints imposed by the conflicting desires of the opposite sex.

On this view, the rampant promiscuity manifested by many homosexual men (e.g. cruising and cottaging in bathhouses and public lavatories, or Grindr hookups) reflects the universal male desire for sexual variety when freed from the constraints imposed by the conflicting desires of women. 

This desire for sexual variety is, of course, obviously reproductively unproductive among homosexual men themselves. However, it evolved because it enhanced the reproductive success of heterosexual men by motivating them to attempt to mate with multiple females and thereby father multiple offspring.

Thus, a powerful ruler with a large harem, like Ismail ‘the Bloodthirsty’ of Morocco, could reputedly father as many as 888 offspring.

In contrast, burdened with pregnancy and lactation, women’s potential reproductive rate is more tightly constrained than that of men. They therefore have little to gain reproductively by mating with multiple males, since they can usually gestate, and nurse, only one offspring at a time.

It is therefore notable that, among lesbians, there is little evidence of the sort of rampant promiscuity common among gay men. Instead, lesbian relationships seem to be characterized by much the same features as heterosexual coupling (i.e. long-term pair-bonds).

The similarity of heterosexual coupling to that of lesbians, and the striking contrast with that of male homosexuals, suggests that it is women, not men, who exert decisive influence in dictating the terms of heterosexual coupling.[9]

Thus, Symons reports:

“There is enormous cross-cultural variation in sexual customs and laws and the extent of male control, yet nowhere in the world do heterosexual relations begin to approximate those typical of homosexual men. This suggests that, in addition to custom and law, heterosexual relations are structured to a substantial degree by the nature and interests of the human female” (p300). 

This conclusion is, of course, diametrically opposite to the feminist contention that it is men who dictate the terms of heterosexual coupling and for whose exclusive benefit such relationships are structured.

It also suggests, again contrary to feminist assumptions of male dominance, that most men are ultimately frustrated in achieving their sexual ambitions to a far greater extent than are most women. 

Thus, Symons concludes: 

“The desire for sexual variety dooms most human males to a lifetime of unfulfilled longing” (p228). 

Here, Symons anticipates Camille Paglia who was later to famously observe: 

“Men know they are sexual exiles. They wander the earth seeking satisfaction, craving and despising, never content. There is nothing in that anguished motion for women to envy” (Sexual Personae: p19). 

Criticisms of Symons’s Use of Homosexuality as a Test-Case

There is, however, a potential problem with Symons’s use of homosexual behaviour as a window onto the nature of male and female sexuality as they manifest themselves when freed from the conflicting desires of the opposite sex. The whole analysis rests on a questionable premise – namely that homosexuals are, their preference for same-sex partners aside, otherwise similar, if not identical, to heterosexuals of their own sex in their psychology and sexuality.

Symons defends this assumption, arguing: 

“There is no reason to suppose that homosexuals differ systematically from heterosexuals in any way other than their sexual object choice” (p292). 

Indeed, in some respects, Symons seems to see even “sexual object choice” as analogous among homosexuals and heterosexuals of the same sex.

For example, he observes that, unlike women, both homosexual and heterosexual men tend to evaluate prospective mates primarily on the basis of their physical appearance and youthfulness (p295). 

Thus, in contrast to the failure of periodicals featuring male nudes to attract a substantial female audience (see below), Symons notes the existence of a market for gay pornography parallel in most respects to heterosexual porn – i.e. featuring young, physically attractive models in various states of undress (p301).

This, of course, contradicts the feminist notion that men are led to ‘objectify’ women only due to the sexualized portrayal of the latter in the media.

Instead, Symons concludes: 

“That homosexual men are at least as likely as heterosexual men to be interested in pornography, cosmetic qualities and youth seems to me to imply that these interests are no more the result of advertising than adultery and alcohol consumption are the result of country and western music” (p304).[10] 

However, this assumption of the fundamental similarity of heterosexual and homosexual male psychology has been challenged by David Buller in his book, Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature.

Buller cites evidence that male homosexuals are ‘feminized’ in many aspects of their behaviour and morphology.

For example, one study reported that, despite being stereotypically more likely to ‘hit the gym’, gay men nevertheless had relatively less muscular development than heterosexual men, and lower shoulder-to-hip ratios, on average (Evans 1972). Another particularly interesting recent study found that male homosexuals have more female-typical occupational interests than do heterosexual males (Ellis & Ratnasingam 2012).

Likewise, one of the few consistent early correlates of homosexuality is gender non-conformity in childhood, and some evidence (e.g. digit ratios, the fraternal birth order effect) has been interpreted to suggest that the level of prenatal exposure to masculinizing androgens (e.g. testosterone) in utero affects sexual orientation (see Born Gay: The Psychobiology of Sex Orientation).

Indeed, Symons himself mentions the evidence of an association between homosexuality and levels of masculinizing androgens in utero (albeit in respect of lesbians rather than of male homosexuality) just a few pages before his discussion of the promiscuous behaviours of male homosexuals (p289).

As Buller also notes, although gay men seem, like heterosexual men, to prefer youthful sexual partners, they also appear to prefer sexual partners who are, in other respects, highly masculine.[11]

Thus, Buller observes: 

“The males featured in gay men’s magazines embody very masculine, muscular physiques, not pseudo-feminine physiques” (Adapting Minds: p227).

Indeed, the models in such magazines seem in most respects similar in physical appearance to the male models, pop stars, actors and other ‘sex symbols’ and celebrities fantasized about by heterosexual women and girls.

How then are we to resolve this apparent paradox?

One possible explanation is that some aspects of the psychology of male homosexuals are feminized but not others – perhaps because different parts of the brain are formed at different stages of prenatal development, at which stages the levels of masculinizing androgens in the womb may vary. 

Thus, Glenn Wilson, writing in 1989 and citing the work of Ellis & Ames (1987), reports that:

“The masculinization/feminization effects occur in different parts of the brain and, more importantly, at different times during pre-natal development. Indications are that sex orientation in humans depends critically upon the hormone balance prevailing during the third and fourth months of pregnancy, while secondary sex characteristics and sex-typical behaviour patterns are influenced more by hormones circulating during the fifth and sixth months of pregnancy” (The Great Sex Divide: p79).

Indeed, there is even some evidence that homosexual males may be hyper-masculinized in some aspects of their physiology.

For example, it has been found that homosexual males report larger penis-sizes than heterosexual men (Bogaert & Hershberger 1999). 
 
This, researchers Glenn Wilson and Qazi Rahman propose, may be because: 

“If it is supposed that the barriers against androgens with respect to certain brain structures (notably those concerned with homosexuality) lead to increased secretion in an effort to break through, or some sort of accumulation elsewhere… then there may be excess testosterone left in other departments” (Born Gay: The Psychobiology of Sex Orientation: p80). 

Another possibility is that male homosexuals actually lie midway between heterosexual men and women in their degree of masculinization.  

On this view, homosexual men come across as relatively feminine only because we naturally tend to compare them to other men (i.e. heterosexual men). However, as compared to women, they may be relatively masculine, as reflected in the male-typical aspects of their sexuality focused upon by Symons.

Interestingly, this latter interpretation suggests the slightly disturbing possibility that, freed from the restraints imposed by women, heterosexual men would be even more indiscriminately promiscuous than their homosexual counterparts.

Evidence consistent with this interpretation is provided by one study from the 1980s which found that, when approached by a female stranger (also a student) on a university campus with a request to go to bed with them, fully 72% of male students agreed (Clark and Hatfield 1989). 

In contrast, in the same study, not a single one of the 96 females approached by male strangers with the same request on the same university campus agreed to go to bed with the male stranger.

What percentage of the female students subsequently sued the university for sexual harassment was, however, not reported.

Pornography as a “Natural Experiment”

For Symons, fantasy represents another window onto sexual and romantic desires. Like homosexuality, fantasy is, by its very nature, unconstrained by the conflicting desires of the opposite sex (or indeed by anything other than the imagination of the fantasist). 

Symons later collaborated in an investigation into sexual fantasy by means of a questionnaire (Ellis and Symons 1990). 

However, in the present work, he investigates fantasy indirectly by focusing on what he calls “the natural experiment of commercial periodical publishing” – i.e. pornographic magazines (p182).

In many respects, this approach is preferable to a survey because, even in an anonymous questionnaire, individuals may be less than honest when dealing with a sensitive topic such as their sexual fantasies. On the other hand, they are unlikely to regularly spend money on a magazine unless they are genuinely attracted by its contents.

Before the internet age, softcore pornographic magazines, largely featuring female nudes, commanded sizeable circulations, despite the not insubstantial stigma attached to their purchase. However, their readership (if indeed ‘readership’ is the right word, since there was typically little reading involved, save of the ‘one-handed’ variety) was almost exclusively male.

In contrast, there was little or no female audience for magazines containing pictures of naked males. Instead, magazines marketed towards women (e.g. fashion magazines) contained mostly pictures of other women.

Indeed, in the 1970s, attempts were made, in the misguided name of feminism and ‘women’s liberation’, to market magazines featuring male nudes to a female readership. One such title, Viva, abandoned publishing male nudes after just a few years due to lack of interest or demand, then went bust just a few years after that. The other, Playgirl, although it remained in publication for many years and never entirely abandoned male nudes, was notorious, as a consequence, for attracting a readership composed in large part of homosexual men.

Symons thus concludes forcefully and persuasively: 

“The notion must be abandoned that women are simply repressed men waiting to be liberated” (p183). 

Indeed, though it has been loudly and enthusiastically co-opted by feminists, this view of women, and of female sexuality – namely, women as “repressed men waiting to be liberated” – represents a quintessentially male perspective. 

Indeed, taken to extremes, it has even been used as a justification for rape.

Thus, the curious, though recurrent, sub-Freudian notion that female rape victims actually secretly enjoy being raped seems to rest ultimately on the assumption that female sexuality is fundamentally the same as that of men (i.e. indiscriminately enjoying of promiscuous sex) and that it is only women’s alleged sexual ‘repression’ that prevents them admitting as much.[12]

Romance Literature 

Unfortunately, however, there is a notable omission in Symons’s discussion of pornography as a window into male sexuality – namely, he omits to consider whether there exists any parallel artistic genre that offers equivalent insight into the female psyche.

Later writers on the topic have argued that romance novels (e.g. Mills and Boon, Jane Austen), whose audience is as overwhelmingly female as pornography’s is male, represent the female equivalent of pornography, and that analysis of the content of such works provides insights into female mate preferences parallel to those provided into male psychology by pornography (e.g. Kruger et al 2003; Salmon 2004; see also Warrior Lovers: Erotic Fiction, Evolution and Female Sexuality, co-authored by Symons himself).

Thus, popular science writer Matt Ridley reports:

“Two industries relentlessly exploit the sexual fantasizing of men and women: pornography and the publishing of romance novels: Pornography is aimed almost entirely at men. It varies little from a standard formula all over the world… The romance novel, by contrast, is aimed entirely at a female market. It, too, depicts a fictional world that has changed remarkably little except in adapting to female career ambitions and to a less inhibited attitude toward the description of sex” (The Red Queen: p270-271).

Symons touches upon this analogy only in passing, when he observes that:

“Heterosexual men are, of course, aware that the female sexuality portrayed in men’s magazines reflects male fantasy more than female reality, just as heterosexual women are aware that the happy endings of stories in romance magazines exist largely in the realm of fantasy” (p293).

Yet, while feminists perpetually complain about how pornography supposedly creates unrealistic expectations of women and girls and puts undue pressure on women and girls to live up to this male fantasy, few men complain about how the equally unrealistic portrayal of men in romance literature creates unrealistic expectations of boys and men and puts undue pressure on boys and men to live up to a female fantasy.

Female Orgasm as Non-Adaptive

An entire chapter of ‘The Evolution of Human Sexuality’, namely Chapter Three (entitled, “The Female Orgasm: Adaptation or Artefact”), is devoted to rejecting the claim that the female orgasm represents a biological adaptation.

This is perhaps excessive. However, it does at least conveniently contradict the claim of some critics of evolutionary psychology, and of sociobiology, such as Stephen Jay Gould, that the field is ‘ultra-Darwinian’ or ‘hyper-adaptationist’ and committed to the misguided notion that all traits are necessarily adaptive.[13]

In contrast, Symons champions the thesis that the female capacity for orgasm is simply a non-adaptive by-product of the male capacity for orgasm, which is, of course, adaptive.

On this view, the female orgasm (and clitoris) is, in effect, the female equivalent of male nipples (only more fun).

Certainly, Symons convincingly critiques the romantic notion, popularized by Desmond Morris among others, that the female orgasm functions as a mechanism designed to enhance ‘pair-bonding’ between couples.

However, subsequent generations of evolutionary psychologists have developed less naïve models of the adaptive function of female orgasm.

For example, Geoffrey Miller argues that the female orgasm, and clitoris, functions as an adaptation for mate choice (The Mating Mind: p239-241).

Of course, at first glance, experiencing orgasm during coitus may appear to be a bit late for mate choice, since, by the time coitus has occurred, the choice in question has already been made. However, given that, among humans, most sexual intercourse is non-reproductive (i.e. does not result in conception), the theory is not altogether implausible.

On this view, the very factors which Symons views as suggesting female orgasm is non-adaptive – such as the relative difficulty of stimulating female orgasm during ordinary vaginal sex – are positive evidence for its adaptive function in carefully discriminating between suitors/lovers to determine their desirability as fathers for a woman’s offspring.

Nevertheless, at least according to the stringent criteria set out by George C Williams in his classic Adaptation and Natural Selection, as well as the more general principle of parsimony (also known as Occam’s Razor), the case for female orgasm as an adaptation remains unproven (see also Sherman 1989; The Case of the Female Orgasm: Bias in the Science of Evolution).

Out-of-Date?

Much of Symons’ work is dedicated to challenging the naïve group-selectionism of Sixties ethologists, especially Desmond Morris. Although scientifically now largely obsolete, Morris’s work still retains a certain popular resonance and therefore this aspect of Symons’s work is not entirely devoid of contemporary relevance.

In place of Morris’s rather idyllic notion that humans are a naturally monogamous ‘pair-bonding’ species, Symons advocates instead an approach rooted in the individual-level (or even gene-level) selection championed by Richard Dawkins in The Selfish Gene (reviewed here).

This leads to some decidedly cynical conclusions regarding the true nature of sexual and romantic relations among humans.

For example, Symons argues that it is adaptive for men to be less sexually attracted to their wives than they are to other women – because they are themselves liable to bear the cost of raising offspring born to their wives but not those born to other women with whom they mate (e.g. those attached to other males).

Another cynical conclusion is that the primary emotion underlying the institution of marriage, both cross-culturally and in our own society, is neither love nor even lust, but rather male sexual jealousy and proprietariness (p123). 

Marriage, then, is an institution borne not of love, but of male sexual jealousy and the behaviour known to biologists as mate-guarding.

Meanwhile, in his excellent chapter on ‘Copulation as a Female Service’ (Chapter Eight), Symons suggests that many aspects of heterosexual romantic relationships may be analogous to prostitution.

As well as its excessive focus on debunking Sixties ethologists like Morris, ‘The Evolution of Human Sexuality’ is also out-of-date in a more serious respect. Namely, it fails to incorporate the vast amount of empirical research on human sexuality from a sociobiological perspective which has been conducted since the work’s first publication.

For a book first published thirty years ago, this is inevitable – not least because much of this empirical research was inspired by Symons’ own ideas and specifically designed to test theories formulated in this very work.

In addition, potentially important new factors in human reproductive behaviour that even Symons did not foresee have been identified – for example, the role of fluctuating asymmetry as a criterion for, or at least a correlate of, physical attractiveness.

For an updated discussion of the evolutionary psychology of human sexual behaviour, complete with the latest empirical data and research, readers should consult the latest edition of David Buss’s The Evolution Of Desire: Strategies of Human Mating.

In contrast, in support of his theories Symons relies largely on classical literary insight, anecdote and, most importantly, a review of the ethnographic record.

However, this latter focus ensures that, in some respects, the work remains of more than merely historical interest.

After all, one of the more legitimate criticisms levelled against recent research in evolutionary psychology is that it is insufficiently cross-cultural and, with several notable exceptions (e.g. Buss 1989), relies excessively on research conducted among convenience samples of students at western universities.

Given costs and practicalities, this is inevitable. However, for a field that aspires to understand a human nature presumed to be universal, such a method of sampling is highly problematic, especially given what has recently been revealed about the ‘WEIRD-ness’ of western undergraduate samples.

‘The Evolution of Human Sexuality’ therefore retains its importance for two reasons. 

First, it is the founding work of modern evolutionary psychological research into human sexual behaviour, and hence of importance as a landmark and classic text in the field, as well as in the history of science more generally. 

Second, it also remains of value to this day for the cross-cultural and ethnographic evidence it marshals in support of its conclusions. 

Endnotes

[1] Actually, the first person to discover this, albeit inadvertently, was the great Victorian polymath, pioneering statistician and infamous eugenicist Francis Galton, who, attempting to discover abnormal facial features possessed by the criminal class, succeeded in morphing the faces of multiple convicted criminals. The result was, presumably to his surprise, an extremely attractive facial composite, since all the various minor deformities of the many convicted criminals whose faces he morphed actually balanced one another out to produce a face with few if any abnormalities or disproportionate features.

[2] More recent research in this area has focused on the related concept of fluctuating asymmetry.

[3] However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology. It has also been questioned whether ovulation in humans is indeed concealed, or is actually detectable by subtle cues (e.g. Miller et al 2007), for example, changes in face shape (Oberzaucher et al 2012), breast symmetry (Scutt & Manning 1996) and body scent (Havlicek et al 2006).

[4] Another factor leading recent researchers to overestimate the importance of female choice in human evolution is their feminist orientation, since female choice gives women an important role in human evolution, even, paradoxically, in the evolution of male traits.

[5] Actually, in most cultures, only a girl’s first marriage is arranged on her behalf by her parents. Second- and third-marriages are usually negotiated by the woman herself. However, since female fertility peaks early, it is a girl’s first marriage that is usually of the most reproductive, and hence Darwinian, significance.

[6] Indeed, the anatomical trait in humans that perhaps shows the most evidence of being a product of intersexual selection is a female one, namely the female breasts, since the latter are, unlike the mammary glands of most other mammals, permanently present from puberty on, not only during lactation, and composed primarily of fatty tissue, not milk (Møller 1995; Manning et al 1997; Havlíček et al 2016). 

[7] Wilson terms his theory “the kin selection theory hypothesis of the origin of homosexuality” (p145). However, a better description might be the ‘helper at the nest theory of homosexuality’, the basic idea being that, like sterile castes in some insects, and like older siblings in some bird species where new nest sites are unavailable, homosexuals, rather than reproducing themselves, direct their energies towards assisting their collateral kin in successfully raising, and provisioning, their own offspring (On Human Nature: p143-7). The main problem with this theory is that there is no evidence that homosexuals do indeed devote any greater energies towards assisting their kin in raising offspring. On the contrary, homosexuals instead seem to devote much of their time and resources towards their own sex life, much as do heterosexuals (Bobrow & Bailey 2001).

[8] As we will see, contrary to the stereotype, promoted by the likes of Gould, of evolutionary psychologists as viewing all traits as necessarily adaptive, Symons argued that the female orgasm and menopause are not adaptations, but rather by-products of other adaptations.

[9] This is not necessarily to say that rampant, indiscriminate promiscuity is a male utopia, or the ideal of any man, be he homosexual or heterosexual. On the contrary, the ideal mating system for any individual male is harem polygyny in which the chastity of his own partners is rigorously policed (see Laura Betzig’s Despotism and Differential Reproduction, which I have reviewed here). However, given an equal sex ratio, this would condemn other males to celibacy and perpetual ‘inceldom’. Similarly, Symons reports that “Homosexual men, like most people, usually want to have intimate relationships”. However, he observes:

“Such relationships are difficult to maintain, largely owing to the male desire for sexual variety; the unprecedented opportunity to satisfy this desire in a world of men, and the male tendency towards sexual jealousy” (p297).  

It does indeed seem to be true that homosexual relationships, especially those of gay males, are, on average, of shorter duration than are heterosexual relationships. However, Symons’ claim regarding “the male tendency towards sexual jealousy” is questionable.
Actually, subsequent research in evolutionary psychology has suggested that men are no more prone to jealousy than women, but rather that it is the sorts of behaviours which most intensely provoke such jealousy that differentiate the sexes (Buss 1992). Moreover, many gay men practice open relationships, which seems to suggest a lack of jealousy – or perhaps this simply reflects a recognition of the difficulty of maintaining relationships given, as Symons puts it, “the male desire for sexual variety [and] the unprecedented opportunity to satisfy this desire in a world of men”. 

[10] Indeed, far from men being led to objectify women due to the portrayal of women in a sexualized manner in the media, Symons suggests:

There may be no positive feedback at all; on the contrary, constant exposure to pictures of nude and nearly nude female bodies may to some extent habituate [i.e. desensitize] men to these stimuli” (p304).

[11] Admittedly, some aspects of the body-type typically preferred by gay males (especially the so-called ‘twink’ ideal) do reflect apparently female traits, especially a relative lack of body-hair. However, lack of body-hair is also obviously indicative of youth. Moreover, a relative lack of body-hair also seems to be a trait favoured in men by heterosexual women. For a discussion of the relative preference on the part of (heterosexual) females for masculine versus feminine physical appearance in male sex partners, see here.

[12] Thus, some men might indeed welcome being ‘raped’, albeit only under highly unusual circumstances – namely by an attractive opposite-sex partner (or, in the case of homosexual men, an attractive same-sex partner) to whom they are sexually attracted. Thus, Kingsley Browne, in his excellent Biology at Work (which I have reviewed here) quotes the perhaps remarkable finding that:

A substantial number of men ‘viewed an advance by a good-looking woman who threatened harm or held a knife as a positive sexual opportunity’” (Biology at Work: p196; quoting Struckman-Johnson & Struckman-Johnson 1994).

Of course, large numbers of women also report rape fantasies (Bivona & Critelli 2009). Yet this does not mean they would actually welcome real sexual assault, which would almost certainly take a very different form from the fantasy. In practice, therefore, members of neither sex are ever likely to welcome sexual assault in the form in which it is actually likely to come.

[13] Incidentally, Symons also rejects the theory that the female menopause is adaptive, a theory which has subsequently become known as the grandmother hypothesis (p13). Also, although it does not directly address the issue, Symons’ discussion of human rape (p276-85) has been interpreted as implicitly favouring the theory that rape is a by-product of the greater male desire for commitment-free promiscuous sex, rather than the product of a specific rape adaptation in males (see Palmer 1991; and A Natural History of Rape: reviewed here). 

References 

Bellis & Baker (1990). Do females promote sperm competition? Data for humans. Animal Behaviour 40: 997-999.
Bivona & Critelli (2009). The nature of women’s rape fantasies: An analysis of prevalence, frequency, and contents. Journal of Sex Research 46(1): 33-45.
Bobrow & Bailey (2001). Is male homosexuality maintained via kin selection? Evolution and Human Behavior 22: 361-368.
Bogaert & Hershberger (1999). The relation between sexual orientation and penile size. Archives of Sexual Behavior 28(3): 213-221.
Buss (1989). Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sciences 12: 1-49.
Clark & Hatfield (1989). Gender differences in receptivity to sexual offers. Journal of Psychology and Human Sexuality 2(1): 39-55.
Ellis & Ames (1987). Neurohormonal functioning and sexual orientation: A theory of homosexuality-heterosexuality. Psychological Bulletin 101(2): 233-258.
Ellis & Ratnasingam (2012). Gender, sexual orientation, and occupational interests: Evidence of androgen influences. Mankind Quarterly 53(1): 36-80.
Ellis & Symons (1990). Sex differences in sexual fantasy: An evolutionary psychological approach. Journal of Sex Research 27(4): 527-555.
Evans (1972). Physical and biochemical characteristics of homosexual men. Journal of Consulting and Clinical Psychology 39(1): 140-147.
Gildersleeve, Haselton & Fales (2014). Do women’s mate preferences change across the ovulatory cycle? A meta-analytic review. Psychological Bulletin 140(5): 1205-1259.
Havlíček, Dvořáková, Bartoš & Flegr (2006). Non-advertized does not mean concealed: Body odour changes across the human menstrual cycle. Ethology 112(1): 81-90.
Havlíček et al (2016). Men’s preferences for women’s breast size and shape in four cultures. Evolution and Human Behavior 38(2): 217-226.
Kenrick & Keefe (1992). Age preferences in mates reflect sex differences in human reproductive strategies. Behavioral and Brain Sciences 15: 75-133.
Kruger et al (2003). Proper and dark heroes as dads and cads. Human Nature 14(3): 305-317.
Manning et al (1997). Breast asymmetry and phenotypic quality in women. Ethology and Sociobiology 18(4): 223-236.
Miller (1998). How mate choice shaped human nature: A review of sexual selection and human evolution. In C. Crawford & D. Krebs (Eds.), Handbook of Evolutionary Psychology: Ideas, Issues, and Applications (pp. 87-129). Mahwah, NJ: Lawrence Erlbaum.
Miller, Tybur & Jordan (2007). Ovulatory cycle effects on tip earnings by lap dancers: Economic evidence for human estrous? Evolution and Human Behavior 28(6): 375-381.
Møller et al (1995). Breast asymmetry, sexual selection, and human reproductive success. Ethology and Sociobiology 16(3): 207-219.
Palmer (1991). Human rape: Adaptation or by-product? Journal of Sex Research 28(3): 365-386.
Penton-Voak et al (1999). Menstrual cycle alters face preference. Nature 399: 741-742.
Puts (2010). Beauty and the beast: Mechanisms of sexual selection in humans. Evolution and Human Behavior 31: 157-175.
Salmon (2004). The pornography debate: What sex differences in erotica can tell us about human sexuality. In Evolutionary Psychology, Public Policy and Personal Decisions. London: Lawrence Erlbaum Associates.
Scutt & Manning (1996). Symmetry and ovulation in women. Human Reproduction 11(11): 2477-2480.
Sherman (1989). The clitoris debate and levels of analysis. Animal Behaviour 37: 697-698.
Struckman-Johnson & Struckman-Johnson (1994). Men’s reactions to hypothetical female sexual advances: A beauty bias in response to sexual coercion. Sex Roles 31(7-8): 387-405.
Wood et al (2014). Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review 6(3): 229-249.

Judith Harris’s ‘The Nurture Assumption’: By Parent or Peers

Judith Harris, The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, 1998.

Almost all psychological traits on which individual humans differ, from personality and intelligence to mental illness, are now known to be substantially heritable. In other words, individual differences in these traits are, at least in part, a consequence of genetic differences between individuals.

This finding is so robust that Eric Turkheimer has termed it the First Law of Behaviour Genetics and, although once anathema to most psychologists save a marginal fringe of behavioural geneticists, it has now, under the sheer weight of evidence produced by the latter, belatedly become the new orthodoxy.

On reflection, however, this finding is less of a revelation than it might first appear.

After all, it was only in the mid-twentieth century that the curious notion that individual differences were entirely the product of environmental differences first arose, and, even then, this delusion was largely restricted to psychologists, sociologists, feminists and other such ‘professional damned fools’, along with those among the semi-educated public who seek to cultivate an air of intellectualism by aping the former’s affectations.

Before then, poets, peasants and laypeople alike had long recognized that ability, insanity, temperament and personality all tended to run in families, just as physical traits like stature, complexion, hair and eye colour also do.[1]

However, while the discovery of a heritable component to character and ability merely confirms the conventional wisdom of an earlier age, another behavioural genetic finding, far more surprising and counterintuitive, has passed relatively unreported. 

This is the discovery that the so-called shared family environment (i.e. the environment shared by siblings, or non-siblings, raised in the same family home) actually has next to no effect on adult personality and behaviour. 

This we know from such classic study designs in behavioural genetics as twin studies, adoption studies and family studies.

In short, individuals of a given degree of relatedness, whether identical twins, fraternal twins, siblings, half-siblings or unrelated adoptees, are, by the time they reach adulthood, no more similar to one another in personality or IQ when they are raised in the same household than when they are raised in entirely different households. 
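The logic of these comparisons can be sketched with the standard (and admittedly simplified) Falconer decomposition from quantitative genetics. The correlations below are illustrative round numbers of the kind typically reported for adult IQ, not figures from any specific study:

```latex
% Falconer's approximation, assuming purely additive genetic effects.
% r_{MZ} and r_{DZ} are trait correlations for identical and fraternal twins.
h^2 \approx 2\,(r_{MZ} - r_{DZ}) \qquad \text{(heritability)}
c^2 \approx 2\,r_{DZ} - r_{MZ}   \qquad \text{(shared family environment)}
% Illustrative round numbers for adult IQ, e.g. r_{MZ} = 0.75,\ r_{DZ} = 0.40:
% h^2 \approx 2(0.75 - 0.40) = 0.70, \qquad c^2 \approx 0.80 - 0.75 = 0.05.
```

On such figures, the shared family environment term comes out near zero, which is the surprising result described above.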

The Myth of Parental Influence 

Yet parental influence has long loomed large in virtually every psychological theory of child development, from the Freudian Oedipus complex and Bowlby’s attachment theory to the whole literary genre of books aimed at instructing anxious parents on how best to raise their children so as to ensure that the latter develop into healthy, functional, successful adults.

Indeed, not only is the conventional wisdom among psychologists overturned, but so is the conventional wisdom among sociologists – for one aspect of the shared family environment is, of course, household income and social class.

Thus, if the family that a person is brought up in has next to no impact on their psychological outcomes as an adult, then this means that the socioeconomic status of the family home in which they are raised also has no effect. 

Poverty, or a deprived upbringing, then, has no effect on IQ, personality or the prevalence of mental illness, at least by the time a person has reached adulthood.[2]

Neither is it only leftist sociologists who have proved mistaken. 

Thus, just as leftists use economic deprivation as an indiscriminate, catch-all excuse for all manner of social pathology (e.g. crime, unemployment, educational underperformance) so conservatives are apt to place the blame on divorce, family breakdown, having children out of wedlock and the consequential increase in the prevalence of single-parent households

However, all these factors are, once again, part of the shared family environment – and according to the findings of behavioural genetics, they have next to no influence on adult personality or intelligence. 

Of course, chaotic or abusive family environments do indeed tend to produce offspring with negative life outcomes. 

However, none of this proves that it was the chaotic or abusive family environment that caused the negative outcomes. 

Rather, another explanation is at hand – perhaps the offspring simply biologically inherit the personality traits of their parents, the very personality traits that caused their family environment to be so chaotic and abusive in the first place.[3] 

For example, parents who divorce or bear offspring out-of-wedlock likely differ in personality from those who first get married then stick together, perhaps being more impulsive or less self-disciplined and conscientious (e.g. less able to refrain from having children from a relationship that was destined to be fleeting, or less able to persevere and make the relationship last).

Their offspring may, then, simply biologically inherit these undesirable personality attributes, which then themselves lead to the negative social outcomes associated with being raised in single-parent households or broken homes. The association between family breakdown and negative outcomes for offspring might, then, reflect simply the biological inheritance of personality. 

Similarly, as leftists are fond of reminding us, children from economically-deprived backgrounds do indeed have lower recorded IQs and educational attainment than those from more privileged family backgrounds, as well as other negative outcomes as adults (e.g. lower earnings, higher rates of unemployment). 

However, this does not prove that coming from a deprived family background necessarily itself depresses your IQ, educational attainment or future salary. 

Rather, an equally plausible possibility is that offspring simply biologically inherit the low intelligence of their parents – the very low intelligence which was likely a factor causing the low socioeconomic status of their parents, since intelligence is known to correlate strongly with educational and occupational advancement.[4]

In short, the problem with this whole body of research purporting to demonstrate the influence of parents and family background on psychological and behavioural outcomes for offspring is that it fails to control for the heritability of personality and intelligence, an obvious confounding factor.

The Non-Shared Environment

However, not everything is explained by heredity. As a crude but broadly accurate generalization, only about half the variation for most psychological traits is attributable to genes. This leaves about half of the variation in intelligence, personality and mental illness to be explained by environmental factors.
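In the notation of the standard behavioural-genetic ‘ACE’ model, this crude generalization amounts to something like the following (a sketch, not a claim about any particular trait):

```latex
V_P = V_A + V_C + V_E
% V_P : total phenotypic variance in the population
% V_A : additive genetic variance (roughly half of V_P, per the generalization above)
% V_C : shared (family) environment (near zero for adult personality and IQ)
% V_E : non-shared environment, plus measurement error (the remaining half)
```

The puzzle Harris addresses is thus not whether environment matters, but where the large $V_E$ term comes from, given that $V_C$ is so small.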

What are these environmental factors, if they are not to be sought in the shared family environment?

The obvious answer is, of course, the non-shared family environment – i.e. the ways in which even children brought up in the same family-home nevertheless experience different micro-environments, both within the home and, perhaps more importantly, outside it. 

Thus, even the fairest and most even-handed parents inevitably treat their different offspring differently in some ways.  

Indeed, among the principal reasons why parents treat their different offspring differently is precisely because the different offspring themselves differ in their own behaviour quite independently of any parental treatment.

This is well illustrated by the question of the relationship between corporal punishment and behaviour in children.

Corporal Punishment

Rather than differences in the behaviour of different children resulting from differences in how their parents treat them, it may be that differences in how parents treat their children may reflect responses to differences in the behaviour of the children themselves. 

In other words, the psychologists have the direction of causation precisely backwards. 

Take, for example, one particularly controversial issue, namely the physical chastisement of children by their parents as a punishment for bad behaviour (e.g. spanking). 

Some psychologists have argued that physical chastisement actually causes misbehaviour.

As evidence, they cite the fact that children who are spanked more often by their parents or caregivers on average actually behave worse than those whose caregivers only rarely or never spank the children entrusted to their care.  

This, they claim, is because, in employing spanking as a form of discipline, caregivers are inadvertently imparting the message that violence is a good way of solving your problems. 

Actually, however, I suspect children are more than capable of working out for themselves that violence is often an effective means of getting your way, at least if you have superior physical strength to your adversary. Unfortunately, this is something that, unlike reading, arithmetic and long division, does not require explicit instruction by teachers or parents. 

Instead, a more obvious explanation for the correlation between spanking and misbehaviour in children is not that spanking causes misbehaviour, but rather that misbehaviour causes spanking. 

Indeed, once you think about it, this is in fact rather obvious: If a child never seriously misbehaves, then a parent likely never has any reason to spank that child, even if the parent is, in principle, a strict disciplinarian; whereas, on the other hand, a highly disobedient child is likely to try the patience of even the most patient caregiver, whatever his or her moral opposition to physical chastisement in principle. 

In other words, causation runs in exactly the opposite direction to that assumed by the naïve psychologists.[5] 

Another factor may also be at play – namely, offspring biologically inherit from their parents the personality traits that cause both the misbehaviour and the punishment. 

In other words, parents with aggressive personalities may be more likely to lose their temper and physically chastise their children, while children who inherit these aggressive personalities are themselves more likely to misbehave, not least by behaving in an aggressive or violent manner. 

However, even if parents treat their different offspring differently owing to the different behaviour of the offspring themselves, this is not the sort of environmental factor capable of explaining the residual non-shared environmental effects on offspring outcomes. 

After all, this merely begs the question of what caused these differences in offspring behaviour in the first place.

If the differences in offspring behaviour exist prior to differences in parental responses to this behaviour, then these differences cannot be explained by the differences in parental responses.  

Peer Groups 

This brings us back to the question of the environmental causes of offspring outcomes – namely, if about half the differences among children’s IQs and personalities are attributable to environmental factors, but these environmental factors are not to be found in the shared family environment (i.e. the environment shared by children raised in the same household), then where are these environmental factors to be sought? 

The search for environmental factors affecting personality and intelligence has, thus far, been largely unsuccessful. Indeed, some behavioural geneticists have all but conceded defeat in identifying correlates for the environmental portion of the variance.

Thus, leading contemporary behavioural geneticist Robert Plomin in his recent book, Blueprint: How DNA Makes Us Who We Are, concludes that those environmental factors that affect cognitive ability, personality, and the development of mental illness are, as he puts it, ‘unsystematic’ in nature. 

In other words, he seems to be saying that they are mere random noise. This is tantamount to accepting that the null hypothesis is true. 

Judith Harris, however, has a quite different take. According to Harris, environmental causes must be sought, not within the family home, but rather outside it – in a person’s interactions with their peer-group and the wider community.[6]

Environment ≠ Nurture 

Thus, Harris argues that the so-called nature-nurture debate is misnamed, since the word ‘nurture’ usually refers to deliberate care and moulding of a child (or of a plant or animal). But many environmental effects are not deliberate. 

Thus, Harris repeatedly references behaviourist John B. Watson’s infamous boast: 

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.

Yet what strikes me as particularly preposterous about Watson’s boast is not its radical environmental determinism, nor even its rather convenient unfalsifiability.[7] 

Rather, what strikes me as most preposterous about Watson’s claim is its frankly breath-taking arrogance.

Thus, Watson not only insisted that it was environment alone that entirely determined adult personality. In this same quotation, he also proclaimed that he already fully understood the nature of these environmental effects to such an extent that, given omnipotent powers to match his evidently already omniscient understanding of human development, he could produce any outcome he wished. 

Yet, in reality, environmental effects are anything but clear-cut. Pushing a child in a certain direction, or into a certain career, may sometimes have the desired effect, but other times may seemingly have the exact opposite effect to that desired, provoking the child to rebel against parental dictates. 

Thus, even to the extent that environment does determine outcomes, the precise nature of the environmental factors implicated, and their interaction with one another, and with the child’s innate genetic endowment, is surely far more complex than the simple mechanisms proposed by behaviourists like Watson (e.g. reinforcement and punishment). 

Language Acquisition 

The most persuasive evidence for Harris’s theory of the importance of peer groups comes from an interesting and widely documented peculiarity of language acquisition

The children of immigrants, whose parents speak a different language inside the family home, and may even themselves be monolingual, nevertheless typically grow up to speak the language of their host culture rather better than they do the language to which they were first exposed in the family home. 

Indeed, while their parents may never achieve fluency in the language of their host culture, having missed out on the Chomskian critical period for language acquisition, their children often actually lose the ability to speak their parents’ language, much to the consternation of parents and grandparents.

Yet, from a sociobiological or evolutionary psychological perspective, such an outcome is obviously adaptive.

After all, if a child is to succeed in wider society, they must master its language, whereas, if their parents’ first language is not spoken anywhere in their host society except in their family, then it is of limited utility, and, once their parents themselves become proficient in the language of the host culture, it becomes entirely redundant.

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here):

Children quickly discover that their home language is a restricted medium that is not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (The Ethnic Phenomenon: p258).

Code-Switching 

Harris suggests that the same applies to personality. Just as the child of immigrants switches between one language and another at home and school, so they also adopt different personalities. 

Thus, many parents are surprised to be told by their children’s teachers at parents’ evenings that their offspring is quiet and well-behaved at school, since, they report, he or she isn’t at all like that at home. 

Yet, at home, a child has only, at most, a sibling or two with whom to compete for his parents’ attention. In contrast, at school, he or she has a whole class with whom to compete for their teacher’s attention.

It is therefore unsurprising that most children are less outgoing at school than they are at home with their parents. 

For example, an older sibling might be able to push his little brother around at home. But, if he is small for his age, he is unlikely to be able to get away with the same behaviour among his peers at school.

Children therefore adopt two quite different personalities – one for interactions with family and siblings, and another for among their peers.

This then, for Harris, explains why, perhaps surprisingly, birth-order has generally been found to have little if any effect on personality, at least as personality manifests itself outside the family home. 

An Evolutionary Theory of Socialization? 

Interestingly, even evolutionary psychologists have not been immune from the delusion of parental influence. Thus, in one influential paper, anthropologists Patricia Draper and Henry Harpending argued that offspring calibrate their reproductive strategy by reference to the presence or absence of a father in their household (Draper & Harpending 1982). 

On this view, being raised in a father-absent household is indicative of a social environment where low male parental investment is the norm, and hence offspring adjust their own reproductive strategy accordingly, adopting a promiscuous, low-investment mating strategy characterized by precocious sexual development and an inability to maintain lasting long-term relationships (Draper & Harpending 1982; Belsky et al 1991). 

There is indeed, as these authors amply demonstrate, a consistent correlation between father-absence during development and both earlier sexual development and more frequent partner-switching in later life. 

Yet there is also another, arguably more obvious, explanation readily at hand to explain this association. Perhaps offspring simply inherit biologically the personality traits, including sociosexual orientation, of their parents. 

On this view, offspring raised in single-parent households are more likely to adopt a promiscuous, low-investment mating strategy simply because they biologically inherit the promiscuous sociosexual orientation of their parents, the very promiscuous sociosexual orientation that caused the latter to have children out-of-wedlock or from relationships that were destined to break down and hence caused the father-absent childhood of their offspring. 

Moreover, even on purely a priori theoretical grounds, Draper, Harpending and Belsky’s reasoning is dubious. 

After all, whether you personally were raised in a one- or two-parent family is obviously a very unreliable indicator of the sorts of relationships prevalent in the wider community into which you are born, since it represents a sample size of just one. 

Instead, therefore, it would be far more reliable to calibrate your reproductive strategy in response to the prevalence of one-parent households in the wider community at large, rather than the particular household type into which you happen to have been born.  

This, of course, directly supports Harris’s own theory of ‘peer group socialization’. 

In short, to the extent that children do adapt to the environment and circumstances of their upbringing (and they surely do), they must integrate into, adopt the norms of, and a reproductive strategy to maximize their fitness within, the wider community into which they are born, rather than the possibly quite idiosyncratic circumstances and attitudes of their own family. 

Absent Fathers, from Upper-Class to Under-Class 

Besides language-acquisition among the children of immigrants, another example cited by Harris in support of her theory of ‘peer group socialization’ is the culture, behaviours and upbringing of British upper-class males.

Here, she reports, boys were, and, to some extent, still are, reared primarily not by their parents, but rather by nannies, governesses and, more recently, in exclusive fee-paying all-male boarding schools.

Yet, despite having next to no contact with their fathers throughout most of their childhood, these boys nevertheless managed somehow to acquire manners, attitudes and accents similar, if not identical, to those of their upper-class fathers, and not at all those of the middle-class nannies, governesses and masters with whom they spent most of their childhood.

Yet this phenomenon is by no means restricted to the British upper-classes.

On the contrary, rather than citing the example of the British upper-classes in centuries gone by, Harris might just as well have cited that of the contemporary underclass in Britain and America, since what was once true of the British upper-classes is now equally true of the underclass.

Just as the British upper-classes were once raised by governesses, nannies and in private schools with next to no contact with their fathers, so contemporary underclass males are similarly raised in single-parent households, often to unwed mothers, and typically have little if any contact with their biological fathers.

Here, as Warren Farrell observes in his seminal The Myth of Male Power (which I have reviewed here, here and here), there is now “a new nuclear family: woman, government and child”, what Farrell terms “Government as a Substitute Husband”.

Yet, once again, these underclass males, raised by single parents with the financial assistance of the taxpayer, typically turn out much like their absent fathers with whom they have had little if any contact, often going on to promiscuously father a succession of offspring themselves, with whom they likewise have next to no contact. 

Abuse 

But what of actual abuse? Surely this has a long-term devastating psychological impact on children. This, at any rate, is the conventional wisdom, and questioning this wisdom, at least with respect to sexual abuse, is tantamount to contemporary heresy, with attendant persecution

Thus, for example, it is claimed that criminals who are abusive towards their children were themselves almost invariably abused, mistreated or neglected as children, which is what has led to their own abusive behaviour.

A particularly eloquent expression of this theory is found in the novel Clockers, by Richard Price, where one of the lead characters, a police officer, explains how, during his first few years on the job, a senior colleague had restrained him from attacking an abusive mother who had left her infant son handcuffed to a radiator, telling him:

Rocco, that lady you were gonna brain? Twenty years ago when she was a little girl. I arrested her father for beating her baby brother to death. The father was a piece of shit. Now that she’s all grown up? She’s a real piece of shit. That kid you saved today. If he lives that long, if he grows up? He’s gonna be a real piece of shit. It’s the cycle of shit and you can’t do nothing about it” (Clockers: p96).

Take, for example, what is perhaps the form of child abuse that provokes the most outrage and disgust – namely, sexual abuse. Here, it is frequently asserted that paedophiles were almost invariably themselves abused as children, which creates a so-called cycle of abuse.

However, there are at least three problems with this claim. 

First, it cannot explain how the first person in this cycle came to be abusive. 

Second, we might doubt whether it is really true that paedophiles are disproportionately likely to have themselves been abused as children. After all, abuse is something that almost invariably happens surreptitiously ‘behind closed doors’ and is therefore difficult to verify or disprove. 

Therefore, even if most paedophiles claim to have been victims of abuse, it is possible that they are simply lying in order to elicit sympathy or excuse or shift culpability for their own offending. 

Finally, and most importantly for present purposes, even if paedophiles can be shown to be disproportionately likely to have themselves been victimized as children, this by no means proves that their past victimization caused their current sexual orientation. 

Rather, since most abuse is perpetrated by parents or other close family members, an alternative possibility is that victims simply biologically inherit the sexual orientation of their abuser.

After all, if homosexuality is partially heritable, as is now widely accepted, then why not paedophilia as well? 

In short, the ‘cycle of shit’ referred to by Price’s fictional police officer may well be real, but mediated by genetics rather than childhood experience.

However, this conclusion is not entirely clear. On the contrary, Harris is at pains to emphasize that the finding that the shared family environment accounts for hardly any of the variance in outcomes among adults does not preclude the possibility that severe abuse may indeed have an adverse effect on adult outcomes. 

After all, adoption studies can only tell us what percentage of the variance is attributable to heredity, or to shared or non-shared environments, within a specific population as a whole.

Perhaps the shared family environment accounts for so little of the variance precisely because the sort of severe abuse that does indeed have a devastating long-term effect on personality and mental health is, thankfully, so very rare in modern societies. 

Indeed, it may be especially rare within the families sampled in adoption studies precisely because adoptive families are carefully screened for suitability before being allowed to adopt. 

Moreover, Harris emphasizes an important caveat: Even if abuse does not have long-term adverse psychological effects, this does not mean that abuse causes no harm, and nor does it in any way excuse such abuse. 

On the contrary, the primary reason we shouldn’t mistreat children (and should severely punish those who do) is not on account of some putative long-term psychological effect on the adults whom the children subsequently become, but rather because of the very real pain and suffering inflicted on a child at the time the abuse takes place. 

Race Differences in IQ 

Finally, Harris even touches upon that most vexed area of the (so-called) nature-nurture debate – race differences in intelligence.

Here, the politically-correct claim that differences in intelligence between human races, as recorded in IQ tests, are of purely environmental origin runs into a problem: the sorts of environmental factors usually posited by environmental determinists as accounting for the black-white test-score gap in America (e.g. differences in rates of poverty and socioeconomic status) have been shown to be inadequate, because, even after controlling for these factors, a gap in test scores remains unaccounted for.[8]

Thus, as Arthur R. Jensen laments: 

This gives rise to the hypothesizing of still other, more subtle environmental factors that either have not been or cannot be measured—a history of slavery, social oppression, and racial discrimination, white racism, the ‘black experience,’ and minority status consciousness [etc]” (Straight Talk About Mental Tests: p223). 

The problem with these explanations, however, is that none of these factors has yet been demonstrated to have any effect on IQ scores. 

Moreover, some of the factors proposed as explanations are formulated in such a vague form (e.g. “white racism, the ‘black experience’”) that it is difficult to conceive of how they could ever be subjected to controlled testing in the first place.[9]

Jensen has termed this mysterious factor the X-factor.

In coining this term, Jensen was emphasizing its vague, mysterious and unfalsifiable nature. Jensen did not actually believe that this posited X-factor, whatever it was, really did account for the test-score gap. Rather, he thought heredity explained most, if not all, of the remaining unexplained test-score gap. 

However, Harris takes Jensen at his word and takes the search for the X-factor very seriously. Indeed, she apparently believes she has discovered and identified it. Thus, she announces: 

I believe I know what this X factor is… I can describe it quite clearly. Black kids and white kids identify with different groups that have different norms. The differences are exaggerated by group contrast effects and have consequences that compound themselves over the years. That’s the X factor” (p248-9). 

Unfortunately, Harris does not really develop this fascinating claim. Indeed, she cites no direct evidence in support of it, and evidently regards the alternative possibility – namely, that race differences in intelligence are at least partly genetic in origin – as so unpalatable that it can safely be ruled out a priori.

In fact, however, although not discussed by Harris, there is at least some evidence in support of her theory. Indeed, her theory potentially reconciles the apparently conflicting findings of two of the most widely-cited studies in this vexed area of research and debate.

First, in the more recent of these two studies, the Minnesota Transracial Adoption Study, the same race differences in IQ were observed among black, white and mixed-race children adopted into upper-middle-class white families as are found among black, white and mixed-race populations in the community at large (Scarr & Weinberg 1976). 

Moreover, although, when tested during childhood, the children’s adoptive households did seem to have had a positive effect on their IQ scores, a follow-up study found that, by the time they reached the cusp of adulthood, the black teenagers who had been adopted into upper-middle-class white homes actually scored no higher in IQ than did blacks in the wider population who had not been raised in such homes (Weinberg, Scarr & Waldman 1992). 

Although Scarr, Weinberg and Waldman took pains to present their findings as compatible with a purely environmentalist theory of race differences, this study has, not unreasonably, been widely cited by hereditarians as evidence for the existence of innate racial differences in intelligence (e.g. Levin 1994; Lynn 1994; Whitney 1996).

However, in the light of the findings of the behavioural genetics studies discussed by Harris in ‘The Nurture Assumption’, the fact that white upper-middle-class adoptive homes had no effect on the adult IQs of the black children adopted into them is, in fact, hardly surprising. 

After all, as we have seen, the shared family environment generally has no effect on IQ, at least by the time the person being tested has reached adulthood.[10]

One would therefore not expect adoptive homes, howsoever white and upper-middle-class, to have any effect on adult IQs of the black children adopted into them, or indeed of the white or mixed-race children adopted into them. 

In short, adoptive homes have no effect on adult IQ, whether or not the adoptees, or adoptive families, are black, white, brown, yellow, green or purple! 

But, if race differences in intelligence are indeed entirely environmental in origin, then where are these environmental causes to be found, if not in the family environment? 

Harris has an answer: black culture.

According to her, the black adoptees, although raised in white adoptive families, nevertheless still come to identify as ‘black’, and to identify with the wider black culture and social norms. In addition, they may, on account of their racial identification, come to socialize with other blacks in school and elsewhere. 

As a result of this acculturation to African-American norms and culture, they therefore, according to Harris, come to score lower in IQ than their white peers and adoptive siblings. 

But how can we ever test this theory? Is it not untestable, and is this not precisely the problem Jensen identified with previously posited X-factors?

Actually, however, although not discussed by Harris, there is a way of testing this theory – namely, looking at the IQs of black children raised in white families where there is no wider black culture with which to identify, and few if any black peers with whom to socialize.

This, then, brings us to the second of the two studies which Harris’s theory potentially reconciles, namely the famous Eyferth study.  

Here, it was found that the mixed-race children fathered by black American servicemen who had had sexual relationships with German women during the Allied occupation of Germany after World War Two had almost exactly the same average IQ scores as a control group of offspring fathered by white US servicemen during the same time period (Eyferth 1959). 

The crucial difference from the Minnesota study may be that these children, raised in an almost entirely monoracial, white Germany in the mid-twentieth century, had no wider African-American culture with which to identify or whose norms to adopt, and few if any black or mixed-race peers in their vicinity with whom to socialize. 

This, then, is perhaps the last lifeline for a purely environmentalist theory of race differences in intelligence – namely the theory that African-American culture depresses intelligence. 

Unfortunately, however, this proposition – namely, that African-American culture depresses your IQ – is almost as politically unpalatable and politically-incorrect as is the notion that race differences in intelligence reflect innate genetic differences.[11]

Endnotes

[1] Thus, this ancient wisdom is reflected, for example, in many folk sayings, such as ‘the apple does not fall far from the tree’, ‘a chip off the old block’ and ‘like father, like son’, many of which long predate both Darwin’s theory of evolution and Mendel’s work on heredity, let alone the modern work of behavioural geneticists.

[2] It is important to emphasize here that this applies only to psychological outcomes, and not, for example, economic outcomes. For example, a child raised by wealthy parents is indeed likely to be wealthier than one raised in poverty, if only because s/he is likely to inherit (some of) the wealth of his or her parents. It is also possible that s/he may, on average, obtain a better job as a consequence of the opportunities opened up by this privileged upbringing. However, his or her IQ will be no higher than had s/he been raised in relative poverty, and neither will s/he be any more or less likely to suffer from a mental illness.

[3] Similarly, it is often claimed that children raised in care homes, or in foster care, tend to have negative life-outcomes. However, again, this by no means proves that it is care homes or foster care that causes these negative life-outcomes. On the contrary, since children who end up in foster care are typically either abandoned by their biological parents, or forcibly taken from their parents by social services on account of the inadequate care provided by the latter, or sometimes outright abuse, it is obvious that their parents represent an unrepresentative sample of society as a whole. An obvious alternative explanation, then, is that the children in question simply inherit the dysfunctional personality attributes of their biological parents, namely the very dysfunctional personality attributes that caused the latter to either abandon their children or have them removed by the social services. (In other cases, such children may have been orphaned. However, this is less common today. At any rate, parents who die before their offspring reach maturity are surely also unrepresentative of parents in general. For example, many may live high-risk lifestyles that contribute to their early deaths.)

[4] Likewise, the heritability of such personality traits as conscientiousness and self-discipline, in addition to intelligence, likely also partly account for the association between parental income and academic attainment among their offspring, since both academic attainment, and occupational success, require the self-discipline to work hard to achieve success. These factors, again in addition to intelligence, likely also contribute to the association between parental income and the income and socioeconomic status ultimately attained by their offspring.

[5] This possibility could, at least in theory, be ruled out by longitudinal studies, which could investigate whether the spanking preceded the misbehaviour, or vice versa. However, this is easier said than done, since, unless relying on reports from the caregivers or children themselves, which depend on both their memory and their honesty, it would have to involve intensive, long-term and continuous observation in order to establish which came first, namely the pattern of misbehaviour, or the adoption of physical chastisement as a method of discipline. This would, presumably, require continuous observation from birth onwards, so as to ensure that the very first instance of spanking or excessive misbehaviour was recorded. Such a study would seem all but impossible and certainly, to my knowledge, has yet to be conducted.

[6] The fact that the relevant environmental variables must be sought outside the family home is one reason why the terms ‘between-family environment’ and ‘within-family environment’, sometimes used as synonyms or alternatives for ‘shared’ and ‘non-shared family environment’ respectively, are potentially misleading. Thus, the ‘within-family environment’ refers to those aspects of the environment that differ for different siblings even within a single family. However, these factors may differ within a single family precisely because they occur outside, not within, the family itself. The terms ‘shared’ and ‘non-shared family environment’ are therefore to be preferred, so as to avoid any potential confusion these alternative terms could cause.

[7] Both practical and ethical considerations, of course, prevent Watson from actually creating his “own specified world” in which to bring up his “dozen healthy infants”. Therefore, no one is able to put his claim to the test. It is therefore unfalsifiable and Watson is therefore free to make such boasts, safe in the knowledge that there is no danger of his actually being called to make good on his claims and thereby proven wrong.

[8] Actually, even if race differences in IQ are found to disappear after controlling for socioeconomic status, it would be a fallacy to conclude that this means that the differences in IQ are entirely a result of differences in social class and that there is no innate difference in intelligence between the races. After all, differences in socioeconomic status are in large part a consequence of differences in cognitive ability, as more intelligent people perform better at school, and at work, and hence rise in socioeconomic status. Therefore, in controlling for socioeconomic status, one is, in effect, also controlling for differences in intelligence, since the two are so strongly correlated. The contrary assumption has been termed by Jensen the ‘sociologist’s fallacy’.
This fallacy involves the assumption that it is differences in socioeconomic status that cause differences in IQ, rather than differences in intelligence that cause differences in socioeconomic status. As Arthur Jensen explains it:

“If SES [i.e. socioeconomic status] were the cause of IQ, the correlation between adults’ IQ and their attained SES would not be markedly higher than the correlation between children’s IQ and their parents’ SES. Further, the IQs of adolescents adopted in infancy are not correlated with the SES of their adoptive parents. Adults’ attained SES (and hence their SES as parents) itself has a large genetic component, so there is a genetic correlation between SES and IQ, and this is so within both the white and the black populations. Consequently, if black and white groups are specially selected so as to be matched or statistically equated on SES, they are thereby also equated to some degree on the genetic component of IQ” (The g Factor: p491).

[9] Actually, at least some of these theories are indeed testable and potentially falsifiable. With regard to the factors quoted by Jensen (namely, “a history of slavery, social oppression, and racial discrimination, white racism… and minority status consciousness”), one way of testing these theories is to look at test scores in those countries where there is no such history. For example, in sub-Saharan Africa, as well as in Haiti and Jamaica, blacks are in the majority, and are moreover in control of the government. Yet the IQ scores of the indigenous populations of sub-Saharan Africa are actually even lower than among blacks in the USA (see Richard Lynn’s Race Differences in Intelligence: reviewed here). True, most such countries still have a history of racial oppression and discrimination, albeit in the form of European colonialism rather than racial slavery or segregation in the American sense. However, in those few sub-Saharan African countries that were never colonized by western powers, or only briefly colonized (e.g. Ethiopia, Liberia), scores are not any higher. Also, other minority groups ostensibly or historically subject to racial oppression and discrimination (e.g. Ashkenazi Jews, Overseas Chinese) actually score higher in IQ than the host populations that ostensibly oppress them. As for “the ‘black experience’”, this merely begs the question as to why the ‘black experience’ has been so similar, and resulted in the same low IQs, in so many different parts of the world, something implausible unless the ‘black experience’ itself reflects innate aspects of black African psychology. 

[10] The fact that the heritability of intelligence is higher in adulthood than during childhood, and the influence of the shared family environment correspondingly decreases, has been interpreted as reflecting the fact that, during childhood, our environments are shaped, to a considerable extent, by our parents. For example, some parents may encourage activities that may conceivably enhance intelligence, such as reading books and visiting museums. In contrast, as we enter adulthood, we begin to have freedom to choose and shape our own environments, in accordance with our interests, which may be partly a reflection of our heredity.
Interestingly, this theory suggests that what is biologically inherited is not necessarily intelligence itself, but rather a tendency to seek out intelligence-enhancing environments, i.e. intellectual curiosity rather than intelligence as such. In fact, it is probably a mixture of both factors. Moreover, intellectual curiosity is surely strongly correlated with intelligence, if only because it requires a certain level of intelligence to appreciate intellectual pursuits, since, if one lacks the ability to learn or understand complex concepts, then intellectual pursuits are necessarily unrewarding.

[11] Thus, ironically, the recently deceased James Flynn, though always careful, throughout his career, to remain on the politically-correct radical environmentalist side of the debate regarding the causes of race differences in intelligence, nevertheless found himself taken to task by the leftist, politically-correct British Guardian newspaper for a sentence in his book Does Your Family Make You Smarter?, where he described American blacks as coming “from a cognitively restricted subculture” (Wilby 2016). Thus, whether one attributes lower black IQs to biology or to culture, either answer is certain to offend leftists, and the power of political correctness can, it seems, never be appeased.

References 

Belsky, Steinberg & Draper (1991) Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization Child Development 62(4): 647-670 

Draper & Harpending (1982) Father Absence and Reproductive Strategy: An Evolutionary Perspective Journal of Anthropological Research 38:3: 255-273 

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana, 2, 102–114

Levin (1994) Comment on Minnesota Transracial Adoption Study. Intelligence. 19: 13–20

Lynn, R (1994) Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence. 19: 21–27

Scarr & Weinberg (1976) IQ test performance of black children adopted by White families. American Psychologist 31(10):726–739 

Weinberg, Scarr & Waldman, (1992) The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence Intelligence 16:117–135 

Whitney (1996) Shockley’s experiment. Mankind Quarterly 37(1): 41-60

Wilby (2016) Beyond the Flynn effect: New myths about race, family and IQ? Guardian, September 27.

Richard Lynn’s ‘Race Differences in Intelligence’: Useful as a Reference Work, But Biased as a Book

Race Differences in Intelligence: An Evolutionary Analysis, by Richard Lynn (Augusta, GA: Washington Summit, 2006)

Richard Lynn’s ‘Race Differences in Intelligence’ is structured around his massive database of IQ studies conducted among different populations. This collection seems to be largely recycled from his earlier IQ and the Wealth of Nations, and subsequently expanded, revised and reused again in IQ and Global Inequality, The Global Bell Curve, and The Intelligence of Nations (as well as a newer edition of Race Differences in Intelligence, published in 2015). 

Thus, despite its subtitle, “An Evolutionary Analysis”, the focus is very much on documenting the existence of race differences in intelligence, not explaining how or why they evolved. The “Evolutionary Analysis” promised in the subtitle is actually almost entirely confined to the last three chapters. 

The choice of subtitle is therefore misleading, and presumably represents an attempt to cash in on the recent rise in popularity of evolutionary psychology and other sociobiological explanations for human behaviour. 

However, whatever the inadequacies of Lynn’s theory of how and why race differences in intelligence evolved (discussed below), his documentation of the existence of these differences is indeed persuasive. The sheer number of studies, and their relative consistency across time and place, suggest that the differences are indeed real and that there is therefore something to be explained in the first place. 

In this respect, it aims to do something similar to what was achieved by Audrey Shuey’s The Testing of Negro Intelligence, first published in 1958, which brought together a huge number of studies, and a huge amount of data, regarding the black-white test score gap in the US. 

However, whereas Shuey focused almost exclusively on the black-white test-score gap in North America, Lynn’s ambition is much broader – namely, to review data relating to the intelligence of all racial groups everywhere on earth. 

Thus, Lynn declares that: 

“The objective of this book [is] to broaden the debate from the local problem of the genetic and environmental contributions to the difference between whites and blacks in the United States to the much larger problem of the determinants of the global differences between the ten races whose IQs are summarised” (p182). 

Therefore, his book purports to be: 

“The first fully comprehensive review… of the evidence on race differences in intelligence worldwide” (p2). 

Racial Taxonomy

Consistent with this, Lynn includes in his analysis data for many racial groups that rarely receive much, if any, coverage in previous works on the topic of race differences in intelligence.

Relying on both morphological criteria and genetic data gathered by Cavalli-Sforza et al in The History and Geography of Human Genes, Lynn identifies ten separate human races. These are: 

1) “Europeans”; 
2) “Africans”; 
3) “Bushmen and Pygmies”; 
4) “South Asians and North Africans”; 
5) “Southeast Asians”; 
6) “Australian Aborigines”; 
7) “Pacific Islanders”; 
8) “East Asians”; 
9) “Arctic Peoples”; and 
10) “Native Americans”.

Each of these racial groups receives a chapter of its own, and, in each chapter, Lynn reviews published (and occasionally unpublished) studies that provide data on the group’s: 

  1. IQs;
  2. Reaction times when performing elementary cognitive tasks; and
  3. Brain size.

Average IQs 

The average IQs reported by Lynn are, he informs us, corrected for the Flynn Effect – i.e. the rise in IQs over the last century (p5-6).  

However, the Flynn Effect has occurred at different rates in different regions of the world. Likewise, the various environmental factors that have been proposed as possible explanations for the phenomenon (e.g. improved nutrition and health as well as increases in test familiarity, and exposure to visual media) have varied in the extent to which they are present in different places. Correcting for the Flynn Effect is therefore surely easier said than done. 

IQs of “Hybrid Populations”

Lynn also discusses the average IQs of racially-mixed populations, which are, Lynn reports, consistently intermediate between the average IQs of the two (or more) parent populations.

However, one exception not discussed by Lynn is that recent African immigrants to the US outperform African-Americans both academically and economically, even though, as discussed by African businessman Chanda Chisala, African immigrants tend to be of unadulterated sub-Saharan African ancestry whereas African-Americans are actually a mixed-race population with considerable European ancestry (Chisala 2015a; 2015c; Anderson 2015; see below).

Moreover, both, on the one hand, hybrid vigour or heterosis and, on the other, hybrid incompatibility or outbreeding depression could potentially complicate the assumption that racial hybrids should have average IQs intermediate between the average IQs of the two (or more) parent populations. 

However, Lynn only alludes to the possible effect of hybrid vigour in relation to biracial people in Hawaii, not in relation to other hybrid populations whose IQs he discusses, and never discusses the possible effect of hybrid incompatibility or outbreeding depression at all. 

Genotypic IQs 

Finally, Lynn also purports to estimate what he calls the “genotypic IQ” of at least some of the races discussed. This is a measure of genetic potential, distinguished from their actual realized phenotypic IQ. 

He defines the “genotypic IQ” of a population as the average score of a population if they were raised in environments identical to those of the group with whom they are being compared. 

Thus, he writes: 

“The genotypic African IQ… is the IQ that Africans would have if they were raised in the same environment as Europeans” (p69). 

The fact that lower-IQ groups generally provide their offspring with inferior environmental conditions precisely because of their lower intelligence is therefore irrelevant for determining their “genotypic IQ”. However, as Lynn himself later acknowledges: 

“It is problematical whether the poor nutrition and health that impair the intelligence of many third world peoples should be regarded as a purely environmental effect or as to some degree a genetic effect arising from the low intelligence of the populations that makes them unable to provide good nutrition and health for their children” (p193). 

Also, Lynn does not explain why he uses Europeans as his comparison group – i.e. why the African genotypic IQ is “the IQ that Africans would have if they were raised in the same environment as Europeans”, as opposed to, say, the IQ they would have if raised in the same environments as East Asians or Middle Eastern populations, or indeed in their own environments. 

Presumably this reflects historical factors – namely, Europeans were the first racial group to have their IQs systematically measured – the same reason that European IQs are arbitrarily assigned an average score of 100. 

Reaction Times 

Reaction times refer to the time taken to perform so-called elementary cognitive tasks. These are tests where everyone can easily work out the right answer, but where the speed with which different people get there correlates with IQ. 

Arthur Jensen has championed reaction time as a (relatively more) direct measure of one key cognitive process underlying IQ, namely speed of mental processing. 

Yet individuals with quicker reaction times would presumably have an advantage in sports, since reacting to, say, the speed and trajectory of a ball in order to strike or catch it is analogous to an elementary cognitive task. 

However, despite lower IQs, African-Americans, and blacks resident in other western economies, are vastly overrepresented among elite athletes. 
 
To explain this paradox, Lynn distinguishes “reaction time proper” – i.e. when one begins to move one’s hand towards the correct button to press – from “movement time” – how long one’s hand takes to get there. 

Whereas whites generally react faster, Lynn reports that blacks have faster movement times (p58-9).[1] Thus, Lynn concludes: 

“The faster movement times of Africans may be a factor in the fast sprinting speed of Africans shown in Olympic records” (p58). 

However, psychologist Richard Nisbett reports that: 

“Across a host of studies, movement times are just as highly correlated with IQ as reaction times” (Intelligence and How to Get It: p222). 

Brain Size

Lynn also reviews data regarding the brain-size of different groups. 

The correlation between brain-size and IQ as between individuals is well-established (Rushton and Ankney 2009). 
 
As between species, brain-size is also thought to correlate with intelligence, at least after controlling for body-size. Thus, species whose behaviours suggest high intelligence (e.g. dolphins, chimpanzees, corvids, some parrots) also tend to have large brains as compared to other species of similar body-size.

Indeed, since brain tissue is highly metabolically expensive, increases in brain-size would surely never have evolved without conferring some countervailing selective advantage such as increased intelligence. 

Thus, in the late-1960s, biologist HJ Jerison developed an equation to estimate an animal’s intelligence from its brain- and body-size alone. The resulting measure is called the animal’s encephalization quotient.
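Jerison’s quotient is, in essence, the ratio of an animal’s actual brain weight to the brain weight expected for a typical mammal of the same body weight, the latter scaling as roughly the two-thirds power of body weight. In the form commonly attributed to Jerison (with brain weight E and body weight P both in grams):

```latex
EQ = \frac{E}{0.12\,P^{2/3}}
```

An EQ of 1 indicates a brain exactly as large as expected for the animal’s body-size, while values well above 1, as in humans, dolphins and corvids, indicate relatively enlarged brains.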
 
However, comparing the intelligence of different species poses great difficulties. In short, if you think a ‘culture-fair’ intelligence test is an impossibility, then try designing a ‘species-fair’ test![2]

Moreover, dwarves have smaller absolute brain-sizes, but larger brains relative to body-size, yet usually have IQs within the normal range.

This is probably because dwarfism is an abnormal and pathological condition – a malfunction in growth and development. Therefore, the increased brain volume relative to body-size associated with disproportionate dwarfism did not evolve through natural selection. Despite its metabolic cost, the additional brain tissue may then indeed confer no adaptive advantage. 

Similarly, some forms of macrocephaly (i.e. abnormally large head and brain size) actually seem to be associated with impaired cognitive ability, probably because the condition reflects a malfunction in brain growth, such that the additional brain tissue may again be without adaptive function.

Sex differences in IQ, meanwhile, are smaller than those between races even though differences in brain-size are greater, at least before one introduces controls for body-size.

Also, Neanderthals had larger brains than modern humans, despite a shorter, albeit more robust, stature.

One theory has it that population differences in brain-size reflect a climatic adaptation that evolved in order to regulate temperature, in accordance with Bergmann’s Rule. This seems to be the dominant view among contemporary biological anthropologists, at least those who deign (or dare) even to discuss this politically charged topic.[3] 

Thus, in one recent undergraduate textbook in biological anthropology, authors Mielke, Konigsberg and Relethford contend: 

“Larger and relatively broader skulls lose less heat and are adaptive in cold climates; small and relatively narrower skulls lose more heat and are adaptive in hot climates” (Human Biological Variation: p285). 

On this view, head size and shape represents a means of regulating the relative ratio of surface-area-to-volume, since this determines the proportion of a body that is directly exposed to the elements.

Thus, Stephen Molnar, the author of a competing undergraduate textbook in biological anthropology, observes: 

“The closer a structure approaches a spherical shape, the lower will be the surface-to-volume ratio. The reverse is true as elongation occurs—a greater surface area to volume is formed, which results in more surface to dissipate heat generated within a given volume. Since up to 80 percent of our body heat may be lost through our heads on cold days, one can appreciate the significance of shape” (Human Variation: Races, Types and Ethnic Groups, 5th Ed: p188).
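The geometric reasoning here can be made explicit. For a sphere of radius r, the ratio of surface area to volume is:

```latex
\frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}
```

The ratio thus falls as size increases, and, for a given volume, the sphere is the shape with the least surface area – which is why larger, rounder skulls are held to conserve heat, and smaller, more elongated ones to dissipate it.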

This then might explain why, despite the relatively primitive state of their pre-contact civilization and their not especially high IQ scores (see below), those whom Lynn terms “Arctic Peoples” (i.e. Eskimos) have, according to Lynn’s data, the largest brains of any of the racial groups whom he discusses.[4]

The BermannAllen rules likely also explain at least some of the variation in body-size and stature as between racial groups. 

For example, Eskimos tend to be short and stocky, with short arms and legs and flat faces. This minimizes the ratio of surface-area-to-volume, ensures only a minimal proportion of the body is directly exposed to the elements, and also minimizes the extent of extremities (e.g. arms, legs, noses), which are especially vulnerable to the cold and frostbite. 

In contrast, populations from tropical climates, such as African blacks and Australian Aboriginals, tend to have relatively long arms and legs as compared to trunk size, a factor which likely contributes towards their success in some athletic events. 

Yet, interestingly, Beals et al report that:

“Braincase volume is more highly correlated with climate than any of the summative measures of body-size” (Beals et al 1984: p305).

In other words, brain-size is more strongly correlated with the climate in which one’s ancestors evolved than is overall body-size or other bodily dimensions.

Why this is so is not clear. One might perhaps infer that it is because head size and shape are especially important in the regulation of temperature.

In fact, however, contrary to popular wisdom, humans do not lose an especially high proportion of body heat through the head – certainly not “up to 80 percent”, as claimed in Stephen Molnar’s textbook quoted above – a preposterous figure, given that the head comprises only about 10% of the body’s overall surface area.

Indeed, the amount of heat lost through our head is relatively higher than that lost through other parts of the body only because other parts of the body are typically covered by clothing.

At any rate, it is surely implausible that an increase in brain tissue, which is metabolically highly expensive, would have evolved solely for the purpose of regulating temperature, when the same result could surely have been achieved by modifying only the external shape of the skull.

Conversely, even if race differences in brain-size did evolve purely for temperature regulation, differences in intelligence could still have emerged as a by-product of such selection.

In other words, if larger brains did evolve among populations inhabiting colder latitudes solely for the purposes of temperature regulation, the extra brain tissue may nevertheless have produced greater levels of cognitive ability among these populations, even if there was no direct selection for increased cognitive ability itself.

Europeans

The first racial group discussed by Lynn are those he terms “Europeans” (i.e. white Caucasians). He reviews data on IQ both in Europe and among diaspora populations elsewhere in the world (e.g. North America, Australia). 

The results are consistent, almost always giving an average IQ of about 100 – though this figure is, of course, arbitrary and reflects the fact that IQ tests were first normed by reference to European populations. This is what James Thompson refers to as the ‘Greenwich mean IQ’ and the IQs of all other populations in Lynn’s book are calculated by reference to this figure. 
 
Southeast Europeans, however, score slightly lower. This, Lynn argues, is because: 

“Balkan peoples are a hybrid population or cline, comprising a genetic mix between the Europeans and South Asians in Turkey” (p18). 

Therefore, as a hybrid population, the Balkan peoples have IQs intermediate between those of the two parent populations, and, according to Lynn, South Asians score somewhat lower in IQ than do white European populations (see below). Similarly, the Turkish people, being intermixed with Europeans, score slightly higher than other Middle-Eastern populations (p80).

In the newer 2015 edition, Lynn argues that IQs are somewhat lower elsewhere in southern Europe, namely southern Spain and Italy, for much the same reason, namely because: 

“The populations of these regions are a genetic mix of European people with those from the Near East and North Africa, with the result that their IQs are intermediate between the parent populations” (Preface, 2015 Edition).[5]

An alternative explanation is that these regions (e.g. Balkan countries, Southern Italy) have lower living-standards. 

However, instead of viewing differences in living standards as causing the differences in recorded IQs between populations, Lynn argues that differences in innate ability themselves cause differences in living standards, because, according to Lynn, more intelligent populations are better able to achieve high levels of economic development (see IQ and the Wealth of Nations).[6]

Moreover, Lynn observes that in Eastern Europe, living standards are substantially below elsewhere in Europe as a consequence of the legacy of communism. However, populations from Eastern Europe score only slightly below those from elsewhere in Europe, suggesting that even substantial differences in living-standards may have only a minor impact on IQ (p20). 

Portuguese 

The Portuguese also, Lynn claims, score lower than elsewhere in Europe. 

However, he cites just two studies. These give average IQs of 101 and 88 respectively, which Lynn averages to 94.5 (p19). 

Yet these two results are highly divergent, the former actually slightly higher than the average for north-west Europe. This seems an inadequate basis on which to posit a genetic difference in ability. 

However, from this meagre data set, Lynn does not hesitate to provocatively conclude: 

“Intelligence in Portugal has been depressed by the admixture of sub-Saharan Africans. Portugal was the only European country to import black slaves from the fifteenth century onwards” (p19). 

This echoes nineteenth-century French racialist Arthur de Gobineau’s infamous theory that empires decline because, through their empires, they conquer large numbers of inferior peoples, who then inevitably interbreed with their conquerors, which, according to de Gobineau, results in the dilution of the very qualities that permitted their imperial glories in the first place. 

In support of Lynn’s theory, mitochondrial DNA studies have indeed found a higher frequency of the sub-Saharan African haplogroup L in Portugal than elsewhere in Europe (e.g. Pereira et al 2005). 

Ireland and ‘Selective Migration’

IQs are also, Lynn reports, somewhat lower than elsewhere in Europe in Ireland. 

Lynn cites four studies of Irish IQs which give average scores of 87, 97, 93 and 91 respectively. Again, these are rather divergent but nevertheless consistently below the European average, all but one substantially so. 
 
Of course, in England, in less politically correct times, the supposed stupidity of the Irish was once a staple of popular humour, Irish jokes being the English equivalent of Polish jokes in America.[7]
 
However, the low IQ scores reported for Ireland seem anomalous given the higher average IQs recorded elsewhere in North-West Europe, especially the UK, Ireland’s next-door neighbour, whose populations are closely related to those in Ireland.

Also, in relation to Lynn’s Cold Winters Theory (see below), the climate in Ireland is quite cold.

Moreover, although head size is obviously a crude, indirect measure of brain size, it is perhaps worth observing that Carleton Coon reported in 1939 that Ireland has “the largest heads of any country excepting Belgium”, head-size being especially large in the southwestern half of Ireland (The Races of Europe: p265). Thus, Coon reports that overall:

“Ireland consistently has the largest head size of any equal land area in Europe” (The Races of Europe: p377).

Of course, historically Ireland was, until relatively recently, quite poor by European standards. 

It is also sparsely populated, and a relatively high proportion of the population live in rural areas; there is some evidence that people from rural areas have lower average IQs than those from urban areas.

However, economic deprivation cannot explain the disparity. Today, despite the 2008 economic crash and subsequent international bailout, Ireland enjoys, according to the UN, a higher Human Development Index than does the UK, and has done for some time. Indeed, by this measure, Ireland enjoys one of the highest standards of living in the world.

Moreover, although formerly Ireland was much poorer, the studies cited by Lynn were published from 1973 to 1993, yet show no obvious increase over time.[8] 
 
Lynn himself attributes the depressed Irish IQ to what he calls ‘selective migration’, claiming: 

“There has been some tendency for the more intelligent to migrate, leaving less intelligent behind” (p19). 

Of course, this would suggest, not only that the remaining Irish would have lower average IQs, but also that the descendants of Irish émigrés in Britain, Australia, America and other diaspora communities would have relatively higher IQs than other white people. 

In support of this, Americans reporting Irish ancestry do indeed enjoy higher relative incomes as compared to most other white American ethnicities. 

Interestingly, Lynn also invokes “selective migration” to explain the divergences in East Asian IQs. Here, however, it was supposedly the less intelligent who chose to migrate (p136; p138; p169).[9]

Meanwhile, other hereditarians have sought to explain away the impressive academic performance of recent African immigrants to the West (see below), and their offspring, by reference to selective immigration of high IQ Africans, an explanation which is wholly inadequate on mathematical grounds alone (see Chisala 2015b; 2019).

It certainly seems plausible that migrants differ in personality from those who choose to remain at home. It is likely that they are braver, have greater determination, drive and willpower than those who choose to stay behind. They may also perhaps be less ethnocentric, and more tolerant of foreign cultures.[10]

However, I see no obvious reason they would differ in intelligence.

As African businessman Chanda Chisala writes:

“Realizing that life is better in a very rich country than in your poor country is never exactly the most g-loaded epiphany among Africans” (Chisala 2015b).

Likewise, it likely didn’t take much brain-power for Irish people to realize during the Irish Potato Famine that they were less likely to starve to death if they emigrated abroad.

Of course, wealth is correlated with intelligence and may affect the decision to migrate.

The rich usually have little economic incentive to migrate, while the poor may be unable to afford the often-substantial costs of migration (e.g. transportation).

However, without actual historical data showing certain socioeconomic classes or intellectual ability groups were more likely to migrate than others, Lynn’s claims regarding ‘selective migration’ represent little more than a post-hoc rationalization for IQ differences that are otherwise anomalous and not easily explicable in terms of heredity.

Ireland, Catholicism and Celibacy

Interestingly, in the 2015 edition of ‘Race Differences in Intelligence’, Lynn also proposes a further explanation for the low IQs supposedly found in Ireland, namely the clerical celibacy demanded under Catholicism. Thus, Lynn argues:

“There is a dysgenic effect of Roman Catholicism, in which clerical celibacy has reduced the fertility of some of the most intelligent, who have become priests and nuns” (2015 Edition; see also Lynn 2015). 

Of course, this theory presupposes that it was indeed the most intelligent among the Irish people who became priests. However, this is a questionable assumption, especially given the well-established inverse correlation between intelligence and religiosity (Zuckerman et al 2013).

However, it is perhaps arguable that, in an earlier age, when religious dogmas were relentlessly enforced, religious scholarship may have been the only form of intellectual endeavour that it was safe for intellectually-minded people to engage in.

Anyone investigating more substantial matters, such as whether the earth revolved around the sun or vice versa, was liable to be burnt at the stake if he reached the wrong (i.e. the right) conclusion.

However, such an effect would surely apply in other historically Catholic countries as well.

Yet there is little if any evidence of depressed IQs in, say, France or Austria, although the populations of both these countries were, until recently, like that of Ireland, predominantly Catholic.[11]

Africans 

The next chapter is titled “Africans”. However, Lynn uses this term to refer specifically to black Africans – i.e. those formerly termed ‘Negroes’. He therefore excludes from this chapter, not only the predominantly ‘Caucasoid’ populations of North Africa, but also African Pygmies and the Khoisan of Southern Africa, who are considered separately in a chapter of their own. 

Lynn’s previous estimate of the average sub-Saharan African IQ as just 70 provoked widespread incredulity and much criticism. However, undeterred, Lynn now goes even further, estimating the average African IQ even lower, at just 67.[12]

Curiously, according to Lynn’s data, populations from the Horn of Africa (e.g. Ethiopia and Somalia) have IQs no higher than populations elsewhere in sub-Saharan Africa.[13]

Yet populations from the Horn of Africa are known to be partly, if not predominantly, Caucasoid in ancestry, having substantial genetic affinities with populations from the Middle East.[14]

Therefore, just as populations from Southern Europe have lower average IQs than other Europeans because, according to Lynn, they are genetically intermediate between Europeans and Middle Eastern populations, so populations from the Horn of Africa should score higher than those from elsewhere in sub-Saharan Africa because of intermixture with Middle Eastern populations.

However, Lynn’s data gives average IQs for Ethiopia and Somalia of just 68 and 69 respectively – no higher than elsewhere in sub-Saharan Africa (The Intelligence of Nations: p87; p141-2).

On the other hand, blacks resident in western economies score rather higher, with average IQs around 85 according to Lynn. 

The only exception, strangely, is the Beta Israel, who also hail from the Horn of Africa but are now mostly resident in Israel, yet who score no higher than those blacks still resident in Africa. From this, Lynn concludes:

“These results suggest that education in western schools does not benefit the African IQ” (p53). 

However, why then do blacks resident in other western economies score higher? Are blacks in Israel somehow treated differently than those resident in the UK, USA or France? 

For his part, Lynn attributes the higher scores of blacks resident in these other Western economies to both superior economic conditions and, more controversially, to racial admixture. 

Thus, African-Americans in particular are known to be a racially-mixed population, with substantial European ancestry (usually estimated at around 20%) in addition to their African ancestry.[15]

Therefore, Lynn argues that the higher IQs of African-Americans reflect, in part, the effect of the European portion of their ancestry. 

However, this explanation is difficult to square with the observation that, as documented by African businessman Chanda Chisala among others, recent African immigrants to the US, themselves presumably largely of unmixed sub-Saharan African descent, actually consistently outperform African-Americans (and sometimes whites as well!) both academically and economically (Chisala 2015a; 2015c; Anderson 2015).[16]

“Musical Ability” 

Lynn also reviews the evidence pertaining to one class of specific mental ability not covered in most previous reviews on the subject – namely, race differences in musical ability. 

The accomplishments of African-Americans in twentieth century jazz and popular music are, of course, much celebrated. To Lynn, however, this represents a paradox, since musical abilities are known to correlate with general intelligence and African-Americans generally have low IQs. 
 
In addressing this perceived paradox, Lynn reviews the results of various psychometric measures of musical ability. These tests include: 

  • Recognizing a change in pitch; 
  • Remembering a tune; 
  • Identifying the constituent notes in a chord; and 
  • Recognizing whether different songs have similar rhythm (p55). 

In relation to these sorts of tests, Lynn reports that African-Americans actually score somewhat lower in most elements of musical intelligence than do whites, and their musical ability is indeed generally commensurate with their general low IQs. 

The only exception is for rhythmical ability. 

This is, of course, congruent with the familiar observation that black musical styles place great emphasis on rhythm.

However, even with respect to rhythmical ability, blacks score no higher than whites. Their scores on measures of rhythmical ability are exceptional only in that rhythm is the one form of musical ability on which blacks score equal to, rather than lower than, whites (p56). 

For Lynn, the low scores of African-Americans in psychometric tests of musical ability are, on further reflection, little surprise. 

“The low musical abilities of Africans… are consistent with their generally poor achievements in classical music. There are no African composers, conductors, or instrumentalists of the first rank and it is rare to see African players in the leading symphony orchestras” (p57). 

However, who qualifies as a composer, conductor or instrumentalist “of the first rank” is, ultimately, unlike the results of psychometric testing, a subjective assessment, as are all artistic judgements. 

Moreover, why is achievement in classical music, an obviously distinctly western genre of music, to be taken as the sole measure of musical accomplishment? 

Even if we concede that the ability required to compose and perform classical music is greater than that required for other genres (e.g. jazz and popular music), musical intelligence surely facilitates composition and performance in other genres too – and, given the financial rewards offered by popular music often dwarf those enjoyed by players and composers of classical music, the more musically-gifted race would have every incentive to dominate this field too. 

Perhaps, then, these psychometric measures fail to capture some key element of musical ability relevant to musical accomplishment, especially in genres other than classical. 

In this context, it is notable that no less a champion of standardized testing than Arthur Jensen has himself acknowledged that intelligence tests are incapable of measuring creativity (Langan & LoSasso 2002: p24-5). 

In particular, one feature common to many African-American musical styles, from rap freestyling to jazz, is improvisation.  

Thus, Dinesh D’Souza speculates tentatively that: 

“Blacks have certain inherited abilities, such as improvisational decision making, that could explain why they predominate in… jazz, rap and basketball” (The End of Racism: p440-1). 

Steve Sailer rather less tentatively expands upon this theme, positing an African advantage in: 

“Creative improvisation and on-the-fly interpersonal decision-making” (Sailer 1996). 

On this basis, Sailer concludes that: 

“Beyond basketball, these black cerebral superiorities in ‘real time’ responsiveness also contribute to black dominance in jazz, running with the football, rap, dance, trash talking, preaching, and oratory” (Sailer 1996). 

“Bushmen and Pygmies” 

Grouped together as the subjects of the next chapter are black Africans’ sub-Saharan African neighbours, namely San Bushmen and Pygmies

Quite why these two populations are grouped together by Lynn in a single chapter is unclear. 

He cites Cavalli-Sforza et al in The History and Geography of Human Genes as providing evidence that: 

“These two peoples have distinctive but closely related genetic characteristics and form two related clusters” (p73). 

However, although both groups are obviously indigenous to sub-Saharan Africa and quite morphologically distinct from the other black African populations who today represent the great majority of the population of sub-Saharan Africa, they share no especial morphological similarity to one another.[17]

Moreover, since Lynn acknowledges that they have “distinctive… genetic characteristics and form two… clusters”, they should presumably each have merited a chapter of their own.[18]

One therefore suspects that they are lumped together more for convenience than on legitimate taxonomic grounds. 

In short, both are marginal groups of hunter-gatherers, now few in number, few if any of whom have been exposed to the sort of standardized testing necessary to provide a useful estimate of their average IQs. Therefore, since his data on neither group alone is really sufficient to justify its own chapter, he groups them together in a single chapter.  

However, the lack of IQ data for either group means that even this combined chapter remains one of the shorter chapters in Lynn’s book. Indeed, as we will see, the paucity of reliable data on the cognitive ability of either group leads one to suspect that Lynn might have done better to omit both groups from his survey of race differences in cognitive ability altogether, just as he omitted at least one other phenotypically quite distinct racial group for whom there is presumably again little IQ data, namely the Negrito populations of South and South-East Asia. 

San Bushmen

It may be some meagre consolation to African blacks that, at least in Lynn’s telling, they no longer qualify as the lowest scoring racial group when it comes to IQ. Instead, this dubious honour is now accorded their sub-Saharan African neighbours, San Bushmen
 
In Race: The Reality of Human Differences (which I have reviewed here), authors Vincent Sarich and Frank Miele quote anthropologist and geneticist Henry Harpending as observing: 

“All of us have the impression that Bushmen are really quick and clever and are quite different from their [black Bantu] neighbors… Bushmen don’t look like their black African neighbors either. I expect that there will soon be real data from the Namibian school system about the relative performance of Bushmen… and Bantu kids – or more likely, they will suppress it” (Race: The Reality of Human Differences (reviewed here): p227). 

Today, however, some fifteen or so years after Sarich and Miele published this quotation, the only such data I am aware of is that reported by Lynn in this book, which suggests, at least according to Lynn, a level of intelligence even lower than that of other sub-Saharan Africans. 

Unfortunately, however, the data in question is very limited and, in my view, inadequate to support Lynn’s controversial conclusions regarding Bushmen ability.  

It consists of just three studies, none of which remotely resembles a full IQ test (p74-5). 

Yet, from this meagre dataset, Lynn does not hesitate to attribute to Bushmen an average IQ of just 54. 

If Lynn’s estimate of the average sub-Saharan African IQ at around 70 provoked widespread incredulity, then his much lower estimate for Bushmen is unlikely to fare better. 

Lynn anticipates such a reaction, and responds by pointing out:  

“An IQ of 54 represents the mental age of the average European 8-year-old, and the average European 8-year-old can read, write, and do arithmetic and would have no difficulty in learning and performing the activities of gathering foods and hunting carried out by the San Bushmen. An average 8-year-old can easily be taught to pick berries put them in a container and carry them home, collect ostrich eggs and use the shells for storing water and learn how to use a bow and arrow” (p76). 

Indeed, Lynn continues, other non-human animals survive in difficult, challenging environments with even lower levels of intelligence:  

“Apes with mental abilities about the same as those of human 4-year olds survive quite well as gatherers and occasional hunters and so also did early hominids with IQs around 40 and brain sizes much smaller than those of modern Bushmen. For these reasons there is nothing puzzling about contemporary Bushmen with average IQs of about 54” (p77). 

Here, Lynn makes an important point. Many non-human animals survive and prosper in ecologically challenging environments with levels of intelligence much lower than that of any hominid, let alone any extant human race. 

On the other hand, however, I suspect Lynn would not last long in the Kalahari Desert – the home environment of most contemporary Bushmen.

Pygmies 

Lynn’s data on the IQs of Pygmies is even more inadequate than his data for Bushmen. Indeed, it amounts to just one study, which again fell far short of a full IQ test. 

Moreover, the author of the study, Lynn reports, did not quantify his results, reporting only that Pygmies scored “much worse” than other populations tested using the same test (p78). 

However, while the other populations tested using the same test, and outperforming Pygmies, included “Eskimos, Native American and Filipinos”, they conspicuously did not include other black Africans, or indeed other very low-IQ groups such as Australian Aboriginals (p78). 

Thus, Lynn’s assumption that Pygmies are lower in cognitive ability than other black Africans is not supported even by the single study that he cites. 

Lynn also infers a low level of intelligence for Pygmies from their lifestyle and mode of sustenance: 

“Most of them still retain a primitive hunter-gatherer existence while many of the Negroid Africans became farmers over the last few hundred years” (p78). 

Thus, Lynn assumes that whether a population has successfully transitioned to agriculture is largely a product of their intelligence (p191). 

In contrast, most historians and anthropologists would emphasize the importance of environmental factors in explaining whether a group transitions to agriculture.[19]

Finally, Lynn also infers a low IQ from the widespread enslavement of Pygmies by neighbouring Bantus: 

“The enslavement of Pygmies by Negroid Africans is consistent with the general principle that the more intelligent races generally defeat and enslave the less intelligent, just as Europeans and South Asians have frequently enslaved Africans but not vice versa” (p78). 

However, while it may be a “general principle that the more intelligent races typically defeat and enslave the less intelligent”, if only because, being, on average, superior in military technology, the former are better able to conquer the latter than vice versa, this is hardly a rigid rule. 

After all, Middle Eastern and North African Muslims sometimes enslaved Europeans.[20] Yet, according to Lynn, the Arabs belong to a rather less intelligent race than do the Europeans whom they so often enslaved

Interestingly, Pygmies are the only racial group included in Lynn’s survey for whom he does not provide an actual figure as an estimate of their average IQ, which presumably reflects a tacit admission of the inadequacy of the available data.[21] 

Curiously, unlike for all the other racial groups discussed, Lynn also fails to provide any data on Pygmy brain-size. 

Presumably, Pygmies have small brains as compared to other races, if only on account of their smaller body-size – but what about their brain-size relative to body-size? Is there simply no data available?

Australian Aborigines 

Another group who are barely mentioned at all in most previous discussions of the topic of race differences in intelligence are Australian Aborigines. Here, however, unlike for Bushmen and Pygmies, data from Australian schools are actually surprisingly abundant. 

These give, Lynn reports, an average Aboriginal IQ of just 62 (p104). 

Unlike his estimates for Bushmen and Pygmies, this figure seems to be reliable, given the number of studies cited and the consistency of their results. One might say, then, that Australian Aboriginals have the lowest recorded IQs of any human race for whom reliable data is available. 

Interestingly, in addition to his data on IQ, Lynn also reports the results of Piagetian measures of development conducted among Aboriginals. He reports, rather remarkably, that a large minority of Aboriginal adults fail to reach what Piaget called the concrete operational stage of development with respect to understanding the principle of conservation – in other words, they sometimes fail to recognize that a substance (e.g. a liquid) transferred to a new container necessarily remains of the same quantity (p105-7). 

Perhaps even more remarkable, however, are reports of Aborigine spatial memory (p107-8). This refers to the ability to remember the location of objects, and their locations relative to one another. 

Thus, he reports, one study found that, despite their low general cognitive ability, Aborigines nevertheless score much higher than Europeans in tests of spatial memory (Kearins 1981).  

Another study found no difference in the performance of whites and Aborigines (Drinkwater 1975). However, since Aborigines have much lower IQs overall, even equal performance on spatial memory as against Europeans is still out of sync with the performance of whites and Aborigines on other types of intelligence test (p108). 

Lynn speculates that Aboriginal spatial memory may represent an adaptation to facilitate navigation in a desert environment with few available landmarks.[22]

The difference, Lynn argues, seems to be innate, since it was found even among Aborigines who had been living in an urban environment (i.e. not a desert) for several generations (p108; but see Kearins 1986). 

Two other studies reported lower scores than for Europeans. However, one was an unpublished dissertation and hence must be treated with caution, while the other (Knapp & Seagrim 1981) “did not present his data in such a way that the magnitude of the white advantage can be calculated” (p108). 

Intriguingly, Lynn reports that this ability even appears to be reflected in neuroanatomy. Thus, despite smaller brains overall, Aborigines’ right visual cortex, implicated in spatial ability, is relatively larger than in Europeans (Klekamp et al 1987; p108-9).

New Guineans and Jared Diamond 

In his celebrated Guns, Germs and Steel (reviewed here), Jared Diamond famously claimed: 

“In mental ability New Guineans are probably genetically superior to Westerners, and they surely are superior in escaping the devastating developmental disadvantages under which most children in industrialized societies grow up” (Guns, Germs and Steel: p21). 

Diamond bases this claim on the fact that, in the West, survival, throughout most of our recent history, depended on who was struck down by disease, which was largely random. 

In contrast, in New Guinea, he argues, people had to survive on their wits, with survival depending on one’s ability to procure food and avoid homicide, activities in which intelligence was likely to be at a premium (Guns, Germs and Steel: p20-21). 

He also argues that the intelligence of western children is likely reduced because they spend too much time watching television and movies (Guns, Germs and Steel: p21). 

However, there is no evidence television has a negative impact on children’s cognitive development. Indeed, given the rise in IQs over the twentieth century has been concomitant with increases in television viewing, it has even been speculated that increasingly stimulating visual media may have contributed to rising IQs. 

On the basis of two IQ studies, plus three studies of Piagetian development, Lynn concludes that the average IQ of indigenous New Guineans is just 62 (p112-3). 

This is, of course, exactly the same as his estimate for the average IQ of Australian Aboriginals.

It is therefore consistent with Lynn’s racial taxonomy, since, citing Cavalli-Sforza et al, he classes New Guineans as belonging to the same genetic cluster, and hence the same race, as Australian Aboriginals (p101). 

Pacific Islanders 

Other Pacific Islanders, however, including Polynesians, Micronesians, Melanesians and Hawaiians, are grouped separately and hence receive a chapter of their own. 

They also, Lynn reports, score rather higher in IQ, with most such populations having average IQs of about 85 (p117). However, the Māoris of New Zealand score rather higher, with an average IQ of about 90 (p116). 

Hawaiians and Hybrid Vigor 

For the descendants of the inhabitants of one particular group of Pacific islands, namely Hawaii, Lynn also reports data regarding the IQs of racially-mixed individuals, both those of part-Native-Hawaiian and part-East-Asian ancestry, and those of part-Native-Hawaiian and part-European ancestry. 

These racial hybrids, as expected, score on average between the average scores for the two parent populations. However, Lynn reports: 

“The IQs of the two hybrid groups are slightly higher than the average of the two parent races. The average IQ of the Europeans and Hawaiians is 90.5, while the IQ of the children is 93. Similarly, the average IQ of the Chinese and Hawaiians is 90, while the IQ of the children is 91. The slightly higher than expected IQs of the children of the mixed race parents may be a hybrid vigor or heterosis effect” (p118). 

Actually, the difference between the “expected IQs” and the IQs actually recorded for the hybrid groups is so small (only one point for the Chinese-Hawaiians) that it could easily be dismissed as mere noise, and I doubt it would reach statistical significance. 

Nevertheless, Lynn’s discussion raises the question of why hybrid vigor has not similarly elevated the IQs of the other hybrid, or racially-mixed, populations discussed elsewhere in the book, and why Lynn does not address this issue when reporting their average IQs. 

Of course, while hybrid vigor is a real phenomenon, so are outbreeding depression and hybrid incompatibilities. 

Presumably then, which of these countervailing effects outweighs the other for different types of hybrid depends on the degree of genetic distance between the two parent populations. This, of course, varies for different races. 

It is therefore possible that some racial mixes may tend to elevate intelligence, whereas others, especially between more distantly-related populations, may tend, on average, to depress intelligence. 

For what it’s worth, Pacific Islanders, including Hawaiians, are thought to be genetically closer to East Asians than to Europeans. 

South Asians and North Africans

Another group rarely treated separately in earlier works are those whom Lynn terms “South Asians and North Africans”, though this group also includes populations from the Middle East

Physical anthropologists often lumped these peoples together with Europeans as collectively “Caucasian” or “Caucasoid”. However, while acknowledging that they are “closely related to the Europeans”, Lynn cites Cavalli-Sforza et al as showing they form “a distinctive genetic cluster” (p79).

Certainly, there are genetic and phenotypic differences between Europeans and MENA populations. However, they are very much clinal in nature, so precisely where one should draw the line between these ostensible races is a matter for dispute. Thus, to say that Greeks are ‘European’ and hence a different race from Turkish people arguably says more about geographic convention, current political borders and religious differences than it does about genetics, let alone race.

Science writer Nicholas Wade reports that, although the various peoples of the so-called “Caucasoid race” do indeed cluster together, more fine-grained analyses reveal “two other major clusters”, one of which is “formed by the people of Central and South Asia, including India and Pakistan”, the other of which equates to “the Middle East, where there is considerable admixture with people from Europe and Africa” (A Troublesome Inheritance: p98; Li et al 2008).

This suggests that Lynn may indeed be justified in separating Europeans from Middle Eastern, North African and South Asian populations, but that the latter should perhaps themselves be divided between, on the one hand, Middle Eastern populations (perhaps including North Africans) and, on the other, the peoples of Central and South Asia. Certainly, grouping the dark-complexioned Dravidian-speaking communities of South India with predominantly Arabic-speaking North Africans, whose respective homelands lie several thousand miles apart, reflects a crude and arguably distinctly Eurocentric conception of racial differentiation.

At any rate, while they may be genetically and even phenotypically quite distinct from one another, all the peoples grouped together by Lynn as “South Asians and North Africans” nevertheless do indeed perform very similarly in IQ tests, at least according to the findings cited by Lynn. They also, he reports, score substantially lower than do their fellow Caucasoids, white Europeans.

Thus, the average IQ of North Africans, South Asians and Middle Eastern populations in their indigenous homelands is, Lynn reports, just 84 (p80), while South Asians resident in the UK score only slightly higher, with an average IQ of just 89 (p82-4).

This conclusion is surely surprising and should, in my opinion, be treated with caution. 

For one thing, all of the earliest known human civilizations – namely, Mesopotamia, Egypt and the Indus Valley civilization – emerged among these peoples, or at least in regions today inhabited primarily by people of this race.[23]

Moreover, people of Indian ancestry in particular are today regarded as a model minority in both Britain and America, and their overrepresentation in the professions, especially medicine, is widely commented upon.

Meanwhile, other groups originating in the Middle East and North Africa, notably the Lebanese and Iranians (and, of course, Jews: see below), also tend to be economically successful when transplanted to other parts of the world, such as North America, Latin America and West Africa.

Indeed, according to some measures, British-Indians are now the highest earning ethnicity in Britain, or the second-highest earning group after the Chinese, and Indians are also the highest earners in the USA, with Iranians and Lebanese ranking third and seventh respectively.

Yet all these rankings oddly omit another ethnic group which also traces at least part of its ancestry to this part of the world and which surely outranks all other ethnicities in terms of disproportionate wealth – namely, Jewish people, who are discussed in the next section of this review.

Interestingly, in this light, one study cited by Lynn showed a massive gain of 14 points for children from India who had been resident in the UK for more than four years as compared to those resident for less than four years, the former scoring almost as high in IQ as the indigenous British, with an average IQ of 97 (p83-4; Mackintosh & Mascie-Taylor 1985).[24]

In the light of this finding, it would be interesting to measure the IQs of a sample composed exclusively of people who trace their ancestry to India but who have been resident in the UK for the entirety of their lives (or even whose ancestors have been resident in the UK for successive generations). The other studies cited by Lynn of the IQs of Indian children in the UK presumably group recent arrivals and long-term residents together, yet many British-Indians have now been resident in the UK for multiple generations.

However, the co-author of this paper, Nicholas Mackintosh, claims, in his own review of Lynn’s book, that the results of his study are misreported by Lynn. In fact, he asserts, the study in question (which I have not myself read) reported an average IQ of 97 for ten-year-old children of Indian ancestry resident in Britain, but of only 93 for children of Pakistani background (Mackintosh 2007).

In the same book review, Mackintosh also claims that another paper which he co-authored and which is cited by Lynn regarding the IQs of South Asian children resident in the UK is also misreported, and again in fact recorded significantly higher IQs for children of Indian ancestry than for those of Pakistani origin, the former averaging 91 and the latter only 85 (West et al 1992).

Thus, Mackintosh reports:

“In fact, three British studies have given the same IQ tests to Indian and Pakistani children, and in all three, Indian children have outscored the Pakistanis by 4–6 IQ points” (Mackintosh 2007: 94).

In this light, it is interesting to observe that there is also a large difference in socio-economic status and average earnings as between, on the one hand, British-Indians, and, on the other, both British-Pakistanis and Bangladeshis. Indeed, the same data suggesting that British-Indians are the highest earning ethnicity in Britain also show that British-Pakistanis and Bangladeshis are among the lowest earners in the UK.

Likewise, within the British education system, schoolchildren of Indian descent outperform those of Pakistani and Bangladeshi background – as well as those of white British descent (Fuerst & Lynn 2021).

Similarly, India itself now enjoys considerably higher living standards than does Pakistan – but, interestingly, somewhat lower living standards than Bangladesh.

The primary divide between these three countries is, of course, not so much racial as religious. This suggests religion as a causal factor in the reported differences.

Indeed, data on average earnings by religion rather than national origin show a similar pattern with Hindus having the highest average salaries of any religious community in the UK excepting Jews, and Muslims having the lowest.[25]

A similar pattern is apparent in the USA, where Hindus, again, come second to Jews, with Muslims among the lower earning groups.

A similar pattern is even observed in predominantly Muslim countries, where Christian communities, such as the Copts of Egypt, and also seemingly Christians in Lebanon, tend to be wealthier than Muslims on average.

Indeed, despite persecution, Christian communities in the Ottoman Empire, such as the Armenians and Greeks, as well as Jews, seem to have been disproportionately wealthy as compared to the Muslim majority, often dominating commerce.

Indeed, this disproportionate wealth was surely a factor in provoking the resentment that ultimately led to their genocide, mid-twentieth century Jewish-American racialist Nathaniel Weyl claiming:

“In both Egypt and elsewhere in the world of Islam, the Muslim majority is almost always surpassed in energy, ability and intelligence by the Jewish and Christian minorities. For this reason, the latter are chronically persecuted and periodically suppressed” (The Geography of Intellect: 64, n9).

Likewise, among diaspora groups originating in this region of the world but today resident elsewhere, it seems to be non-Muslim groups, notably Jews, but also Hindus and the economically successful Lebanese diaspora (who seem to be mostly Christian rather than Muslim), who have proven the most economically successful.

Turning to international comparisons, one study purported to find that Muslim countries tend to have lower average IQs than do non-Muslim countries (Templer 2010). 

Perhaps, then, cultural practices in Muslim countries are responsible for reducing IQs (Dutton 2020). 

For example, consanguineous marriage (i.e. marriage between close biological relatives), especially cross-cousin marriage, although not actually a part of Muslim teaching, and indeed discouraged in some Islamic aḥādīth, is widespread throughout much of the Muslim world and may have an adverse impact on intelligence levels due to the effects of inbreeding depression (Woodley 2009).

Another cultural practice that could affect intelligence in Muslim countries is that of pregnant women, though exempt from the requirement to fast during daylight hours during Ramadan, nevertheless still choosing to do so as proof of their piety and devotion (cf. Aziz et al 2004).

However, Lynn’s own data show little difference between IQs in India and those in Pakistan and Bangladesh, nor indeed between IQs in India and those in Muslim countries in the Middle East or North Africa. Nor, according to Lynn’s data, do people of Indian ancestry resident in the UK score noticeably higher in IQ than do people who trace their ancestry to Bangladesh and Pakistan – though, as we have seen, Mackintosh (2007) suggests otherwise.

An alternative suggestion is that Middle-Eastern and North African IQs have been depressed as a result of interbreeding with sub-Saharan Africans, perhaps as a result of the Islamic slave trade.[26]

This is possible because, although male slaves in the Islamic world were routinely castrated and hence incapable of procreation, female slaves outnumbered males and were often employed as concubines, a practice which, unlike in puritanical North America, was regarded as perfectly socially acceptable on the part of slave owners.

This would be consistent with the finding that Arab populations from the Middle East show some evidence of sub-Saharan African ancestry in their mitochondrial DNA, which is passed down the female line, but not in their Y-chromosome ancestry, passed down the male line (Richards et al 2003). 

In contrast, in the United States, the use of female slaves for sexual purposes, although it certainly occurred, was, in the prevailing puritanical Christian morality of the American South, in theory very much frowned upon.

In addition, in North America, due to the one-drop rule, all mixed-race descendants of slaves with any detectable degree of black African ancestry were classed as black. Therefore, at least in theory, the white bloodline would have remained ‘pure’, though some mixed-race individuals may have been able to ‘pass’ as white.

Therefore, sub-Saharan African genes may have entered the Middle Eastern, and North African, gene-pools in a way they were not able to do among whites in North America. 

This might explain why genotypic intelligence among North African and Middle Eastern populations may have declined in the period since the great civilizations of Mesopotamia and ancient Egypt, and even since the Golden Age of Islam, when the intellectual achievements of Middle Eastern and North African peoples seemed so much more impressive.

This would again be redolent of Arthur de Gobineau’s infamous theory that empires decline because they conquer large numbers of ostensibly inferior peoples, who then inevitably interbreed with their conquerors, diluting, according to de Gobineau, the very qualities that permitted their imperial glories in the first place.

However, it is difficult to see how this could have had a significant effect on the genetics, or the IQs, of Muslim people in South Asia, where any sub-Saharan African genetic input must have been minimal and highly dilute.

On the other hand, it is possible that, in the Indian subcontinent, it was relatively lower-caste Indians who were more receptive to Islam, since it offered them a chance to reject the caste system, and perhaps even partially escape endemic caste discrimination, just as it also seems to have been disproportionately lower-caste Indians who converted to Buddhism and Christianity.

Therefore, if lower-caste Indians were, on average, of lower intelligence than upper-caste Indians, as some evidence suggests (Chopra 1966; Lynn & Cheng 2018), and as is also true of social class differences in the contemporary West, then, if South Asian Muslims do indeed score somewhat lower in average IQ than do Hindus, this may simply reflect the biological inheritance of intelligence over the generations from their lower-caste forebears.[27]

Jews

Besides Indians, another economically and intellectually overachieving model minority who derive, at least in part, from the race whom Lynn classes as “South Asians and North Africans” are Jews.

Lynn has recently written a whole book on the topic of Jewish intelligence and achievement, titled The Chosen People: A Study of Jewish Intelligence and Achievement (review forthcoming). 

However, in ‘Race Differences in Intelligence’, Jews do not even warrant a chapter of their own. Instead, they are discussed only at the end of the chapter on “South Asians and North Africans”.

The decision not to devote an entire chapter to the Jewish people is surely correct, because, although even widely disparate groups (e.g. Ashkenazim, Sephardim and Mizrahim, even the Lemba) do indeed share genetic affinities, Jews are not racially distinct (i.e. reliably physically distinguishable on phenotypic criteria) from other peoples.

However, the decision to include them in the chapter on “South Asians and North Africans” is potentially controversial, since, as Lynn readily acknowledges, Ashkenazi Jews, who today constitute the majority of Jews, have substantial European as well as Middle Eastern ancestry, as indeed do Sephardi Jews (but not Mizrahi Jews). 

Lynn claims British and US Jews have average IQs of around 108 (p68). His data for Israel are not broken down by ethnicity, but give an average IQ for Israel as a whole of 95, from which Lynn, rather conjecturally, infers scores of 103 for Ashkenazi Jews, 91 for Mizrahi Jews and 86 for Palestinian-Arabs (p94).

Lynn’s explanations for Ashkenazi intelligence, however, are wholly unpersuasive. 

First, he observes that, despite Biblical and Talmudic admonitions against miscegenation with Gentiles, Jews inevitably interbred to some extent with the host populations alongside whom they lived. From this, Lynn infers that: 

“Ashkenazim Jews in Europe will have absorbed a significant proportion of the genes for higher intelligence possessed by… Europeans” (p95).

It is indeed true that, if, as Lynn claims, Europeans are a more intelligent race than populations from the Middle East, then interbreeding with Europeans may explain how Ashkenazim came to score higher in IQ than do other populations tracing their ancestry to the Middle East.

However, interbreeding with Europeans can hardly explain how Ashkenazi Jews came to outscore, and outperform academically and economically, even the very Europeans with whom they are said to have interbred! 

This explanation therefore cannot account for why Ashkenazim have higher IQs than do Europeans themselves.

Lynn’s second explanation for high Ashkenazi Jewish IQs is equally unpersuasive. He suggests that: 

“The second factor that has probably operated to increase the intelligence of Ashkenazim Jews in Europe and the United States as compared with Oriental Jews is that the Ashkenazim Jews have been more subject to persecution… Oriental Jews experienced some persecution sufficient to raise their IQ of 91, as compared with 84 among other South Asians and North Africans, but not so much as that experienced by Ashkenazim Jews in Europe.” (p95).[28]

On purely theoretical grounds, the idea that persecution selects for intelligence may seem plausible, if hardly compelling.

For example, one might speculate that only the relatively smarter Jews were able to anticipate looming pogroms and hence escape – or, alternatively, since wealth is correlated with intelligence, perhaps only the relatively richer, and hence generally smarter, Jews could afford the costs of migration, including bribes to officials, in order to escape such looming pogroms.[29] 

These are, however, obviously speculative, post-hoc ‘just-so stories’ (in the negative Gouldian sense), and, in the absence of hard data, I put little stock in them. 

There is in fact no evidence that persecution generally acts to increase a group’s intelligence. On the contrary, other groups who have been subject to persecution throughout much of their histories – e.g. the Roma (i.e. Gypsies) and African-Americans – are generally found to have relatively low IQs. 

East and South-East Asians

Excepting Jews, the highest average IQs are found among East Asians, who have, according to Lynn’s data, an average IQ of 105, somewhat higher than that of Europeans (p121-48). 

However, whereas Jews score relatively higher in verbal intelligence than spatio-visual ability, East Asians show the opposite pattern, with relatively higher scores for spatio-visual ability.[30]

However, it is important to emphasize that this relatively high figure applies only to East Asians – i.e. Chinese, Japanese, Koreans, Taiwanese etc. – though it has been suggested that the results for China may reflect the oversampling of western diaspora populations and of populations from technologically and economically advanced urban areas of China, as opposed to relatively more backward rural regions where IQs seem to be much lower.

Moreover, these high average IQ scores do not apply to the related populations of Southeast Asia (i.e. Thais, Filipinos, Vietnamese, Malaysians, Cambodians, Indonesians etc.), who actually score much lower in IQ, with average scores of only around 87 in their indigenous homelands, but rising to 93 among those resident in the US. 

Thus, Lynn distinguishes the East Asians from Southeast Asians as a separate race, on the grounds that the latter, despite “some genetic affinity with East Asians” form a distinct genetic cluster in data gathered and analyzed by Cavalli-Sforza et al, and also have distinct morphological features, with “the flattened nose and epicanthic eye-fold… [being] less prominent” than among East Asians (p97). 

This is an important point, since many previous writers on the topic have implied that the higher average IQs of East Asians applied to all ‘Asians’ or ‘Mongoloids’, which would presumably include South-East Asians.[31]

Yet, in Lynn’s opinion, it is just as misleading to group all these groups together as ‘Mongoloid’ or ‘Asian’ as it was to group “Europeans” and “South Asians and North Africans” together as ‘Caucasian’ or ‘Caucasoid’. 

However, whether low scores throughout South-East Asia are entirely genetic in origin is unclear. Thus, Vietnamese resident in the West have sometimes, but not always, scored considerably higher, and Jason Malloy suggests that Lynn exaggerates the overrepresentation of ethnic Chinese among Vietnamese immigrants to the West so as to attribute such results to East Asians rather than South-East Asians (Malloy 2014).[32]

Moreover, in relation to Lynn’s Cold Winters Theory (discussed below), whereby populations exposed to colder temperatures during their evolution are claimed to have evolved higher levels of intelligence in order to cope with the adaptive challenges that surviving cold winters posed, it is notable that climate varies greatly across China, reflecting the geographic size of the country, with Southern China having a subtropical climate with mild winters.

However, perhaps East Asians, like the Han Chinese, are to be regarded as only relatively recent arrivals in what is now Southern China. This would be consistent with the claim of some physical anthropologists that some aspects of the morphology of East Asians reflect adaptation to the extreme cold of Siberia and the Steppe, and also with the historical expansion of the Han Chinese.

Even more problematic for Cold Winters Theory is the fact that, although Lynn classifies them as East Asian (p121), the higher average IQ scores of East Asians (as compared to whites) do not even extend to the people after whom the Mongoloid race was named – namely, the Mongols themselves.

According to Lynn, Mongolians score only around the same as whites, with an average IQ of only 101 (Lynn 2007).

This report is based on just two studies. Moreover, it had not been published at the time the first edition of ‘Race Differences in Intelligence’ came off the presses.

However, Lynn infers a lower IQ for Mongolians from their lower level of cultural, technological and economic development (p240).

Yet, inhabiting the Mongolian-Manchurian grassland Steppe and Gobi Desert, Mongolians were surely subjected to an environment even colder and more austere than that of other East Asians.

On the one hand, this might explain their lower levels of cultural, technological and economic development. On the other, according to Lynn’s Cold Winters Theory, it ought presumably to have resulted in their evolving, if anything, even higher levels of intelligence than other East Asians.

Lynn’s explanation for this anomaly is that the low population-size of the Mongols, and their isolation from other populations, meant that the necessary mutations for higher IQ never arose (p240).[33]

This is the same explanation that Lynn provides for the related anomaly of why Eskimos (“Arctic Peoples”), with whom Mongolians share some genetic affinity, also score low in IQ, an explanation that is discussed in the final part of this review.

Native Americans

Another group sometimes subsumed with Asian populations as “Mongoloids” are the indigenous populations of the American continent, namely “Native Americans”. 

However, on the basis of both genetic data from Cavalli-Sforza et al and morphological differences (“darker and sometimes reddish skin, hooked or straight nose, and lack of the complete East Asian epicanthic fold”), Lynn classifies them as a separate race and hence accords them a chapter of their own. 

His data suggest average IQs of about 86, for both Native Americans resident in Latin America, and also for those resident in North America, despite the substantially higher living standards of the latter (p158; 162-3; p166). 

Mestizo populations, however, have somewhat higher scores, with average IQs intermediate between those of the parent populations (p160).[34]

This average IQ of around 86 is virtually identical to that recorded among African-Americans, to whom Lynn, as discussed above, attributes an average IQ of around 85.

Interestingly, this conclusion contradicts an earlier tradition in the hereditarian literature which attributed to Native Americans a somewhat higher IQ than that recorded among African Americans, despite the fact that, at the time (and, arguably still today), Native Americans experienced higher rates of poverty and economic deprivation than did African Americans, and a comparable degree of historical persecution and oppression.

This was used by some hereditarians to argue that economic deprivation, poverty and a recent history of oppression, could not by themselves fully explain the low scores recorded among African-Americans, mid-twentieth century biologist and hereditarian Robert E Kuttner concluding:

“The results of the comparison of Indian and Negro school children indicate that the former record distinctly superior performance despite a generally inferior socio-economic position in society. This serves to demonstrate that the factors commonly regarded as exerting a decisive formative influence on test performance are strongly modified by the inherent capacities of the groups involved” (Kuttner 1968: 160).

Similarly, celebrated educational psychologist Arthur Jensen, in his accessible but rigorous 1981 popular introduction to the science of IQ testing, Straight Talk About Mental Tests, points out that:

“[O]n a composite of twelve SES and other environmental indices, the American Indian population ranks about as far below black standards as blacks rank below those of whites… But it turns out that Indians score higher than blacks on tests of intelligence and scholastic achievement, from the first to the twelfth grade. On a nonverbal reasoning test given in the first grade, before schooling could have had much impact, Indian children exceeded the mean score of blacks by the equivalent of 14 IQ points. Similar findings occur with Mexican-Americans, who rate below blacks on SES and other environmental indices, but score considerably higher on IQ tests, especially of the nonverbal type” (Straight Talk About Mental Tests: p217).

Yet, Lynn, as we have seen, reports a difference in average IQs as between Native Americans and African-Americans of only a single IQ point.

With regard to specific abilities and the various subfactors of intelligence, Native Americans, like the Asian populations to whom they are related, score rather higher on spatio-visual intelligence than on verbal intelligence (p156). 

In particular, American Indians also evince especially high visual memory (p159-60). 

As he did for African-Americans, Lynn also discusses the musical abilities of Native Americans. Interestingly, psychometric testing shows that their musical ability is rather higher than their general cognitive ability, giving an MQ (Musical Quotient) of approximately 92 (p160).

They also show the same pattern of musical abilities as do African-Americans, with higher scores for rhythmical ability than for other forms of musical ability (p160). 

However, whereas blacks, as we have seen, score as high as Europeans for rhythmical ability, but no higher, Native Americans, because of their higher IQs (and MQs) overall, actually outscore both Europeans and African-Americans when it comes to rhythmical ability.

These results are curious. Unlike African-Americans, Native Americans are not, to my knowledge, known for their contribution to any genres of western music, and neither are their indigenous musical traditions especially celebrated. 

“Arctic Peoples” (i.e. Eskimos)

Distinguished from other Native Americans are the inhabitants of the far north of the American landmass. These, together with other indigenous populations from the area around the Bering Strait, namely those from Greenland, the Aleutian Islands, and the far north-east of Siberia, together form the racial group whom Lynn refers to as “Arctic Peoples”, though the more familiar, if less politically correct, term would be ‘Eskimos’.[35]

As well as forming a distinctive genetic cluster per Cavalli-Sforza et al, they are also morphologically distinct, not least in their extreme adaptation to the cold, with, Lynn reports: 

“Shorter legs and arms and a thick trunk to conserve heat, a more pronounced epicanthic eye-fold, and a nose well flattened into the face to reduce the risk of frostbite” (p149).

As we will see, Lynn is a champion of what is sometimes called Cold Winters Theory – namely the theory that the greater environmental challenges, and hence cognitive demands, associated with living in colder climates selected for increased intelligence among those races inhabiting higher latitudes. 

Therefore, on the basis of this theory, one might imagine that Eskimos, who surely evolved in one of the most difficult, and certainly the coldest, environments of any human group, would also have the highest IQs.

This conclusion would also be supported by the observation that, according to the data cited by Lynn himself, Eskimos also have the largest average brain-size of any race (p153). 

Interestingly, some early reports did indeed suggest that Eskimos had high levels of cognitive ability as compared to whites.[36] However, Lynn now reports that Eskimos actually have rather lower IQ scores than do whites and East Asians, with results from 15 different studies giving an average IQ of around 90. 

Actually, however, viewed in global perspective, this average IQ of 90 for Eskimos is not that low. Indeed, of the ten major races surveyed by Lynn, only Europeans and East Asians score higher.[37]

It is an especially high score for a population who, until recently, lived exclusively as hunter-gatherers. Other foraging groups, or descendants of peoples who, until recently, subsisted as foragers, tend, according to Lynn’s data, to have low IQs (e.g. Australian Aboriginals, San Bushmen, Pygmies). 

One obvious explanation for the relatively low IQs of Eskimos as compared to Europeans and East Asians would be their deprived living conditions.

However, Lynn is skeptical of the claim that environmental factors are entirely to blame for the difference in IQ between Eskimos and whites, since he observes: 

“The IQ of the Arctic Peoples has not shown any increase relative to that of Europeans since the early 1930s, although their environment has improved in so far as in the second half of the twentieth century they received improved welfare payments and education. If the intelligence of the Arctic Peoples had been impaired by adverse environmental conditions in the 1930s it should have increased by the early 1980s” (p153-4).

He also notes that all the children tested in the studies he cites were enrolled in schools (since this was where the testing took place), and hence were presumably reasonably familiar with the procedure of test-taking (p154).

Lynn’s explanation for the relatively low scores of Eskimos is discussed below in the final part of this review.

Visual Memory, Spatial Memory and Hunter-Gathering 

Eskimos also score especially high on tests of visual memory, something not usually measured in standard IQ tests (p152-3). 

This is a proficiency they share in common with Native Americans (p159-60), to whom they are obviously closely related. 

However, as we have seen, Australian Aboriginals, who are not closely related to either group, also seem to possess a similar ability, though Lynn refers to this as “spatial memory” rather than “visual memory” (p107-8).

These are, strictly speaking, somewhat different abilities, although they may not be entirely separate either, and may also be difficult to distinguish between in tests. 

If Aboriginals score high on spatial memory, they may then also score high on visual memory, and vice versa for Eskimos and Native Americans. However, since Lynn does not provide comparative data on visual memory among Aboriginals, or on spatial memory among Eskimos or Native Americans, this is not certain. 

Interestingly, one thing all these three groups share in common is a recent history of subsisting, at least in part, as hunter-gatherers.[38]

One is tempted, then, to attribute this ability to the demands of a hunter-gatherer lifestyle, perhaps reflecting the need to remember the location of plant foods which appear only seasonally, or to find one’s way home after a long hunting expedition.[39] 

It would therefore be interesting to test the visual and spatial memories of other groups who either continue to subsist as hunter-gatherers or only recently transitioned to agriculture or urban life, such as Pygmies and San Bushmen. However, since tests of spatial and visual memory are not included in most IQ tests, the data is probably not yet available.  

For his part, Lynn attributes Eskimo visual memory to the need to “find their way home after going out on long hunting expeditions” (p152-3). 

Thus, just as the desert environment of Australian Aboriginals provides few landmarks, so: 

“The landscape of the frozen tundra [of the Eskimos] provides few distinctive cues, so hunters would need to note and remember such few features as do exist” (p153). 

Proximate Causes: Heredity or Environment?

Chapter fourteen discusses the proximate causes of race differences in intelligence and the extent to which the differences observed can be attributed to either heredity or environmental factors, and, if partly the latter, which environmental factors are most important.  

Lynn declares at the beginning of the chapter that the objective of his book is “to broaden the debate” from an exclusive focus on the black-white test score gap in the US, to instead looking at IQ differences among all ten racial groups across the world for whom data on IQ or intelligence is presented in Lynn’s book (p182). 

Actually, however, in this chapter at least, Lynn does indeed focus primarily on black-white differences, if only because it is in relation to this difference that most research has been conducted, and hence to this difference that most available evidence relates. 

Downplaying the effect of schooling, Lynn identifies malnutrition as the major environmental influence on IQ (p182-7). 

However, he rejects malnutrition as an explanation for the low scores of American blacks, noting that there is no evidence of short stature among black Americans, nor have surveys found a greater prevalence of malnutrition (p185). 

As to global differences, he concludes that: 

“The effect of malnourishment on Africans in sub-Saharan Africa and the Caribbean probably explains about half of the low IQs, leaving the remaining half to genetic factors” (p185). 

However, it is unclear what is meant by “half of the low IQs”, as he has identified no comparison group.[40] 

He also argues that the study of racially mixed individuals further suggests a genetic component to observed IQ differences. Thus, he claims: 

“There is a statistically significant association between light skin and intelligence” (p190). 

As evidence he cites his own study (Lynn 2002) to claim: 

“When the amount of European ancestry in American blacks is assessed by skin color, dark-skinned blacks have an IQ of 85 and light-skinned blacks have an IQ of 92” (p190). 

However, he fails to explain how he managed to divide American blacks into two discrete groups by reference to a trait that obviously varies continuously. 

More importantly, he neglects to mention altogether two other studies that also investigated the relationship between IQ and degree of racial admixture among African-Americans, but used blood-groups rather than skin tone to assess ancestry (Loehlin et al 1973; Scarr et al 1977). 

This is surely a more reliable measure of ancestry than is skin tone, since the latter is affected by environmental factors (e.g. exposure to the sun darkens the skin), and could conceivably have an indirect psychological effect.[41]

However, both these studies found no association between ancestry and IQ (Loehlin et al 1973; Scarr et al 1977).[42] 

Meanwhile, Lynn mentions the Eyferth study (1961) of the IQs of German children fathered by black and white US servicemen in the period after World War II, only to report, “the IQ of African-Europeans [i.e. those fathered by the black US servicemen] was 94 in relation to 100 for European women” (p63). 

However, he fails to mention that the IQ of those German children fathered by black US servicemen (i.e. those of mixed race) was actually almost identical to that of those fathered by white US servicemen (who, with German mothers, were wholly white). This finding is, of course, evidence against the hereditarian hypothesis with respect to race differences. 

Yet Lynn can hardly claim to be unaware of this finding, or its implications with respect to race differences, since this is actually among the studies most frequently cited by opponents of the hereditarian hypothesis with respect to the black-white test score gap for precisely this reason. 

Lynn’s presentation of the evidence regarding the relative contributions of heredity and environment to race differences in IQ is therefore highly selective and biased. 

An Evolutionary Analysis 

Only in the last three chapters does Lynn provide the belated “Evolutionary Analysis” promised in his subtitle. 

Lynn’s analysis is evolutionary in two senses. 

First, he presents a functionalist explanation of why race differences in intelligence (supposedly) evolved (Chapter 16). This is the sort of ultimate evolutionary explanation with which evolutionary psychologists and sociobiologists are usually concerned. 

In addition, however, Lynn also traces the evolution of intelligence over evolutionary history, both in humans of different races (Chapter 17) and among our non-human and pre-human ancestors (Chapter 15). 

In other words, he addresses the questions of both adaptation and phylogeny, two of Niko Tinbergen’s famous Four Questions. 

In discussing the former of these two questions (namely, why race differences in intelligence evolved: Chapter 16), Lynn identifies climate as the ultimate environmental factor responsible for the evolution of race differences in intelligence. 

Thus, he claims that, as humans spread beyond Africa into regions further from the equator, and hence with generally colder temperatures, especially during winter, these pioneers faced greater challenges in feeding and sheltering themselves, and that different human races evolved different levels of intelligence in response to the adaptive challenges posed by such difficulties.

In support of this claim, he cites a fascinating study that found a correlation between latitude, on the one hand, and both the number and complexity of the tools used by different groups of hunter-gatherers, on the other (Torrence 1983). Lynn reports that, in addition to differences in the complexity of the tools used:

“Torrence… found that hunter-gatherer peoples in tropical and subtropical latitudes, such as the Amazon basin and New Guinea, typically have between 10 and 20 different tools, whereas those in the colder northern latitudes of Siberia, Alaska, and Greenland have between 25 and 60 different tools” (p282). 

Hunting vs. Gathering 

The greater problems supposedly posed by colder climates included not just difficulties of keeping warm (i.e. the need for clothing, fires, insulated homes), but also the difficulties of keeping fed. 

Thus, Lynn emphasizes the dietary differences between foragers inhabiting different regions of the world: 

“Among contemporary hunter-gatherers the proportions of foods obtained by hunting and by gathering varies according to latitude. Peoples in tropical and subtropical latitudes are largely gatherers, while peoples in temperate environments rely more on hunting, and peoples in arctic and sub-arctic environments rely almost exclusively on hunting and fishing and have to do so because plant foods are unavailable except for berries and nuts in the summer and autumn” (p227). 

I must confess that I was previously unaware of this dietary difference. However, in my defence, this is perhaps because many anthropologists seem all too ready to overgeneralize from the lifestyles of the most intensively studied tropical groups (e.g. the San of Southern Africa) to imply that what is true of these groups is true of all foragers, and was moreover necessarily also true of all our hunter-gatherer ancestors before they transitioned to agriculture. 

Thus, for example, feminist anthropologists seemingly never tire of claiming that it is female gatherers, not male hunters, who provide most of the caloric demands of foraging peoples. 

Actually, however, this is true only for groups inhabiting tropical climes, where plant foods are easily obtainable all year round, not of hunter-gatherers in general (Ember 1978). 

It is certainly not true, for example, of Eskimos, among whom females are almost entirely reliant on male hunters to provision them for most of the year, since plant foods are hardly available at all except for during a few summer months. 

Similarly, radical-leftist anthropologist Marshall Sahlins famously characterized hunter-gatherer peoples as The Original Affluent Society, because, according to his data, they do not want for food and actually have more available leisure-time than do most agriculturalists, and even most modern westerners. 

Unfortunately, however, he relied primarily on data from tropical peoples such as the !Kung San to arrive at his estimates, and these findings do not necessarily generalize to other groups such as the Inuit or other Eskimos. 

The idea that it was our ancestors’ transition to a primarily carnivorous diet that led to increases in hominid brain-size and intelligence was once a popular theory in paleoanthropology. 

However, it has now fallen into disfavour, if only because it accorded male hunters the starring role in hominid evolution, with female gatherers relegated to a supporting role, and hence offended the sensibilities of feminists, who have become increasingly influential in academia, even in science. 

Nevertheless, it seems to be true that, across taxa, carnivores tend to have larger brains than herbivores. 

Of course, non-human carnivores did not evolve the exceptional intelligence of humans.  

However, Desmond Morris in The Naked Ape (reviewed here) argued that, because our hominid ancestors adopted a primarily carnivorous diet only relatively late in their evolution, they were unable to compete with such specialized hunters as lions and tigers in terms of fangs and claws. They therefore had to adopt a different approach, using intelligence instead of claws and fangs, hence inventing handheld weapons and cooperative group hunting. 

Lynn’s argument, however, is somewhat different from the traditional version of the so-called hunting ape hypothesis, as championed by popularizers like Desmond Morris and Robert Ardrey. 

Thus, in the traditional version, it is the intelligence of early hominids, the ancestors of all populations of contemporary humans, that increased as a result of the increasing cognitive demands that hunting placed upon them. 

However, Lynn argues that it is only certain races that were subject to such selection, as their dependence on hunting increased as they populated colder regions of the globe. 

Indeed, Lynn’s arguments actually cast some doubt on the traditional version of the hunting ape theory. 

After all, anatomically modern humans are thought to have first evolved in Africa. Yet if African foragers actually subsisted primarily on a diet of wild plant foods, and only occasionally hunted or scavenged meat to supplement this primarily herbivorous diet, then the supposed cognitive demands of hunting can hardly be invoked to explain the massive increase in hominid brain-size that occurred during the period before our ancestors left Africa to colonize the remainder of the world.[43]

Indeed, Lynn is seemingly clear that he rejects the Hunting Ape Hypothesis, writing that the increases in hominid brain-size after our ancestors “entered a new niche of the open savannah in which survival was more cognitively demanding” occurred, not because of the cognitive demands of hunting, but rather because: 

“The cognitive demands of the new niche would have consisted principally of finding a variety of different kinds of foods and protecting themselves from predators” (p202)[44]

‘Cold Winters Theory’ 

It may indeed be true that surviving extreme cold is more difficult than surviving the sometimes extreme heat of tropical climates. After all, around the world, many more people die annually from extreme cold than from extreme heat (Zhao et al 2021).

Indeed, cold weather may be not just challenging for humans, but positively inimical to life itself. Thus, the coldest regions of Eurasia are invariably arid tundra, whereas, in contrast, tropical rainforests are positively teeming with life.

However, there are several problems with so-called ‘Cold Winters Theory’ as an explanation for the race differences in IQ reported by Lynn. 

For one thing, other species have evidently adapted to colder climates without evolving a level of intelligence as high as that of any human population, let alone that of Europeans and East Asians. 

Indeed, I am not aware of any studies even suggesting a relationship, among non-human species, between brain-size or intelligence and the temperature or latitude of their species-ranges. However, one might expect to find an association between temperature and brain-size, if only because of Bergmann’s rule. 

Similarly, Neanderthals were ultimately displaced and driven to extinction throughout Eurasia by anatomically-modern humans, who, at least according to the conventional account, outcompeted Neanderthals due to their superior intelligence and tool-making ability. 

However, whereas anatomically modern humans are thought to have evolved in tropical Africa before spreading outwards to Eurasia, the Neanderthals were a cold-adapted species of hominid who had evolved and thrived in Eurasia during the last Ice age. Therefore, if anatomically-modern humans indeed outcompeted Neanderthals because they were smarter, it was certainly not because they evolved in a colder climate.

At any rate, even if the conditions were indeed less demanding in tropical Africa than in temperate or polar latitudes, then, according to basic Darwinian (and Malthusian) theory, in the absence of some other factor limiting population growth (e.g. warfare, predation, homicide, disease), this would presumably mean that humans would respond to greater resource abundance in the tropics by reproducing until they reached the greater carrying capacity of that environment.   

By the time the carrying capacity of the environment was reached, however, the environment would no longer be so resource-abundant given the greater number of humans competing for its resources. 

This leads me to believe that the key factors selecting for increases in the intelligence of hominids were not ecological but rather social – i.e. not access to food and shelter etc., but rather competition with other humans. 

Also, I remain unconvinced that the environments inhabited by the two races that have, according to Lynn, the lowest average IQs, namely, San Bushmen and Australian Aborigines, are cognitively undemanding. 

These are, of course, the Kalahari Desert and Australian outback (also composed, in large part, of deserts) respectively, two notoriously barren and arid environments.[45]

Meanwhile, the Eskimos occupy what is certainly the coldest, and also undoubtedly one of the most demanding, environments anywhere in the world, and also have, according to Lynn’s own data, the largest brains.

However, according to Lynn’s data, their average IQ is only about 90, high for a foraging group, but well below that of Europeans and East Asians.[46] 

For his part, Lynn attempts to explain away this anomaly by arguing that Arctic populations were prevented from evolving higher IQs by their small and dispersed populations, themselves a reflection of the harshness of the environment. This meant the necessary mutations either never arose or never spread through the population (p153; p239-40; p221).[47]
 
On the other hand, he explains their large brains as reflecting visual memory rather than general intelligence, as well as a lack of mutations for neural efficiency (p153; p240).

However, these seem like post-hoc rationalizations.

After all, if conditions were harsher in Eurasia than in Africa, then this would presumably also have resulted in smaller and more dispersed populations in Eurasia than in Africa. However, this evidently did not prevent mutations for higher IQ spreading among Eurasians. 

Why then, when the environment becomes even harsher, and the population even more dispersed, would this pattern suddenly reverse itself? 
 
Likewise, if overall brain-size is related to general intelligence, it is inconsistent to invoke specific abilities to explain the large brains of the Inuit. 

Thus, according to Lynn, Australian Aborigines have high spatial memory, which is closely related to visual memory. However, also according to Lynn, only their right visual cortex is enlarged (p108-9) and they have small overall brain-size (p108-9; p210; p212). 

Endnotes

[1] Curiously, Lynn reports, this black advantage in movement-time does not appear in the simplest form of elementary task (simple reaction time), where the subject simply has to press a button when a light comes on, rather than having to press a specific button, out of several alternatives, when a particular light, rather than other lights, comes on (p58). These latter forms of elementary cognitive test presumably involve some greater degree of cognitive processing. 

[2] First, there are the practical difficulties. Obviously, non-human animals cannot use written tests, or an interview format. Designing a maze for laboratory mice may be relatively straightforward, but building a comparable maze for elephants is rather more challenging. Second, and more important, different species have likely evolved different specialized abilities for dealing with specific adaptive problems. For example, migratory birds may have evolved specific spatio-visual abilities for navigation. However, this is not necessarily reflective of high general intelligence, and to assess their intelligence solely on the basis of their migratory ability, or even their general spatio-visual ability, would likely overestimate their general level of cognitive ability. In other words, it reflects a modular, domain-specific adaptation.
Admittedly, the same is true to some extent for human races. Thus, some races score relatively higher on certain types of intellectual ability. For example, East Asians tend to score higher on spatio-visual ability than on verbal ability; Ashkenazi Jews show the opposite pattern, scoring higher in verbal intelligence than in spatio-visual ability; while American blacks score relatively higher in tests involving rote memory than in those requiring abstract reasoning ability. Similarly, as discussed by Lynn, some races seem to have certain quite specific abilities not commensurate to their general intelligence (e.g. Aborigine visual memory). However, in general, both between and within races, most variation in human intelligence loads onto the ‘g-factor’ of general intelligence.

[3] American anthropologist Carleton Coon is credited as the first to propose that population differences in skull size reflect a thermoregulatory adaptation to climatic differences (Coon 1955). An alternative theory, less supported, is that it was differing levels of ambient light that resulted in differences in brain-size as between different populations tracing their ancestry to different parts of the globe (Pearce & Dunbar 2011). On this view, the larger brains of populations who trace their descent to areas of greater latitude presumably reflect only the demands of the visual system, rather than any differences in general intelligence. Yet another theory, less politically-correct than these, is so-called Cold Winters Theory, which posits that colder climates placed a greater premium on intelligence, which caused populations inhabiting colder regions of the globe to evolve larger brains and higher levels of intelligence. This is, of course, the theory championed by Lynn himself, and I discuss the problems with this theory in the final part of this review.

[4] Curiously, however, although, as reported by Lynn, the cold-adapted Eskimos indeed have the largest brains of any human population, the same does not seem to be true of another arctic population, namely the reindeer-herding Sámi (or Lapps) of Scandinavia and the Kola Peninsula. On the contrary, anthropologist Carleton Coon reports that the Sámi actually “have very small heads” (The Races of Europe: p266). This would seem to be contrary to Bergmann’s Rule. However, it may be accounted for by the diminutive stature of the Sámi. Thus, head-size (and brain-size) also correlates with overall body-size, and Coon also reports that, although small in absolute size, Sámi heads are actually “large in proportion to body size” (The Races of Europe: p303).

[5] Lynn has recently published research regarding differences in IQ across different regions of Italy (Lynn 2010).

[6] Actually, Lynn acknowledges causation in both directions, possibly creating a feedback loop. He also acknowledges that other factors contribute to differences in economic development and prosperity, including the economic system adopted. For example, countries that adopted communism tend to be poorer than comparable countries with capitalist economies (e.g. Eastern Europe is poorer than Western Europe, and North Korea poorer than South Korea).  

[7] Incidentally, Lynn cites two studies of Polish IQ, whose results are even more divergent than those of Portugal or Ireland, giving average IQs of 106 and 91 respectively. One of these scores is substantially below the European average, while the other is substantially above it. 

[8] Essayist Ron Unz has argued that IQs in Ireland have risen in concert with living standards in Ireland (Unz 2012a; Unz 2012b). However, judging from the dates when the studies cited by Lynn in ‘Race Differences in Intelligence’ were published, there is no obvious increase over time. True, the earliest study, an MA thesis published in 1973, gives the lowest figure, with an average IQ of just 87 (Gill and Byrt 1973). This rises to 97 in a study published in 1981 that provided few details on its methodology (Buj 1981). However, it declines again in the latest study cited by Lynn on Irish IQs, which was published in 1993 and gives average IQs of just 93 and 91 for two separate samples (Carr 1993). In the more recent 2015 edition, Lynn cites a few extra studies, eleven in total. Again, however, there is no obvious increase over time, the latest study cited by Lynn, which was published in 2012, giving an average IQ of just 92 (2015 edition).

[9] While this claim is made in reference to immigrants to America and the West, it is perhaps worth noting that East Asians in South-East Asia, namely the Overseas Chinese, largely dominate the economies of South-East Asia, and are therefore on average much wealthier than the average Chinese person still residing in China (see World on Fire by Amy Chua). Given the association of intelligence with wealth, this would suggest that Chinese immigrants to South-East Asia are not substantially less intelligent than those who remained in China. Did the more intelligent Chinese migrate to South-East Asia, while the less intelligent migrated to America? If so, why would this be?

[10] According to Daniel Nettle in Personality: What Makes You the Way You Are, in the framework of the five-factor model of personality, a liking for travel is associated primarily with extraversion. One study found that an intention to migrate was positively associated with both extraversion and openness to experience, but negatively associated with agreeableness, conscientiousness, and neuroticism (Fouarge et al 2019). A study of migration within the United States found a rather more complex set of relationships between migration and each of the big five personality traits (Jokela 2009).

[11] Other Catholic countries, namely those in Southern Europe, such as Italy and Spain, may indeed have slightly lower IQs, at least in the far south of these countries. However, as we have seen, Lynn explains this in terms of racial admixture from Middle-Eastern and North African populations. Therefore, there is no need to invoke priestly celibacy in order to explain it. The crucial test case, then, is Catholic countries other than Ireland from Northern Europe, such as Austria and France.

[12] In the 2015 edition, he returns to a slightly higher figure of 71.

[13] In the 2006 edition, Lynn cites no studies from the Horn of Africa. However, in the 2015 edition, he cites five studies from Ethiopia, and, in The Intelligence of Nations, he and co-author David Becker also cite a study on IQs in Somalia.

[14] Indeed, physical anthropologist John Baker, in his excellent Race (which I have reviewed here, here and here) argues that:

“The ‘Aethiopid’ race of Ethiopia and Somaliland are an essentially Europid [i.e. Caucasian] subrace with some Negrid admixture” (Race: p225).

Similarly, leading mid-twentieth century American anthropologist Carleton Coon, using the word ‘white’ as a synonym for ‘Caucasian’, even asserts that “the Gallas, the Somalis, the Ethiopians, and the inhabitants of Eritrea” are all “white or near white” (The Races of Europe: p445).
These claims surely exaggerate the Caucasian component in the ancestry of populations from the Horn of Africa. However, recent genetic studies do indeed show affinities between populations from the Horn of Africa and those from the Middle East (e.g. Ali et al 2020; Khan 2011a; Khan 2011b; Hodgson 2014).

[15] However, it is not at all clear that the same is true for black African minorities resident in other western polities, whose IQs are, according to Lynn’s data, also considerably above those for indigenous Africans. Here, I suspect, black populations are more diverse.
For example, in Britain, Afro-Caribbean people, who emigrated to Britain by way of the West Indies, are probably mostly mixed-race, like African-Americans, since both descend from white-owned slave populations. However, Britain also plays host to many immigrants direct from Africa, most of whom are, I suspect, of relatively unmixed sub-Saharan African descent. Yet, despite having greater levels of sub-Saharan African DNA, African immigrants to the UK outperform Afro-Caribbeans in UK schools, just as they do African-Americans in the US (Chisala 2015a).

[16] Blogger John ‘Chuck’ Fuerst suggests that the higher scores for Somali immigrants might reflect the fact that peoples from the Horn of Africa actually, as we have seen, have genetic affinities with North African and Middle Eastern populations (Fuerst 2015). However, the problem with attributing the relatively high scores of Somali refugees and immigrants to Caucasoid admixture is that, as we have seen, according to the data collected by Lynn, IQs are no higher in the Horn of Africa than elsewhere in sub-Saharan Africa.

[17] If anything, “Bushmen” should presumably be grouped, not with Pygmies, but rather with the distinct but related Khoikhoi pastoralists. However, the latter are now all but extinct as an independent people and are not mentioned by Lynn.

[18] For example, Lynn also acknowledges that those whom he terms “South Asians and North Africans” are “closely related to the Europeans” (p79). However, they nevertheless merit a chapter of their own. Likewise, he acknowledges that “South-East Asians” share “some genetic affinity with East Asians with whom they are to some degree interbred” (p97). Nevertheless, he justifies considering these two ostensible races in separate chapters, partly on the basis that “the flattened nose and epicanthic eye-fold are less prominent” among the former (p97). Yet the morphological differences between Pygmies and Khoisan are even greater, but they are lumped together in the same chapter.

[19] There is indeed, as Lynn notes, a correlation between a group’s IQ and their lifestyle (i.e. whether they are foragers or agriculturalists). However, the direction of causation is unclear. Does high intelligence allow a group to transition to agriculture, or does an agriculturalist lifestyle somehow increase a group’s average IQ? And, if the latter, is this a genetic or a purely environmental effect?

[20] Indeed, the very word slave is thought to derive from the ethnonym Slav, because of the frequency with which Slavic peoples were enslaved during the Middle Ages. Often they were enslaved by Muslims, the Ottoman Turks having conquered much of Southeast Europe. Other times they were enslaved by Europeans and thence often sold on to the Ottoman Turks. As the last peoples in Europe to be Christianized, Slavs were long vulnerable to enslavement by both Muslims and Christians since, just as Islamic law forbade the enslavement of fellow Muslims, so Papal decree long prohibited the capture and enslavement of other Christians. Indeed, it is claimed that non-Slavic captives from elsewhere in Europe were often falsely described as Slavs in order to justify their enslavement.

[21] In the more recent 2015 edition of his book, Lynn reports an additional study of Pygmy intelligence, namely his own 2011 report of the results of tests conducted by anthropologists, the results of which were first published in 1986 (Lynn 2011). This study rectified two of the problems that I identified with the sole study on this subject cited in the first edition. First, it did include a comparison with neighbouring populations of non-Pygmy black Africans given the same test. Second, by assigning to the neighbouring Negroids an average IQ of 71 (since this is, he reports in the 2015 edition, the average IQ of black Africans in general), it permitted him to calculate an average IQ for Pygmies as well, which Lynn estimates at 57, though, in the paper itself, relying on his earlier estimate of 67 for the sub-Saharan African IQ in the first edition of his book, he gave an even lower figure of 53.

[22] Thus, he suggests that the lower performance of the Aboriginals tested by Drinkwater (1975), as compared to those tested by Kearins (1981), may reflect the fact that the former were the descendants of coastal populations of Aborigines, for whom the need to navigate in deserts without landmarks would have been less important. 

[23] The fact that the earliest civilizations emerged among Middle Eastern, North African and South Asian populations is attributed by Lynn to environmental factors of the sort that, elsewhere in his book, he largely discounts or downplays. Thus, Lynn writes: 

“[Europeans] were not able to develop early civilizations like those built by the South Asians and North Africans because Europe was still cold, was covered with forest, and had heavy soils that were difficult to plough unlike the light soils on which the early civilizations were built, and there were no river flood plains to provide annual highly fertile alluvial deposits from which agricultural surpluses could be obtained to support an urban civilization and an intellectual class” (p237).

[24] I assume that this is the study that Lynn is citing, since this is the only matching study included in his references. However, curiously, Lynn refers to this study here as “Mackintosh et al 1985” (p83-4), despite there being only two authors listed in his references, such that “Mackintosh & Mascie-Taylor 1985” would be the more usual citation. Indeed, Lynn uses this latter form of citation (i.e. “Mackintosh & Mascie-Taylor 1985”) elsewhere when citing what seems to be the same paper in his earlier chapter on Africans (p47; p49).

[25] In order to disentangle the effects of national origin and religion on average IQs among British South Asians, it would be interesting to have data on the incomes (and IQs) of Pakistani Hindus, Bangladeshi Hindus and Muslim Indians resident in the West. However, I have not been able to find any such data.

[26] An alternative possibility is that it was the spread of Arab genes, as a result of the Arab conquests, and resulting spread of Islam, that depressed IQs in the Middle-East and North Africa, since Arabs were, prior to the rise of Islam, a relatively backward group of desert nomads, whose intellectual achievements were minimal compared to those of many of the groups whom they conquered (e.g. Persians, Mesopotamians, Assyrians, and Egyptians).
Indeed, even the achievements of Muslim civilization during the Islamic Golden Age seem to have been disproportionately those of Persian converts, not the Arabs themselves.
This might explain the economic success of the Iranian diaspora, who consider themselves Persian or Iranian rather than Arab, and speak a non-Arabic Indo-European language. It might also explain, in racial rather than religious terms, why Coptic Christians in Egypt and Maronite Christians in Lebanon tend to be wealthier than the Muslim majority in the countries in which they reside, since neither of these groups generally consider themselves Arab (despite speaking Arabic), and they likely have less Arab admixture than Muslims from the same country. The economically successful Lebanese diaspora is also mostly Christian, and hence arguably non-Arab.

[27] Actually, it is not at all clear whether, on purely theoretical grounds, we would expect higher caste Indians to have relatively higher intelligence than lower caste Indians. It is true that, in general, at least in modern western economies, people of higher socio-economic status do indeed, on average, have higher IQs than people of relatively low socio-economic status. However, this is thought to be because higher intelligence facilitates upward social mobility whereas low intelligence is associated with downward mobility.
Yet caste is a very different phenomenon from socio-economic status in the comparatively meritocratic contemporary west. Under the Indian caste system, caste was inherited and fixed at birth. There was therefore, at least in theory, no possibility of upward or downward social mobility. Therefore, there would have been no possibility of talented and intelligent lower caste Indians rising to a higher caste than that into which they were born, nor of low-IQ Brahmins descending into a lower caste stratum.
Therefore, the general finding that higher socio-economic status is associated with higher intelligence may not hold for Indian castes, or, more likely, the association between intelligence and caste may be much weaker than in other societies.
Moreover, the caste system is thought to have been originally imposed by Indo-Aryan invaders, who conquered much of the Indian subcontinent and instituted the caste system in order to maintain their racial and ethnic integrity over the Dravidian peoples whom they are thought to have subjugated.
Yet there is no reason to think that the Indo-Aryan conquerors were any more intelligent than the Dravidian peoples whom they conquered. On the contrary, they were, like later waves of Steppe nomads (Mongols, Huns etc.) who devastated so much of the Near East, Europe and East Asia in their successive waves of conquest, pastoralist barbarians, in many respects quite primitive, and any advantage they possessed was strictly a military one, namely their mastery of the horse, or, in the case of the Indo-Aryans, the horse-drawn chariot.
In contrast, the Dravidian peoples were, in all likelihood, founders of and heirs to the great Indus Valley Civilization, and hence, in many respects, more technologically advanced, more ‘civilized’, and perhaps also more intelligent than the barbarian nomads who conquered and subjugated them.
Besides theoretical considerations, there is also little real data on caste differences in IQ. As noted above, a few studies do indeed suggest that higher caste people score, on average, higher in IQ than lower caste people (Chopra 1966; Lynn & Cheng 2018). On the other hand, both economic development and measured IQs are higher in predominantly Dravidian South India than in the predominantly Indo-Aryan North, even though Brahmins, the highest of the four varna, are disproportionately concentrated in the North (Lynn & Yadav 2015).

[28] One might, incidentally, question Lynn’s assumption that Oriental Jews were less subject to persecution than were the Ashkenazim in Europe. This is, of course, the politically correct view, which sees Islamic civilization as, prior to recent times, more tolerant than Christendom. On this view, anti-Jewish sentiment only emerged in the Middle East as a consequence of Zionism and the establishment of the Jewish state in what was formerly Palestine. However, for alternative views, see The Myth of the Andalusian Paradise. See also Robert Spencer’s The Truth About Muhammad (which I have reviewed here), in which he argues that Islam is inherently antisemitic (i.e. anti-Jewish).
Interestingly, Kevin MacDonald, in A People That Shall Dwell Alone (which I have reviewed here) makes almost the opposite argument to that of Lynn. Thus, he argues that it was precisely because Jews were so discriminated against in the Muslim world that their culture, and ultimately their IQs, were to decline, since they were, according to MacDonald, largely excluded from high-status and cognitively-demanding occupations, which were reserved for Muslims (p301-4). Thus, MacDonald concludes:

“The pattern of lower verbal intelligence, relatively high fertility, and low-investment parenting among Jews in the Muslim world is linked ultimately to anti-Semitism” (A People That Shall Dwell Alone (reviewed here): p304). 

[29] Lynn, for his part, does not explain why persecution would supposedly select for higher intelligence, simply assuming that it is logical that it would.

[30] This pattern among East Asians of lower scores on the verbal component of IQ tests was initially attributed to a lack of fluency in the language of the test, since the first East Asians to be tested were among diaspora populations resident in the West. However, the same pattern has now been found even among East Asians tested in their first language, in both the West and East Asia.

[31] For example, Sarich and Miele, in Race: The Reality of Human Differences (which I have reviewed here), write that “Asians have a slightly higher IQ than do whites” (Race: The Reality of Human Differences: p196). However, in fact, this applies only to East Asians, not to South-East Asians (nor to South Asians and West Asians, who are “Asian” in at least the strict geographical, and the British-English, sense). Similarly, in his own oversimplified tripartite racial taxonomy in Race, Evolution and Behavior (which I have reviewed here), Philippe Rushton seems to imply that the traits he attributes to Mongoloids, including high IQs and large brain-size, apply to all members of this race, including South-East Asians and even Native Americans.

[32] Ethnic Chinese were overrepresented among Vietnamese boat people, though less so among later waves of immigrants. However, perhaps a greater problem is that they were disproportionately middle-class and drawn from the business elite, and hence unrepresentative of the Vietnamese as a whole, and likely of disproportionately high cognitive ability, since higher social classes tend to have higher average IQs.

[33] In his paper on Mongolian IQs, Lynn also suggests that Mongolians have lower IQs than other East Asians because they are genetically intermediate between East Asians and Eskimos (“Arctic Peoples”), who themselves have lower IQs (Lynn 2007). However, this merely begs the question as to why Eskimos themselves have lower IQs than East Asians, another anomaly with respect to Cold Winters Theory, which is discussed in the final part of this review.

[34] With regard to the population of Colombia, Lynn writes: 

“The population of Colombia is 75 percent Native American and Mestizo, 20 percent European, and 5 percent African. It is reasonable to assume that the higher IQ of the Europeans and the lower IQ of the Africans will approximately balance out and that the IQ of 84 represents the intelligence of the Native Americans” (p58). 

However, this assumption that the African and European genetic contributions will balance out seems dubious since, by Lynn’s own reckoning, the European-descended share of the Colombian population is three times greater than that of those who are African-descended. Moreover, all these populations, not just Mestizos, surely contain individuals with some degree of racial admixture from the other populations, making the calculation of the expected average IQ of the population as a whole even more complex.

[35] The currently-preferred term Inuit is not sufficiently inclusive, because it applies only to those Eskimos indigenous to the North American continent, not the related but culturally distinct populations inhabiting Siberia or the Aleutian Islands. I continue to use the term Eskimos, because it is more accurate, not obviously pejorative, probably more widely understood, and also because I deplore the euphemism treadmill. Elsewhere, I have generally deferred to Lynn’s own usage, for example mostly using ‘Aborigine’, rather than the now preferred ‘Aboriginal’, a particularly preposterous example of the euphemism treadmill since the terms are so similar, comparable to how, today, it is acceptable to say ‘people of colour’, but not ‘coloured people’.

[36] For example, Hans Eysenck made various references in his writings to the fact that Eskimo children performed as well as European children in IQ tests as evidence for his claim that economic deprivation did not necessarily reduce IQ scores (e.g. The Structure and Measurement of Intelligence: p23). See also discussion in: Jason Malloy, A World of Difference: Richard Lynn Maps World Intelligence (Malloy 2006).

[37] Certain specific subpopulations also score higher (e.g. Ashkenazim and Māoris, though the latter only barely). However, these are subpopulations within the ten major races that Lynn identifies, not races in and of themselves.

[38] Actually, by the time Columbus landed in the Americas, many Native Americans had already partly transitioned to agriculture. However, not least because of a lack of domesticated animals that they could use as a meat source, most supplemented this with hunting and sometimes gathering too.

[39] However, Lynn reports that the Japanese also score high on tests of visual memory (p143). Yet, excepting perhaps the Ainu, the Japanese do not have a recent history of subsisting as foragers. This suggests that foraging is not the only possible cause of high visual memory in a population.

[40] Presumably the comparison group Lynn has in mind are Europeans, since, as we have seen, it is European living standards that he takes as his baseline for the purposes of estimating a group’s “genotypic IQ” (p69), and, in a sense, all the IQ scores that he reports are measured against a European standard, in so far as they are calculated by reference to an arbitrarily assigned average of 100 for European populations.

[41] Thus, it is at least theoretically possible that a darker-skinned African-American child might be treated differently by others (e.g. teachers) than a lighter-skinned child, especially one whose race is relatively indeterminate, in a way that could conceivably affect their cognitive development and IQ. In addition, a darker-skinned African-American child might, as a consequence of their darker complexion, come to identify as an African American to a greater extent than a lighter-skinned child, which might affect whom they socialize with, which celebrities they identify with, and the extent to which they identify with broader black culture, all of which could conceivably have an effect on IQ. I do not contend that these effects are likely or even plausible, but they are at least theoretically possible. Using blood group to assess ancestry, especially if one also controls for skin tone (which may be associated with blood group, since both are presumed to be markers of degree of African ancestry), obviously eliminates this possibility. Today, this can also be done by looking at subjects’ actual DNA, which obviously has the potential to provide a more accurate measure of ancestry than either skin tone or blood group (e.g. Lasker et al 2019).

[42] More recently, a better study has been published regarding the association between European admixture and intelligence among African-Americans, which used genetic data to assess ancestry, and actually sought to control for the possible confounding effect of skin-colour and appearance (Lasker et al 2019). Unlike the blood-group studies, this largely supports the hereditarian hypothesis. However, this was not available at the time Lynn authored his book. Also, it ought to be noted that it was published in a controversial pay-to-publish academic journal, and therefore the quality of peer review to which the paper was subjected may be open to question. No doubt in the future, with the reduced costs of genetic testing, more studies using a similar methodology will be conducted, finally resolving the question of the relative contributions of heredity and environment to the black-white test score gap in America, and perhaps disparities between other ethnic groups too.

[43] It is a fallacy, however, to assume that what is true for those foraging peoples that have managed to survive as foragers in modern times and hence come to be studied by anthropologists was necessarily also true of all foraging groups before the transition to agriculture. On the contrary, those foraging groups that have survived into modern times, tend to have done so only in the ecologically most marginal and barren environments (e.g. the Kalahari Desert occupied by the San), since these areas are of least use to agriculturalists, and therefore represent the only regions where more technologically and socially advanced agriculturalists have yet to displace them (see Ember 1978). However, this would seem to suggest that African hunter-gatherers, prior to the expansion of Bantu agriculturalists, would have occupied more fertile areas, and therefore might have had even less need to rely on hunting than do contemporary hunter-gatherers such as the San, who are today largely restricted to the Kalahari Desert.

[44] Here, interestingly, Lynn departs from the theory of fellow race realist, and fellow exponent of ‘Cold Winters Theory’, Philippe Rushton. The latter, in his book, Race, Evolution and Behavior (which I have reviewed here), argues that: 

“Hunting in the open grasslands of northern Europe was more difficult than hunting in the woodlands of the tropics and subtropics where there is plenty of cover for hunters to hide in” (Race, Evolution and Behavior: p228). 

In contrast, Lynn argues that “open grasslands”, albeit on the African Savannah rather than in Northern Europe, actually made things harder, not for predators, but rather for prey – or at least arboreal primate prey. Thus, Lynn writes: 

“The other principal problem of the hominids living in open grasslands would have been to protect themselves against lions, cheetahs and leopards. Apes and monkeys escape from the big cats by climbing into trees and swinging or jumping from one tree to another. For the Australopithecines and the later hominids in open grasslands this was no longer possible” (p203). 

[45] To clarify, this is not to say that either San Bushmen or Australian Aborigines evolved primarily in these desert environments. On the contrary, many of them formerly occupied more fertile areas, before being displaced by more advanced neighbours, Bantu agriculturalists in the case of the Khoisan, and European (more specifically British) colonizers, in the case of Aborigines. However, that they are nevertheless capable of surviving in these demanding desert environments suggests either:

(1) They are more intelligent than Lynn concludes; or
(2) That surviving in challenging environments does not require the level of intelligence that Lynn’s Cold Winters Theory supposes.

[46] Besides Eskimos, another potential test case for ‘Cold Winters Theory’ are the Sámi (or Lapps) of Northern Scandinavia. Like Eskimos, they have inhabited an extremely cold, northern environment for many generations and are genetically, and morphologically, quite distinct from other populations. Also, again like Eskimos, they maintained a foraging lifestyle until modern times. However, unlike other cold-adapted populations, the Sámi have, according to Carleton Coon, “very small heads” and hence presumably not especially large brains, though he also reports that their head-size is actually large in proportion to body-size (The Races of Europe: p266; p303). According to Armstrong et al (2014), the only study of Sámi cognitive ability of which I am aware, the average IQ of the Sámi is almost identical to that of neighbouring populations of Finns (about 101).

[47] Lynn gives the same explanation for the relatively lower recorded IQs of Mongolians, as compared to other East Asians (p240).

References

Ali et al (2020) Genome-wide analyses disclose the distinctive HLA architecture and the pharmacogenetic landscape of the Somali population. Scientific Reports 10: 5652.

Anderson M (2015) Chapter 1: Statistical Portrait of the U.S. Black Immigrant Population. In A Rising Share of the U.S. Black Population Is Foreign Born. Pew Research Center: Social & Demographic Trends, April 9, 2015. 

Armstrong et al (2014) Cognitive abilities amongst the Sámi population. Intelligence 46: 35-39.

Aziz et al (2004) Intellectual development of children born of mothers who fasted in Ramadan during pregnancy. International Journal for Vitamin and Nutrition Research 74: 374-380.

Beals et al (1984) Brain Size, Cranial Morphology, Climate, and Time Machines. Current Anthropology 25(3), 301–330.

Buj (1981) Average IQ values in various European countries. Personality and Individual Differences 2(2): 168-9.

Carr (1993) Twenty Years a Growing: A Research Note on Gains in the Intelligence Test Scores of Irish Children over Two Decades. Irish Journal of Psychology 14(4): 576-582.

Chisala (2015a) The IQ Gap Is No Longer a Black and White Issue. Unz Review, 25 June.

Chisala (2015b) Closing the Black-White IQ Gap Debate, Part I. Unz Review, 5 October.

Chisala (2015c) Closing the Black-White IQ Gap Debate, Part 2. Unz Review, 22 October.

Chisala (2019) Why Do Blacks Outperform Whites in UK Schools? Unz Review, 29 November.

Chopra (1966) Relationship of Caste System with Measured Intelligence and Academic Achievement of Students in India. Social Forces 44(4): 573-576.

Coon (1955) Some Problems of Human Variability and Natural Selection in Climate and Culture. American Naturalist 89(848): 257-279.

Drinkwater (1975) Visual memory skills of medium contact aboriginal children. Australian Journal of Psychology 28(1): 37-43.

Dutton (2020) Why Islam Makes You Stupid . . . But Also Means You’ll Conquer The World (Whitefish, MT: Washington Summit, 2020).

Ember (1978) Myths about Hunter-Gatherers. Ethnology 17(4): 439-448.

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana, 2:102–114. 

Fouarge et al (2019) Personality traits, migration intentions, and cultural distance. Papers in Regional Science 98(6): 2425-2454

Fuerst (2015) The Measured proficiency of Somali Americans, HumanVarieties.org.

Fuerst & Lynn (2021) Recent Studies of Ethnic Differences in the Cognitive Ability of Adolescents in the United Kingdom, Mankind Quarterly 61(4):987-999.

Gill & Byrt (1973). The Standardization of Raven’s Progressive Matrices and the Mill Hill Vocabulary Scale for Irish School Children Aged 6–12 Years. University College, Cork: MA Thesis.

Hodgson et al (2014) Early Back-to-Africa Migration into the Horn of Africa. PLoS Genetics 10(6): e1004393.

Jokela (2009) Personality predicts migration within and between U.S. states. Journal of Research in Personality 43(1): 79-83.

Kearins (1981) Visual spatial memory in Australian Aboriginal children of desert regions. Cognitive Psychology 13(3): 434-460.

Kearins (1986) Visual spatial memory in aboriginal and white Australian children. Australian Journal of Psychology 38(3): 203-214.

Khan (2011a) The genetic affinities of Ethiopians. Discover Magazine, January 10.

Khan (2011b) A genomic sketch of the Horn of Africa. Discover Magazine, June 10

Klekamp et al (1987) A quantitative study of Australian aboriginal and Caucasian brains. Journal of Anatomy 150: 191–210.

Knapp & Seagrim (1981) Visual memory of Australian aboriginal children and children of European descent. International Journal of Psychology 16(1-4): 213-231.

Kuttner (1968) Use of Accentuated Environmental Inequalities in Research on Racial Differences, Mankind Quarterly 8(1): 147-160.

Langan & LoSasso (2002) Discussions on Genius and Intelligence: Mega Foundation Interview with Arthur Jensen (Eastport, New York: MegaPress).

Lasker et al (2019) Global ancestry and cognitive ability. Psych 1(1): 431-459.

Li et al (2008) Worldwide Human Relationships Inferred from Genome-Wide Patterns of Variation, Science 319(5866): 1100-4.

Loehlin et al (1973) Blood group genes and negro-white ability differences. Behavior Genetics 3(3): 263-270.

Lynn (2002) Skin Color and Intelligence in African-Americans. Population & Environment 23: 201-207. 

Lynn (2007) IQ of Mongolians. Mankind Quarterly 47(3).

Lynn (2010) In Italy, north–south differences in IQ predict differences in income, education, infant mortality, stature, and literacy. Intelligence, 38, 93-100. 

Lynn (2011) Intelligence of the Pygmies. Mankind Quarterly, 51(4), 464–470

Lynn (2015) Selective Emigration, Roman Catholicism and the Decline of Intelligence in the Republic of Ireland. Mankind Quarterly 55(3): 242-253.

Lynn & Cheng (2018) Caste Differences in Intelligence, Education and Earnings in India and Nepal: A Review Mankind Quarterly 59(1).

Lynn & Yadav (2015) Differences in cognitive ability, per capita income, infant mortality, fertility and latitude across the states of India, Intelligence 49: 179-185

Mackintosh (2007) Review of Race differences in intelligence: An Evolutionary Hypothesis [sic], by Richard Lynn. Intelligence 35(1): 94-96.

Mackintosh & Mascie-Taylor (1985). The IQ question. In Education for All. Cmnd paper 4453. London: HMSO. 

Malloy (2006) A World of Difference: Richard Lynn Maps World Intelligence. Gnxp.com, February 01.

Malloy (2014) HVGIQ: Vietnam. HumanVarieties.org, June 19.

Pearce & Dunbar (2011) Latitudinal variation in light levels drives human visual system size, Biology Letters, 8(1): 90–93. 

Pereira et al (2005) African female heritage in Iberia: a reassessment of mtDNA lineage distribution in present times. Human Biology 77(2): 213–29.

Richards et al (2003) Extensive Female-Mediated Gene Flow from Sub-Saharan Africa into Near Eastern Arab Populations. American Journal of Human Genetics 72(4): 1058–1064.

Rushton & Ankney (2009) Whole brain size and general mental ability: A review. International Journal of Neuroscience 119: 691-731.

Sailer (1996) Great Black Hopes. National Review, August 12.

Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population. Human Genetics 39(1): 69-86.

Templer (2010) The Comparison of Mean IQ in Muslim and Non-Muslim Countries. Mankind Quarterly 50(3): 188-209.

Torrence (1983) Time budgeting and hunter-gatherer technology. In G. Bailey (Ed.). Hunter-Gatherer Economy in Prehistory: A European Perspective. Cambridge, Cambridge University Press.

West et al (1992) Cognitive and educational attainment in different ethnic groups, Journal of Biosocial Science 24(4): 539-554.

Woodley (2009) Inbreeding depression and IQ in a study of 72 countries. Intelligence 37(3): 268-276.

Zhao et al (2021) Global, regional, and national burden of mortality associated with non-optimal ambient temperatures from 2000 to 2019: a three-stage modelling study. Lancet Planetary Health 5(7): E415-E425.

Richard Dawkins’ ‘The Selfish Gene’: Selfish Genes, Selfish Memes and Altruistic Phenotypes

‘The Selfish Gene’, by Richard Dawkins, Oxford University Press, 1976.

Selfish Genes ≠ Selfish Phenotypes

Richard Dawkins’s ‘The Selfish Gene’ is among the most celebrated, but also the most misunderstood, works of popular science.

Thus, among people who have never read the book (and, strangely, a few who apparently have) Dawkins is widely credited with arguing that humans are inherently selfish, that this disposition is innate and inevitable, and even, in some versions, that behaving selfishly is somehow justified by our biological programming, the titular ‘Selfish Gene’ being widely misinterpreted as referring to a gene that causes us to behave selfishly.

Actually, Dawkins is not concerned, either directly or primarily, with humans at all.

Indeed, he professes to be “not really very directly interested in man”, whom he dismisses as “a rather aberrant species” and hence peripheral to his own interest, namely how evolution has shaped the bodies and especially the behaviour of organisms in general (Dawkins 1981: p556).

‘The Selfish Gene’ is, then, unusual, if not unique, among bestselling works of popular science, in being a work not of human biology, nor even of non-human zoology, ethology or natural history, but rather of theoretical biology.

Moreover, in referring to genes as ‘selfish’, Dawkins has in mind not a trait that genes encode in the organisms they create, but rather a trait of the genes themselves.

In other words, individual genes are themselves conceived of as ‘selfish’ (in a metaphoric sense), in so far as they have evolved by natural selection to selfishly promote their own survival and replication by creating organisms designed to achieve this end.

Indeed, ironically, as Dawkins is at pains to emphasise, selfishness at the genetic level can actually result in altruism at the level of the organism or phenotype.

This is because, where altruism is directed towards biological kin, such altruism can facilitate the replication of genes shared among relatives by virtue of their common descent. This is referred to as kin selection or inclusive fitness theory and is one of the central themes of Dawkins’ book.

Yet, despite this, Dawkins still seems to see organisms themselves, humans very much included, as fundamentally selfish – albeit a selfishness tempered by a large dose of nepotism.

Thus, in his opening paragraphs no less, he cautions:

“If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from our biological nature. Let us try to teach generosity and altruism, because we are born selfish” (p3).

The Various Editions

In later editions of his book, namely those published since 1989, Dawkins tempers this rather cynical view of human and animal behaviour by the addition of a new chapter – Chapter 12, titled ‘Nice Guys Finish First’.

This new chapter deals with the subject of reciprocal altruism, a topic he had actually already discussed earlier, together with the related, but distinct, phenomenon of mutualism,[1] in Chapter 10 (entitled, ‘You Scratch My Back, I’ll Ride on Yours’).

In this additional chapter, he essentially summarizes the work of political scientist Robert Axelrod, as discussed in Axelrod’s own book The Evolution of Co-Operation. This deals with evolutionary game theory, specifically the iterated prisoner’s dilemma, and the circumstances in which a cooperative strategy can, by cooperating only with those who have a history of reciprocating, survive, prosper, evolve, and, in the long term, ultimately outcompete and hence displace those strategies which maximize only short-term self-interest.

Post-1989 editions also include another new chapter titled ‘The Long Reach of the Gene’ (Chapter 13).

If, in Chapter 12, the first additional chapter, Dawkins essentially summarised the contents of Axelrod’s book, The Evolution of Cooperation, then, in Chapter 13, he summarizes his own book, The Extended Phenotype.

In addition to these two additional whole chapters, Dawkins also added extensive endnotes to these post-1989 editions.

These endnotes clarify various misunderstandings which arose from how he explained himself in the original version, defend Dawkins against some criticisms levelled at certain passages of the book, and explain how the science has progressed in the years since its first publication, including identifying things that he and other biologists got wrong.

With still more recent editions, the content of ‘The Selfish Gene’ has burgeoned still further. Thus, the 30th Anniversary Edition boasts only a new introduction; the 40th Anniversary Edition, published in 2016, boasts a new Epilogue too. Meanwhile, the latest, so-called Extended Selfish Gene boasts, in addition to this, two whole new chapters.

Actually, these two new chapters are not all that new, being lifted wholesale from, once again, The Extended Phenotype, a work whose contents Dawkins has already, as we have seen, summarized in Chapter 13 (‘The Long Reach of the Gene’), itself an earlier addition to the book’s seemingly ever expanding contents list.

The decision not to entirely rewrite ‘The Selfish Gene’ was apparently that of Dawkins’ publisher, Oxford University Press.

This was probably the right decision. After all, ‘The Selfish Gene’ is not a mere undergraduate textbook, in need of revision every few years in order to keep up-to-date with the latest published research.

Rather, it was a landmark work of popular science, and indeed of theoretical biology, that introduced a new approach to understanding the evolution of behaviour and physiology to a wider readership, composed of biologist and non-biologist alike, and deserves to stand in its original form as a landmark in the history of science.

However, while new introductions and a new epilogue are standard fare when republishing a classic work several years after first publication, the addition of four (or two, depending on the edition) whole new chapters strikes me as less readily defensible.

For one thing, they distort the structure of the book, and, though interesting in and of themselves, always read to me rather as if they have been tacked on at the end as an afterthought – as indeed they have.

The book certainly reads best, in a purely literary sense, in its original form (i.e. pre-1989 editions), where Dawkins concludes with an optimistic, if fallacious, literary flourish (see below).

Moreover, these additional chapters reek of a shameless marketing strategy, designed to deceive new readers into paying the full asking price for a new edition, rather than buying a cheaper second-hand copy or just keeping their old one.

This is especially blatant in respect of the book’s latest incarnation, The Extended Selfish Gene, which, according to the information provided on Oxford University Press’s own website, was released only three months after the previous 40th Anniversary Edition, yet includes two additional chapters.

One frankly expects better from so prestigious a publisher as Oxford University Press, and indeed so celebrated a biologist and science writer as Richard Dawkins, especially as I suspect neither is especially short of money.

If I were advising someone who has never read the book on which edition to buy, I would probably recommend a second-hand copy of any post-1989 edition, since these can now be picked up very cheaply, and include the additional endnotes, which I personally often found very interesting.

On the other hand, if you want to read three additional chapters either from or about The Extended Phenotype, then you are probably best advised to buy, instead, well… The Extended Phenotype – as this is also now a rather old book of which, as with ‘The Selfish Gene’, second-hand copies can now be picked up very cheaply.

The ‘Gene’s-Eye-View’ of Evolution

The Selfish Gene is a seminal work in the history of biology primarily because Dawkins takes the so-called gene’s-eye-view of evolution to its logical conclusion. To this extent, contrary to popular opinion, Dawkins’ exposition is not merely a popularization, but actually breaks new ground theoretically.

Thus, John Maynard Smith famously talked of kin selection by analogy with ‘group selection’ (Smith 1964). However, William Hamilton, who formulated the theory underlying these concepts, always disliked the term ‘kin selection’ and talked instead of the direct, indirect and inclusive fitness of organisms (Hamilton 1964a; 1964b).

Dawkins, however, takes this line of thinking a step further, looking – not at the fitness or reproductive success of organisms or phenotypes – but rather at the success in self-replication of the genes themselves.

Thus, although he stridently rejects group selection, Dawkins replaces it, not with the familiar individual-level selection of classical Darwinism, but rather with a new focus on selection at the level of the gene itself.

Abstract Animals?

Much of the interest, and no little of the controversy, arising from ‘The Selfish Gene’ concerned, of course, the potential application of its theory to humans. However, in the book itself, humans – whom, as mentioned above, Dawkins dismisses as a “rather aberrant species” in which he professes to be “not really very directly interested” (Dawkins 1981: p556) – are actually mentioned only occasionally and briefly.

Indeed, most of the discussion is purely theoretical. Even the behaviour of non-human animals is described only for illustrative purposes, and even these illustrative examples often involve simplified hypothetical creatures rather than descriptions of the behaviour of real organisms.

For example, he illustrates his discussion of the relative pros and cons of either fighting or submitting in conflicts over access to resources by reference to ‘hawks’ and ‘doves’ – but is quick to acknowledge that these are hypothetical and metaphoric creatures, with no connection to the actual bird species after whom they are named:

The names refer to conventional human usage and have no connection with the habits of the birds from whom the names are derived: doves are in fact rather aggressive birds” (p70).
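Dawkins’ hawk-dove game can be made concrete with a little arithmetic. In the standard formulation, with V the value of the contested resource and C the cost of injury (where V < C), neither pure strategy is stable, and the population settles at a hawk frequency of V/C. The following sketch, with illustrative payoff values of my own choosing rather than Dawkins’ exact figures, simulates this convergence:

```python
# Hawk-dove game, replicator-dynamics sketch.
# V = value of the contested resource, C = cost of serious injury (V < C).
# These payoff values are illustrative, not Dawkins' own figures.
V, C = 2.0, 4.0

def payoffs(p):
    """Expected payoffs to a hawk and to a dove when a fraction p of
    the population plays hawk."""
    hawk = p * (V - C) / 2 + (1 - p) * V      # vs hawk: risk injury; vs dove: take all
    dove = p * 0.0 + (1 - p) * (V / 2)        # vs hawk: retreat; vs dove: share
    return hawk, dove

# Hawk frequency grows whenever hawks out-earn the population average.
p = 0.1
for _ in range(10_000):
    h, d = payoffs(p)
    mean = p * h + (1 - p) * d
    shift = C  # make fitnesses positive before the replicator update
    p = p * (h + shift) / (mean + shift)

print(round(p, 3))  # converges to the ESS hawk frequency V/C = 0.5
```

Whatever the starting frequency, the population is drawn to the mixed equilibrium at which hawks and doves earn identical average payoffs – which is precisely what makes it an evolutionarily stable strategy.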

Indeed, even Dawkins’ titular “selfish genes” are rather abstract and theoretical entities. Certainly, the actual chemical composition and structure of DNA is of only peripheral interest to him.

Indeed, often he talks of “replicators” rather than “genes” and is at pains to point out that selection can occur in respect of any entity capable of replication and mutation, not just DNA or RNA. (Hence his introduction of the concept of memes: see below).

Moreover, Dawkins uses the word ‘gene’ in a somewhat different sense to the way the word is employed by most other biologists. Thus, following George C. Williams in Adaptation and Natural Selection, he defines a “gene” as:

Any portion of chromosomal material that potentially lasts for enough generations to serve as a unit of natural selection” (p28).

This, of course, makes his claim that genes are the principal unit of selection something approaching a tautology or circular argument.

Sexual Selection in Humans?

Where Dawkins does mention humans, it is often to point out the extent to which this “rather aberrant species” apparently conspicuously fails to conform to the predictions of selfish-gene theory.

For example, at the end of his chapter on sexual selection (Chapter 9, titled “Battle of the Sexes”) he observes that, in contrast to most other species, among humans, at least in the West, it seems to be females who are most active in using physical appearance as a means of attracting mates:

One feature of our own society that seems decidedly anomalous is the matter of sexual advertisement… It is strongly to be expected on evolutionary grounds that where the sexes differ, it should be the males that advertise and the females that are drab… [Yet] there can be no doubt that in our society the equivalent of the peacock’s tail is exhibited by the female, not the male” (p164).

Thus, among most other species, it is males who have evolved more elaborate plumages and other flashy, sexually selected ornaments. In contrast, females of the same species are often comparatively drab in appearance.

Yet, in modern western societies, Dawkins observes, it is more typically women who “paint their faces and glue on false eyelashes” (p164).

Here, it is notable that Dawkins, being neither an historian nor an anthropologist, is careful to restrict his comments to “our own society” and, elsewhere, to “modern western man”.

One explanation, then, is that it is only our own WEIRD, western societies that are anomalous.

Thus, Matt Ridley, in The Red Queen, proposes that maybe:

Modern western societies have been in a two-century aberration from which they are just emerging. In Regency England, Louis XIV’s France, medieval Christendom, ancient Greece, or among the Yanomamö, men followed fashion as avidly as women. Men wore bright colours, flowing robes, jewels, rich materials, gorgeous uniforms, and gleaming, decorated armour. The damsels that knights rescued were no more fashionably accoutred than their paramours. Only in Victorian times did the deadly uniformity of the black frock coat and its dismal modern descendant, the grey suit, infect the male sex, and only in this century have women’s hemlines gone up and down like yo-yos” (The Red Queen: p292).

There is an element of truth here. Indeed, the claim is corroborated by Darwin, who observed in The Descent of Man:

In most, but not all parts of the world, the men are more highly ornamented than the women, and often in a different manner; sometimes, though rarely, the women are hardly ornamented at all” (The Descent of Man).

However, I suspect Ridley’s observation partly reflects a misunderstanding of the different purposes for which men and women use clothing, including bright and elaborate clothing.

Indeed, it rather reminds me of Margaret Mead’s claim that, among the Tschambuli of Papua New Guinea, sex-roles were reversed because, here, she reported, it was men who painted their faces and wore ‘make-up’, not women.

Yet what Mead neglected to mention, or perhaps failed to understand, was that the ‘make-up’ and face-paint that she evidently found so effeminate was actually war-paint that a Tschambuli warrior was only permitted to wear after killing his first enemy warrior, an obviously very male activity (see Homicide: Foundations of Human Behavior: p152).

Darwin himself, incidentally, although alluding to the “highly ornamented” appearance of men of many cultures in the passage from The Descent of Man quoted above, well understood the different purposes of male and female ornamentation, writing in this same work:

Women are everywhere conscious of the value of their own beauty; and when they have the means, they take more delight in decorating themselves with all sorts of ornaments than do men” (The Descent of Man).

Of course, clothes and makeup are an aspect of behaviour rather than morphology, and thus more directly analogous to, say, the nests (or, more precisely, the bowers) created by male bowerbirds than the tail of the peacock.

However, behaviour is, in principle, no less subject to natural selection (and sexual selection) than is morphology, and therefore the paradox remains.

Moreover, even concentrating our focus exclusively on morphology, the sex difference still seems to remain.

Thus, perhaps the closest thing to a ‘peacock’s tail’ in humans (i.e. a morphological trait designed to attract mates) is a female trait, namely breasts.

Thus, as Desmond Morris first observed, in humans, the female breasts seem to have been co-opted for a role in sexual selection, since, unlike among other mammals, women’s breasts are permanent, from puberty on, not present only during lactation, and composed primarily of fatty tissues, not milk (Møller 1995; Manning et al 1997; Havlíček et al 2016).

In contrast, men possess no obvious equivalent of the peacock’s tail (i.e. a trait that has evolved in response to female choice) – though Geoffrey Miller makes a fascinating (but ultimately unconvincing) case that the human brain may represent a product of sexual selection (see The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature).[2]

Interestingly, in an endnote to post-1989 editions of The Selfish Gene, Dawkins himself tentatively speculates that maybe the human penis might represent a sexually-selected ‘fitness indicator’.

Thus, he points out that the human penis is large as compared to that of other primates, yet also lacks the baculum (i.e. penis bone) that, in other primates, facilitates erections. This, he speculates, could mean that the capacity to maintain an erection might represent an honest signal of health, in accordance with Zahavi’s handicap principle (p307-8).

However, it is more likely that the large size, or more specifically the large width, of the human penis reflects instead a response to the increased size of the vagina, which itself increased in size to enable human females to give birth to large-brained, and hence large-headed, infants (see Bowman 2008; Sexual Selection and the Origins of Human Mating Systems: pp61-70).[3]

How then can we make sense of this apparent paradox, whereby, contrary to Bateman’s principle, sexual selection appears to have operated more strongly on women than on men?

For his part, Dawkins himself offers no explanation, merely lamenting:

What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (p165).

However, in respect of what David Buss calls short-term mating strategies (i.e. casual sex, hook-ups and one night stands), this is certainly not the case.

On the contrary, patterns of everything from prostitution and rape to erotica and pornography consumption confirm that, in respect of short-term ‘commitment’-free casual sex, it remains women who are very much in demand and men who are the ardent pursuers (see The Evolution of Human Sexuality: which I have reviewed here).

Thus, in one study conducted on a University campus, 72% of male students agreed to go to bed with a female stranger who approached them with a request to this effect. In contrast, not a single one of the 96 females approached agreed to the same request from a male questioner (Clark and Hatfield 1989).

(What percentage of the students sued the university for sexual harassment was not revealed.)

However, humans also form long-term pair-bonds to raise children, and, in contrast to males of most other mammalian species, male parents often invest heavily in the offspring of such unions.

Men are therefore expected to be choosier in respect of long-term romantic partners (e.g. wives) than they are in respect of casual sex partners. This may then explain the relatively high levels of reproductive competition engaged in by human females, including high levels of what Dawkins calls ‘sexual advertising’.

Reproductive competition between women may be especially intense in western societies practising what Richard Alexander termed socially-imposed monogamy.

This refers to societies where there are large differences between males in social status and resource holdings, but where even wealthy males are prohibited by law from marrying multiple women at once.[4]

Here, there may be intense competition as between females for exclusive rights to resource-abundant ‘alpha male’ providers (Gaulin and Boser 1990).

Thus, to some extent, the levels of sexual competition engaged in by women in western societies may indeed be higher than in non-western, polygynous societies.

This, then, might explain why females use what Dawkins terms ‘sexual advertising’ to attract long-term mates (i.e. husbands). However, it still fails to explain why males don’t – or, at least, don’t seem to do so to anything like the same degree.

Darwin himself may have come closer than many of his successors to arriving at an answer, observing that:

Man is more powerful in body and mind than woman, and in the savage state he keeps her in a far more abject state of bondage than does the male of any other animal; therefore it is not surprising that he should have gained the power of selection” (The Descent of Man).

Therefore, in contrast to mating patterns in modern western societies, female choice may actually have played a surprisingly limited role in human evolutionary history, given that, in most pre-modern societies, arranged marriages were, and are, the norm.

Male mating competition may then have taken the form of male-male contest competition (e.g. fighting) rather than displaying to females – i.e. what Darwin called ‘intra-sexual selection’ rather than ‘inter-sexual selection’.

Thus, while men indeed possess no obvious analogue to the peacock’s tail, they do seem to possess traits designed for fighting – namely considerably greater levels of upper-body musculature and violent aggression as compared to women (see Puts 2010).

In other words, human males may not have any obvious ‘peacock’s tail’, but perhaps we do have, if you like, stag’s antlers.

From Genes to Memes

Dawkins’ eleventh chapter, which was, in the original version of the book (i.e. pre-1989 editions), the final chapter, is also the only chapter to focus exclusively on humans.

Entitled ‘Memes: The New Replicators’, it focuses again on the extent to which humans are indeed an “aberrant species”, being subject to cultural as well as biological evolution to a unique degree.

Interestingly, however, Dawkins argues that the principles of natural selection discussed in the preceding chapters of the book can be applied just as usefully to cultural evolution as to biological evolution.

In doing so, he coins the concept of the meme as the cultural unit of selection, equivalent to a gene, passing between minds analogously to a virus.

This term has proved enormously influential in intellectual discourse, and has even passed into popular usage.

The analogy of memes to genes certainly makes for an interesting thought-experiment. However, like any analogy, it can be taken too far.

Certainly ideas can be viewed as spreading between people, and as having various levels of fitness depending on the extent to which they catch on.

Thus, to take one example, Dawkins himself famously described religions such as Islam and Christianity as Viruses of the Mind, which travel between, and infect, human minds in a manner analogous to a virus.

Thus, proponents of Darwinian medicine contend that pathogens such as flu and the common cold produce symptoms such as coughing, sneezing and diarrhea precisely because these behaviours promote the spread and replication of the pathogen to new hosts through the bodily fluids thereby expelled.

Likewise, rabies causes dogs and other animals to become aggressive and bite, which likewise facilitates the spread of the rabies virus to new hosts.[5]

By analogy, successful religions are typically those that promote behaviours that facilitate their own spread.

Thus, a religion that commands its followers to convert non-believers, persecute apostates, ‘be fruitful and multiply’, and indoctrinate their offspring with its beliefs is, for obvious reasons, likely to spread faster and have greater longevity than a religious doctrine that commands its adherents to become celibate hermits and declares proselytism a mortal sin.

Thus, Christians are admonished by scripture to save souls and preach the gospel among heathens; while Muslims are, in addition, admonished to wage holy war against infidels and persecute apostates.

These behaviours facilitate the spread of Christianity and Islam just as surely as coughing and sneezing promote the spread of the flu.[6]
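The epidemiological analogy can even be pushed into a toy simulation. The sketch below, with rates invented purely for illustration, pits a proselytising doctrine against a non-proselytising one in the same population of minds:

```python
# Toy 'epidemiology of ideas': two doctrines competing for the same
# population of minds. All rates are invented for illustration only.
N = 10_000                 # population size
a, b = 10.0, 10.0          # initial adherents of doctrines A and B
beta_a = 0.3               # A commands proselytism: high transmission rate
beta_b = 0.0               # B commands celibate withdrawal: no transmission
mu = 0.02                  # per-step attrition (death, apostasy) for both

for _ in range(500):
    unconverted = N - a - b
    a += beta_a * a * unconverted / N - mu * a   # conversions minus losses
    b += beta_b * b * unconverted / N - mu * b

print(round(a), round(b))  # A saturates most of the population; B all but vanishes
```

The proselytising doctrine comes to dominate the population while its quietist rival dwindles toward extinction – the memetic analogue of differential transmissibility among pathogens.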

Like genes, memes can also be said to mutate, though this occurs not only through random (and not so random) copying errors, but also by deliberate innovation by the human minds they ‘infect’. Memetic mutation, then, is not entirely random.

However, whether this is a useful and theoretically or empirically productive way of conceptualizing cultural change remains to be seen.

Certainly, I doubt whether ‘memetics’, as it is sometimes termed, will ever be a rigorous science comparable to genetics, after which it is named, as some of the concept’s more enthusiastic champions have sometimes envisaged. Neither, I suspect, did Dawkins ever originally intend or envisage it as such, having seemingly coined the idea as something of an afterthought.

At any rate, one of the main factors governing the ‘infectiousness’ or ‘fitness’ of a given meme is the extent to which the human mind is receptive to it; and the human mind is itself a product of biological evolution.

The basis for understanding human behaviour, even cultural behaviour, is therefore how natural selection has shaped the human mind – in other words evolutionary psychology not memetics.

Thus, humans will surely have evolved resistance to memes that are contrary to their own genetic interests (e.g. celibacy) as a way of avoiding exploitation and manipulation by third-parties.

For more recent discussion of the status of the meme concept (the ‘meme meme’, if you like) see The Meme Machine; Virus of the Mind; The Selfish Meme; and Darwinizing Culture.

Escaping the Tyranny of Selfish Replicators?

Finally, at least in the original, non-‘extended’ editions of the book, Dawkins concludes ‘The Selfish Gene’, with an optimistic literary flourish, emphasizing once again the alleged uniqueness of the “rather aberrant” human species.[7]

Thus, his final paragraph ends:

We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on earth, can rebel against the tyranny of the selfish replicators” (p201).

This makes for a dramatic, and optimistic, conclusion. It is also flattering to anthropocentric notions of human uniqueness, and of free will.

Unfortunately, however, it ignores the fact that the “we” who are supposed to be doing the rebelling are ourselves a product of the same process of natural selection and, indeed, of the same selfish replicators against whom Dawkins calls on us to rebel. Indeed, even the (alleged) desire to revolt is a product of the same process.[8]

Likewise, in the book’s opening paragraphs, Dawkins proposes:

Let us try to teach generosity and altruism, because we are born selfish. Let us understand what our selfish genes are up to, because we may then at least have the chance to upset their designs.” (p3)

However, this ignores, not only that the “us” who are to do the teaching and who ostensibly wish to instill altruism in others are ourselves the product of this same evolutionary process and these same selfish replicators, but also that the subjects whom we are supposed to indoctrinate with altruism are themselves surely programmed by natural selection to be resistant to any indoctrination or manipulation by third-parties to behave in ways that conflict with their own genetic interests.

In short, the problem with Dawkins’ cop-out Hollywood Ending is that, as anthropologist Vincent Sarich is quoted as observing, Dawkins has himself “spent 214 pages telling us why that cannot be true”. (See also Straw Dogs: Thoughts on Humans and Other Animals: which I have reviewed here).[9]

The preceding 214 pages, however, remain an exciting, eye-opening and stimulating intellectual journey, even over thirty years after their original publication.

__________________________

Endnotes

[1] Mutualism is distinguished from reciprocal altruism by the fact that, in the former, both parties receive an immediate benefit from their cooperation, whereas, in the latter, for one party, the reciprocation is delayed. It is reciprocal altruism that therefore presents the greater problem for evolution, and for evolutionists, because, here, there is the problem of policing the agreement – i.e. how is evolution to ensure that the immediate beneficiary does indeed reciprocate, rather than simply receiving the benefit without later returning the favour (a version of the free rider problem)? The solution, according to Axelrod, is that, where parties interact repeatedly over time, they come to engage in reciprocal altruism only with other parties with a proven track record of reciprocity, or at least without a proven track record of failing to reciprocate.

[2] Certainly, many male traits are attractive to women (e.g. height, muscularity). However, these also have obvious functional utility, not least in increasing fighting ability, and hence probably have more to do with male-male competition than female choice. In contrast, many sexually-selected traits are positive handicaps to their bearers in all spheres except attracting mates. Indeed, one influential theory of sexual selection claims that it is precisely because they represent a handicap that they serve as an honest indicator of fitness and hence a reliable index of genetic quality.

[3] Thus, Edwin Bowman writes:

As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008).

Similarly, in their controversial book Human Sperm Competition: Copulation, Masturbation and Infidelity, Robin Baker and Mark Bellis persuasively contend:

The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: p171).

In turn, larger vaginas probably select for larger penises in order to fill the vagina (Bowman 2008).

According to Baker and Bellis, this is because the human penis functions as a suction piston, serving to remove the sperm deposited by rival males, as a form of sperm competition – a theory that actually has some experimental support, not least from some hilarious research involving sex toys of differing sizes and shapes (Gallup et al 2003; Gallup and Burch 2004; Goetz et al 2005; see also Why is the Penis Shaped Like That?).

Thus, according to this view:

In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171).

However, even in the absence of sperm competition, Alan Dixson observes:

In primates and other mammals the length of the erect penis and vaginal length tend to evolve in tandem. Whether or not sperm competition occurs, it is necessary for males to place ejaculates efficiently, so that sperm have the best opportunity to migrate through the cervix and gain access to the higher reaches of the female tract” (Sexual Selection and the Origins of Human Mating Systems: p68).

[4] In natural conditions, it is assumed that, in egalitarian societies, where males have roughly equal resource holdings, they will each attract an equal number of wives (i.e. given an equal sex ratio, one wife for each man). However, in highly socially-stratified societies, where there are large differences in resource holdings between men, it is expected that wealthier males will be able to support, and provide for, multiple wives, and will use their greater resource-holdings for this end, so as to maximize their reproductive success (see here). This is a version of the polygyny threshold model (see Kanazawa and Still 1999).
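The logic of the polygyny threshold model can be illustrated with a minimal sketch, using invented resource figures: each female in turn joins whichever male offers her the largest per-wife share of his resources, even if he is already married:

```python
# Polygyny threshold model, minimal sketch with invented resource figures:
# a female joins whichever male offers her the largest per-wife share of
# his resources, even if he already has wives.
males = [100.0, 60.0, 20.0, 20.0, 20.0]   # resource holdings of five males
wives = [0] * len(males)

def share(i):
    """Resources a prospective new wife would receive from male i."""
    return males[i] / (wives[i] + 1)

# Ten females choose in turn.
for _ in range(10):
    best = max(range(len(males)), key=share)
    wives[best] += 1

print(wives)  # the two resource-rich males end up with multiple wives
```

Under these assumptions, the wealthy males accumulate multiple wives while their poorer rivals end up monogamous or unmated – the basic prediction of the model.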

[5] There are also pathogens that affect the behaviour of their hosts in more dramatic ways. For example, one parasite, Toxoplasma gondii, when it infects a mouse, reduces the mouse’s aversion to cat urine, which is theorized to increase the risk of its being eaten by a cat, facilitating the reproductive life-cycle of the pathogen at the expense of that of its host. Similarly, the fungus Ophiocordyceps unilateralis turns ants into so-called zombie ants, which willingly leave the safety of their nests, and climb and lock themselves onto a leaf, again in order to facilitate the life cycle of their parasite at the expense of their own. Another parasite, Dicrocoelium dendriticum (aka the lancet liver fluke), also affects the behaviour of the ants it infects, causing them to climb to the tip of a blade of grass during daylight hours, increasing the chance they will be eaten by cattle or other grazing animals, facilitating the next stage of the parasite’s life-history.

[6] In contrast, biologist Richard Alexander in Darwinism and Human Affairs cites the Shakers as an example of the opposite type of religion, namely one that, because of its teachings (namely, strict celibacy) largely died out.
In fact, however, the Shakers did not quite entirely disappear. Rather, a small rump community of Shakers, the Sabbathday Lake Shaker Village, survives to this day, albeit greatly reduced in number and influence. This is presumably because, although the Shakers did not, at least in theory, have children, they did proselytise.
In contrast, any religion which renounced both reproduction and proselytism would presumably never spread beyond its initial founder or founders, and hence never come to the attention of historians, theorists of religion, or anyone else in the first place.

[7] As noted above, this is among the reasons that ‘The Selfish Gene’ works best, in a purely literary sense, in its original incarnation. Later editions have at least two further chapters tagged on at the end, after this dramatic and optimistic literary flourish.

[8] Dawkins is here guilty of a crude dualism. Marxist neuroscientist Steven Rose, in an essay in Alas Poor Darwin (which I have reviewed here and here), has also accused Dawkins of dualism for this same passage, writing:

Such a claim to a Cartesian separation of these authors’ [Dawkins and Steven Pinker] minds from their biological constitution and inheritance seems surprising and incompatible with their claimed materialism” (Alas Poor Darwin: Arguments Against Evolutionary Psychology: p262).

Here, Rose may be right, but he is also a self-contradictory hypocrite, since his own views represent an even cruder form of dualism. Thus, in an earlier book, Not in Our Genes: Biology, Ideology, and Human Nature, co-authored with fellow-Marxists Leon Kamin and Richard Lewontin, Rose and his colleagues wrote, in a critique of sociobiological conceptions of a universal human nature:

Of course there are human universals that are in no sense trivial: humans are bipedal; they have hands that seem to be unique among animals in their capacity for sensitive manipulation and construction of objects; they are capable of speech. The fact that human adults are almost all greater than one meter and less than two meters in height has a profound effect on how they perceive and interact with their environment” (passage extracted in The Study of Human Nature: p314).

Here, it is notable that all the examples of “human universals that are in no sense trivial” given by Rose, Lewontin and Kamin are physiological, not psychological or behavioural. The implication is clear: yes, our bodies have evolved through a process of natural selection, but our brains and behaviour have somehow been exempt from this process. This, of course, is an even cruder form of dualism than that of Dawkins.

As John Tooby and Leda Cosmides observe:

This division of labor is, therefore, popular: Natural scientists deal with the nonhuman world and the “physical” side of human life, while social scientists are the custodians of human minds, human behavior, and, indeed, the entire human mental, moral, political, social, and cultural world. Thus, both social scientists and natural scientists have been enlisted in what has become a common enterprise: the resurrection of a barely disguised and archaic physical/mental, matter/spirit, nature/human dualism, in place of an integrated scientific monism” (The Adapted Mind: Evolutionary Psychology and the Generation of Culture: p49).

A more consistent and thoroughgoing critique of Dawkins’ dualism is to be found in John Gray’s excellent Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here).

[9] This quotation comes from p176 of Marek Kohn’s The Race Gallery: The Return of Racial Science (London: Vintage, 1996). Unfortunately, Kohn does not give a source for this quotation.

__________________________

References

Bowman EA (2008) Why the human penis is larger than in the great apes Archives of Sexual Behavior 37(3): 361.

Clark & Hatfield (1989) Gender differences in receptivity to sexual offers, Journal of Psychology & Human Sexuality, 2:39-53.

Dawkins (1981) In defence of selfish genes, Philosophy 56(218):556-573.

Gallup et al (2003). The human penis as a semen displacement device. Evolution and Human Behavior, 24, 277-289.

Gallup & Burch (2004). Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology, 2, 12-23.

Gaulin & Boser (1990) Dowry as Female Competition, American Anthropologist 92(4):994-1005.

Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: a preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences, 38: 749-763

Hamilton (1964a, 1964b) The genetical evolution of social behaviour I and II, Journal of Theoretical Biology 7:1-16, 17-52.

Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures, Evolution and Human Behavior 38(2): 217–226.

Kanazawa & Still (1999) Why Monogamy? Social Forces 78(1):25-50.

Manning et al (1997) Breast asymmetry and phenotypic quality in women, Ethology and Sociobiology 18(4): 223–236.

Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success, Ethology and Sociobiology 16(3): 207-219.

Puts (2010) Beauty and the beast: mechanisms of sexual selection in humans, Evolution and Human Behavior 31:157-175.

Smith (1964). Group Selection and Kin Selection, Nature 201(4924):1145-1147.